CN114565507A - Hair processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114565507A
Authority
CN
China
Prior art keywords
area
hair
region
processed
column
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210048979.4A
Other languages
Chinese (zh)
Inventor
苗锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Soyoung Technology Beijing Co Ltd
Original Assignee
Soyoung Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Soyoung Technology Beijing Co Ltd filed Critical Soyoung Technology Beijing Co Ltd
Priority to CN202210048979.4A priority Critical patent/CN114565507A/en
Publication of CN114565507A publication Critical patent/CN114565507A/en
Pending legal-status Critical Current

Classifications

    • G06T3/04

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a hair processing method and device, electronic equipment, and a storage medium. The method comprises the following steps: locating a hair region and a face region in a user image; expanding the face region toward the top of the head to obtain a protection region, and then using the protection region together with the hair region to obtain a region to be processed; and deforming the region to be processed on the user image in the direction opposite to the top-of-head direction, to obtain a user image with deformed hair. By expanding the face toward the top of the head, the hair that does not need processing is covered and only the region to be processed remains; deforming that region downward on the user image yields the hair-deformed user image. When the method is applied to an outfit-change scene, expanding the face region toward the top of the head leaves a region to be processed, so that after this region is subsequently deformed downward, no gap appears between the hair region and the clothes even after the outfit change is performed, ensuring a seamless connection between the clothes and the hair.

Description

Hair processing method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to a hair processing method, a hair processing device, electronic equipment and a storage medium.
Background
Currently, when the clothing of a user in an image is replaced (an "outfit change"), if the user has long hair draped behind the shoulders, a gap may appear between the hair region and the clothing after the outfit change is performed, degrading the result.
As shown in fig. 1, in which (a) is the original image before the outfit change and (b) is the image after the outfit change is performed, the user's hair in the original image is draped over the shoulders, and a gap appears between the hair and the clothes after the change.
Disclosure of Invention
The present invention provides a hair processing method and device, an electronic apparatus, and a storage medium, addressing the deficiencies of the prior art mentioned above.
A first aspect of the invention provides a method of hair treatment, the method comprising:
positioning a hair area and a face area in the user image;
expanding the face area towards the vertex direction to obtain a protection area;
determining a region to be processed using the hair region and the protection region;
and deforming the region to be processed on the user image in the direction opposite to the top-of-head direction, to obtain a user image with deformed hair.
In some embodiments of the present application, prior to locating the hair region and the face region in the user image, the method further comprises:
detecting facial feature keypoints of the face in the user image; and aligning the user image according to the facial feature keypoints.
In some embodiments of the present application, the method further comprises:
classifying the hair region in the user image to obtain a classification result; when the classification result is the back-draped long hair category, locating a clothing region in a clothing template image; judging, according to the clothing region and the hair region, whether the hair region needs deformation processing; and if so, performing the step of expanding the face region toward the top of the head.
In some embodiments of the present application, the judging, according to the clothing region and the hair region, whether the hair region needs deformation processing includes:
traversing, column by column, the vertical-coordinate difference of the adjacent edges between the clothing region and the hair region; taking the maximum vertical-coordinate difference among these differences; if the maximum difference is smaller than a preset value, determining that the hair region does not need deformation processing; if the maximum difference is larger than the preset value, determining that the hair region needs deformation processing; wherein the vertical-coordinate difference is the vertical coordinate of the clothing-region edge minus the vertical coordinate of the hair-region edge.
In some embodiments of the present application, prior to locating the garment region in the garment template image, the method further comprises:
detecting facial feature keypoints of the virtual face in the clothing template image; and aligning the clothing template image according to the facial feature keypoints.
In some embodiments of the present application, the deforming the region to be processed on the user image in the direction opposite to the top-of-head direction includes:
acquiring a deformation coefficient corresponding to each column of the region to be processed; and deforming the region to be processed on the user image, in the direction opposite to the top-of-head direction and according to the per-column deformation coefficients, to obtain the user image with deformed hair.
In some embodiments of the present application, the deforming the to-be-processed area on the user image in a direction opposite to the vertex direction according to the deformation coefficient corresponding to each column on the to-be-processed area includes:
acquiring the maximum and minimum vertical coordinates of each column of the region to be processed; for each column of the region to be processed, determining the column's deformed vertical coordinate from its maximum vertical coordinate, minimum vertical coordinate, and deformation coefficient; and deforming the region to be processed on the user image according to the deformed, minimum, and maximum vertical coordinates of each column.
In some embodiments of the present application, the obtaining the deformation coefficient corresponding to each column on the region to be processed includes:
acquiring the maximum and minimum vertical coordinates of each column of the hair region, and the minimum vertical coordinate of each column of the clothing region; for each column of the hair region, determining the column's deformation coefficient from its maximum and minimum vertical coordinates on the hair region and its minimum vertical coordinate on the clothing region; wherein the columns of the region to be processed coincide with columns of the hair region.
In some embodiments of the present application, the deforming the to-be-processed area on the user image according to the deformed ordinate, the minimum ordinate, and the maximum ordinate of each column on the to-be-processed area includes:
generating a first map matrix and a second map matrix from the width and the height of the user image, respectively, such that the first and second map matrices represent the mapping of the horizontal and vertical coordinates of the user image; performing linear interpolation according to the deformed, minimum, and maximum vertical coordinates of each column of the region to be processed to update the second map matrix; and deforming the region to be processed on the user image using the first map matrix and the updated second map matrix.
In some embodiments of the present application, the method further comprises:
and overlaying the clothing region on the hair-deformed user image to obtain an outfit-change image.
In some embodiments of the present application, the determining a region to be processed using the hair region and the protection region includes:
performing a set subtraction between the hair region and the protection region to obtain the region to be processed.
A second aspect of the invention provides a hair processing device comprising:
the positioning module is used for positioning a hair area and a face area in the user image;
the acquisition module is used for expanding the face area towards the top of the head to obtain a protection area and determining an area to be processed by using the hair area and the protection area;
and the deformation processing module is used for carrying out deformation processing on the area to be processed on the user image along the direction opposite to the vertex direction to obtain the user image with deformed hair.
A third aspect of the present invention proposes an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method according to the first aspect when executing the program.
A fourth aspect of the present invention proposes a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method according to the first aspect as described above.
Based on the hair processing method of the first aspect and the device of the second aspect, the invention has at least the following advantages and beneficial effects:
after the hair region and the face region are located, the face region is expanded toward the top of the head to obtain a protection region. The protection region covers the hair that does not need processing, so the region to be processed is obtained from the located hair region and the protection region; the region to be processed on the user image is then deformed in the direction opposite to the top of the head, yielding a user image with deformed hair.
Further, when the method is applied to an outfit-change scene, expanding the face region toward the top of the head covers the hair that does not need processing and leaves the region to be processed, so that after this region is subsequently deformed downward, no gap appears between the hair region and the clothes even after the outfit change is performed, ensuring a seamless connection between the clothes and the hair.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic comparison before and after an outfit change in the prior art;
FIG. 2A is a flow chart illustrating an embodiment of a method of hair treatment according to an exemplary embodiment of the present invention;
FIG. 2B is a schematic diagram of a hair region and a face region obtained by segmentation according to the embodiment shown in FIG. 2A;
FIG. 2C is a schematic diagram of a protection area obtained after the face area shown in FIG. 2B is expanded towards the vertex direction;
FIG. 2D is a schematic view of the area to be treated obtained according to FIGS. 2B and 2C;
FIG. 2E is a schematic diagram of a first map matrix and a second map matrix according to the embodiment of the invention shown in FIG. 2A;
FIG. 2F is a schematic comparison before and after an outfit change according to the embodiment shown in FIG. 2A;
FIG. 3 is a flow chart illustrating a specific implementation of a hair processing method according to an exemplary embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the structure of a hair processing device according to an exemplary embodiment of the present invention;
FIG. 5 is a diagram illustrating a hardware configuration of an electronic device according to an exemplary embodiment of the present invention;
fig. 6 is a schematic diagram illustrating a structure of a storage medium according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present invention. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
To avoid the problem that a gap appears between the hair and the clothes after the outfit change, either the clothes or the hair must be stretched and deformed. Experiments show that stretching the clothes does eliminate the gap after the outfit change, but the stretched clothes no longer fit the user's figure and look noticeably unnatural, so the outfit-change result is poor.
Based on this, the present application provides a hair processing method: a hair region and a face region in a user image are located; the face region is expanded toward the top of the head to obtain a protection region; a region to be processed is then determined using the hair region and the protection region; and the region to be processed on the user image is deformed in the direction opposite to the top of the head, yielding a user image with deformed hair.
The technical effects that can be achieved based on the above description are:
after the hair region and the face region are located, the face region is expanded toward the top of the head to obtain a protection region. The protection region covers the hair that does not need processing, so the region to be processed is obtained from the located hair region and the protection region; the region to be processed on the user image is then deformed in the direction opposite to the top of the head, yielding a user image with deformed hair.
Further, when the method is applied to an outfit-change scene, expanding the face region toward the top of the head covers the hair that does not need processing and leaves the region to be processed, so that after this region is subsequently deformed downward, no gap appears between the hair region and the clothes even after the outfit change is performed, ensuring a seamless connection between the clothes and the hair.
It should be added that the hair processing scheme provided herein can be applied to any scene requiring hair deformation, not only the outfit-change scene. For example, in a hairdressing scene, the scheme can virtually lengthen the user's hair so that the user can preview the effect of longer hair.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
The first embodiment is as follows:
fig. 2A is a flow diagram illustrating an embodiment of a method of hair treatment according to an exemplary embodiment of the present invention, as shown in fig. 2A, the method of hair treatment comprising the steps of:
step 201: the hair region and the face region in the user image are located.
The user image is a picture uploaded by the user, and may be any form of photo, such as a selfie or an ID photo.
Before step 201 is executed, the human body in the user image needs to be aligned. A specific implementation is: detecting facial feature keypoints of the face in the user image, and aligning the user image according to these keypoints so that the human body in the image is upright.
Illustratively, the facial feature keypoints may be the eyes, mouth, ears, nose, eyebrows, and so on.
In one possible implementation, the hair region and the face region are located by performing semantic segmentation on the hair and the face in the user image.
The hair region consists of the pixels labeled as hair; the white area in diagram (a) of fig. 2B is the hair region. The face region consists of the pixels labeled as face, including the whole face and the area enclosed by the auricles; the white area in diagram (b) of fig. 2B is the face region.
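As a minimal sketch of this step, assuming the segmentation model outputs a per-pixel label map (the label values 1 and 2 below are hypothetical; the patent does not specify a label scheme), the two regions can be derived as boolean masks:

```python
import numpy as np

# Hypothetical label values for the segmentation output
LABEL_HAIR, LABEL_FACE = 1, 2

def split_masks(label_map: np.ndarray):
    """Derive binary hair and face masks from a per-pixel label map."""
    hair_mask = (label_map == LABEL_HAIR)
    face_mask = (label_map == LABEL_FACE)
    return hair_mask, face_mask

labels = np.array([[0, 1, 1],
                   [2, 2, 1],
                   [0, 2, 0]])
hair, face = split_masks(labels)
print(hair.sum(), face.sum())  # 3 3
```

The later steps (expansion, set subtraction, per-column deformation) can then operate on these masks directly.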
Step 202: and expanding the human face area towards the vertex direction to obtain a protection area.
By expanding the face region toward the top of the head, the protection region is made to cover the top-of-head area that does not need processing, as shown in fig. 2C.
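A minimal way to sketch this expansion, assuming a binary face mask and a fixed expansion amount k (the patent does not fix how far to expand, so k is an assumption), is to OR the mask with upward-shifted copies of itself:

```python
import numpy as np

def expand_up(face_mask: np.ndarray, k: int) -> np.ndarray:
    """Expand a binary face mask upward (toward the top of the head) by k rows.
    The amount k is a hypothetical parameter, not fixed by the patent."""
    protect = face_mask.copy()
    for shift in range(1, k + 1):
        # OR in a copy of the mask shifted `shift` rows toward row 0
        protect[:-shift] |= face_mask[shift:]
    return protect

m = np.zeros((5, 3), dtype=bool)
m[3:5, 1] = True          # face occupies rows 3-4 of the middle column
p = expand_up(m, 2)
print(p[:, 1])            # rows 1-4 of the column are now protected
```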
It should be noted, taking the outfit-change scene as an example: if the user's hair in the image is short (i.e. does not pass the shoulders) or is long hair draped entirely in front (both sides in front), the original relationship between hair and clothes is preserved even after the outfit change, because short hair leaves a gap to the garment by itself and front-draped long hair lies on top of the garment; the result is unaffected and no deformation is needed. If, however, the hair is long and draped behind, a gap appears after the outfit change on whichever side is back-draped (one side for single-side back-draped hair, both sides for double-side), degrading the result. Deformation is therefore needed whenever at least one side of the hair is draped behind.
Based on this, before step 202 is executed, it is determined whether the hair needs deformation. In one possible implementation, the hair region in the user image is classified to obtain a classification result; when the result is the back-draped long hair category, the clothing region in the clothing template image is located, and whether the hair region needs deformation is judged from the clothing region and the hair region; if so, step 202 is executed.
The classification result may include short hair, front-draped long hair, and back-draped long hair. The back-draped category specifically includes left-side back-draped, right-side back-draped, and both-sides back-draped hair.
Alternatively, the user image may be input into a pre-trained classification model, which detects the hair category of the user image.
Further, in the outfit-change scene, the clothing in the template may sit above the clothing in the user image; in that case the template clothing covers part of the hair after the outfit change and no gap appears, so no deformation is needed. It is therefore necessary to further judge, from the clothing region in the clothing template image and the hair region, whether the hair region needs deformation.
In a possible implementation, the vertical-coordinate difference of the adjacent edges between the clothing region and the hair region is traversed column by column, and the maximum difference is taken. If the maximum difference is smaller than a preset value, the clothing will cover part of the hair and no deformation of the hair region is needed; if it is larger than the preset value, the hair region needs deformation.
The vertical coordinates of the clothing region are taken in a rectangular coordinate system whose origin is the upper-left corner of the clothing template image, and those of the hair region in a system whose origin is the upper-left corner of the user image. Since the clothing template image and the user image have the same size, the vertical-coordinate difference is simply the vertical coordinate of the clothing-region edge minus that of the hair-region edge.
For example, for a given column: if the ordinate of the clothing-region edge is smaller than that of the hair-region edge, the difference is below 0 and the clothing pixels in that column will cover the hair; if it is larger, the difference is above 0 and they will not.
It follows that the preset value may be set to 0.
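The column-wise gap check described above can be sketched as follows; the per-column edge ordinates are hypothetical inputs that would come from the located clothing and hair regions:

```python
import numpy as np

def needs_deformation(cloth_edge_y, hair_edge_y, threshold=0):
    """Column-wise check: does any column leave a gap between the clothing's
    top edge and the hair's bottom edge? Ordinates grow downward (image
    coordinates); threshold=0 follows the preset value suggested above."""
    diffs = np.asarray(cloth_edge_y) - np.asarray(hair_edge_y)
    return bool(diffs.max() > threshold)

# Hypothetical per-column edge ordinates: in the third column the clothing
# edge lies 5 px below the hair end, so a gap would appear after the swap.
print(needs_deformation([50, 52, 60], [55, 53, 55]))  # True
print(needs_deformation([50, 52, 53], [55, 53, 55]))  # False
```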
In a possible implementation manner, before the clothes area in the clothes template image is located, the key points of the five sense organs of the virtual human face in the clothes template image can be detected, and the clothes template image is aligned according to the detected key points of the five sense organs, so that the clothes template image and the user image are aligned consistently.
Step 203: the hair region and the protective region are used to define a region to be treated.
Since the protection region covers the hair that does not need processing, the region to be processed can be obtained by a set subtraction between the complete hair region and the protection region.
In an alternative embodiment, a set subtraction is performed between the hair region and the protection region to obtain the region to be processed. Fig. 2D shows the region to be processed obtained by subtracting the region of fig. 2C from diagram (a) of fig. 2B, i.e. the hair region located below the auricles.
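On binary masks, the set subtraction reduces to element-wise boolean logic; a minimal sketch:

```python
import numpy as np

def region_to_process(hair_mask: np.ndarray, protect_mask: np.ndarray) -> np.ndarray:
    """Set subtraction on binary masks: keep the hair pixels that are
    not covered by the protection region."""
    return hair_mask & ~protect_mask

# Toy 2x2 masks (illustrative data)
hair = np.array([[True, True], [True, False]])
protect = np.array([[True, False], [False, False]])
print(region_to_process(hair, protect))
```

Only the hair pixels outside the protection region survive, which is exactly the region deformed in step 204.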
Step 204: and carrying out deformation processing on the area to be processed on the user image along the direction opposite to the direction of the top of the head to obtain the user image with deformed hair.
In a possible implementation, a deformation coefficient is obtained for each column of the region to be processed, and the region to be processed on the user image is then deformed, column by column according to these coefficients, in the direction opposite to the top of the head, yielding the user image with deformed hair.
Deforming in the direction opposite to the top of the head means deforming the region to be processed downward. For each column of the hair region, the larger the deformation coefficient, the longer the hair in that column becomes after processing; the smaller the coefficient, the shorter it becomes.
In an optional embodiment, the deformation coefficient of each column of the region to be processed is obtained as follows: acquire the maximum and minimum ordinates of each column of the hair region and the minimum ordinate of each column of the clothing region; then, for each column of the hair region, determine its deformation coefficient from the column's maximum and minimum ordinates on the hair region and its minimum ordinate on the clothing region.
The columns of the region to be processed coincide with columns of the hair region, so the per-column deformation coefficients of the hair region are also those of the region to be processed. Put differently, since the region to be processed is a subset of the hair region, every column of the region to be processed is also a column of the hair region.
Optionally, the deformation coefficient of a given column is calculated as:
scale = (min_y_cloth - min_y_hair) / (max_y_hair - min_y_hair)    (equation 1)
In equation 1 above, scale is the deformation coefficient, min_y_cloth is the minimum ordinate of the column on the clothing region (i.e. the ordinate of the clothing's shoulder edge), and max_y_hair and min_y_hair are the maximum and minimum ordinates of the column on the hair region (the hair-end and top-of-head ordinates, respectively).
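Equation 1 as a small helper; the sample ordinates below are hypothetical:

```python
def deformation_scale(min_y_cloth: float, min_y_hair: float, max_y_hair: float) -> float:
    """Equation 1: ratio of the top-of-head-to-shoulder-edge distance
    to the top-of-head-to-hair-end distance for one column."""
    return (min_y_cloth - min_y_hair) / (max_y_hair - min_y_hair)

# Hypothetical column: head top at y=100, hair ends at y=400, shoulder edge at y=460
print(deformation_scale(460, 100, 400))  # 1.2 -> hair in this column is lengthened
```

A scale above 1 lengthens the column down to the shoulder edge; a scale below 1 would shorten it.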
In an optional embodiment, the region to be processed is deformed in the direction opposite to the top of the head as follows: acquire the maximum and minimum ordinates of each column of the region to be processed; for each column, determine its deformed ordinate from the column's maximum ordinate, minimum ordinate, and deformation coefficient; finally, deform the region to be processed on the user image according to the deformed, minimum, and maximum ordinates of each column.
The deformed ordinate of a given column is calculated as:
end_h_new = start_h + (end_h - start_h) * scale    (equation 2)
In equation 2 above, end_h_new is the deformed ordinate of the column, start_h its minimum ordinate, end_h its maximum ordinate, and scale its deformation coefficient.
That is, if the ordinate range of a column of the region to be processed is (start_h, end_h), after processing it becomes (start_h, end_h_new).
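Equation 2, correspondingly, as a helper with a hypothetical column:

```python
def deformed_end(start_h: float, end_h: float, scale: float) -> float:
    """Equation 2: new bottom ordinate of a column after scaling its span."""
    return start_h + (end_h - start_h) * scale

# Hypothetical column spanning y in (200, 400) with scale 1.2
print(deformed_end(200, 400, 1.2))  # 440.0 -> the column now spans (200, 440)
```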
In a possible implementation, the deformation according to the deformed, minimum, and maximum ordinates of each column proceeds as follows: a first map matrix and a second map matrix are generated from the width and height of the user image, the two matrices representing the mapping of the abscissa and the ordinate of the user image respectively; linear interpolation is then performed per column, using the deformed, minimum, and maximum ordinates, to update the second map matrix; finally, the region to be processed on the user image is deformed using the first map matrix and the updated second map matrix.
The first map matrix need not be updated, because the abscissa (column index) of each pixel is unchanged.
In a specific implementation, the first and second map matrices can be initialized for OpenCV's remap coordinate mapping as follows:
map_x = np.zeros((h, w), dtype=np.float32)
map_y = np.zeros((h, w), dtype=np.float32)
map_x[:, :] = np.arange(w)
map_y[:, :] = np.arange(h)[:, np.newaxis]
The first two lines initialize two all-zero map matrices of size h × w, where h is the height of the user image and w its width; each element of the map matrices is a floating-point value.
As shown in fig. 2E, every row of the first map matrix map_x takes the values 0 to w-1, and every column of the second map matrix map_y takes the values 0 to h-1.
Optionally, the linear interpolation function for updating the second map matrix map _ y is as follows:
map_y[start_h:end_h_new, col] = np.interp(np.arange(start_h, end_h_new), [start_h, end_h_new], [start_h, end_h])
where end_h_new is the deformed ordinate, start_h the minimum ordinate, and end_h the maximum ordinate of column col.
As can be seen from the above linear interpolation function, for each column of the region to be processed, the corresponding ordinate values in the second map matrix are updated.
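The effect of this update can be seen on a small example. In the sketch below the column index and ordinates are hypothetical values: a column whose hair spans rows start_h..end_h is compressed so that it ends at end_h_new, by making the shorter output range sample from the longer source range:

```python
import numpy as np

h, w = 100, 10  # hypothetical image size
# Identity second map matrix: map_y[i, j] == i.
map_y = np.tile(np.arange(h, dtype=np.float32)[:, np.newaxis], (1, w))

col = 3          # a column of the region to be processed (hypothetical)
start_h = 20     # minimum ordinate of the column
end_h = 80       # maximum ordinate of the column
end_h_new = 60   # deformed ordinate (hair shrinks upward by 20 rows)

# Output rows start_h..end_h_new now read from source rows
# start_h..end_h, compressing the hair vertically.
map_y[start_h:end_h_new, col] = np.interp(
    np.arange(start_h, end_h_new), [start_h, end_h_new], [start_h, end_h])
```

At the top of the range the mapping is unchanged (row 20 still reads from row 20), while rows just below end_h_new read from near end_h, so the 60-row hair span is squeezed into 40 output rows; all other columns and rows keep the identity mapping.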
It should be noted that, after the hair-processed user image is obtained, the reloading image is obtained by overlaying the clothing region on the hair-deformed user image.
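The overlay itself is a per-pixel copy under the clothing mask. A minimal sketch with hypothetical toy images (the real flow overlays by the coordinate correspondence established during alignment, which for images of equal size reduces to same-position copying):

```python
import numpy as np

# Hypothetical 4x4 toy images, already aligned to the same size.
user_img = np.full((4, 4, 3), 100, dtype=np.uint8)   # hair-deformed user image
template = np.full((4, 4, 3), 200, dtype=np.uint8)   # clothes template image
clothes_mask = np.zeros((4, 4), dtype=bool)
clothes_mask[2:, :] = True                           # lower half is clothing

# Copy the clothing pixels onto the user image at the same coordinates.
result = user_img.copy()
result[clothes_mask] = template[clothes_mask]
```

Because the hair has already been compressed upward, the copied clothing pixels meet the hair region directly, with no background gap between them.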
As shown in fig. 2F, drawing (a) is the original image before the reloading and drawing (c) is the image after the reloading is performed. Before the reloading, the user in the original image has long hair draped over the back; after the processing of steps 201 to 204 above, the clothing region of the clothing template image is overlaid on the image according to the coordinate correspondence to implement the reloading. As can be seen from drawing (c), no gap appears between the hair and the clothing.
This completes the hair processing flow shown in fig. 2A. After the hair region and the face region are located, the face region is expanded in the vertex direction to obtain a protection region, which covers the hair that does not need to be processed; the region to be processed is then obtained from the located hair region and the protection region, and the region to be processed on the user image is deformed in the direction opposite to the vertex direction, yielding the user image with deformed hair.
Further, when the method is applied to a reloading scene, expanding the face region toward the top of the head covers the hair that does not need to be processed and leaves the region to be processed. After the region to be processed is subsequently deformed in the downward direction, no gap appears between the hair region and the clothes even when the reloading is performed, so a seamless connection between the clothes and the hair after the reloading can be ensured.
Example two:
Fig. 3 is a flowchart illustrating a specific implementation of a hair processing method according to an exemplary embodiment of the present invention, based on the embodiment illustrated in fig. 2A. As shown in fig. 3, the specific flow of hair processing in the reloading scenario includes:
A user picture and a clothes template picture uploaded by the user are received and aligned, so that the aligned pictures are of the same size and the human bodies in the two pictures are aligned. The hair in the user picture is classified to obtain a classification result, and the clothes region in the clothes template picture is located. If the classification result is short hair, or the hair is covered, the clothes region is directly overlaid on the user picture according to the coordinate correspondence to realize the reloading, without deformation processing. If the classification result is long hair draped over the back, the hair region and the face region in the user picture are located, and whether the hair region needs deformation processing is judged according to the clothes region and the hair region. When deformation is judged to be needed, the face region is expanded toward the top of the head to obtain a protection region, the region to be processed is determined from the hair region and the protection region, and the region to be processed on the user picture is deformed in the direction opposite to the vertex direction to obtain the hair-deformed user picture. Finally, the clothes region is overlaid on the hair-processed user picture according to the coordinate correspondence to realize the reloading.
Based on the above description, in the reloading scene, the hair region in the user picture is first classified. When the classification result is long hair draped over the back, whether the hair region needs deformation is judged according to the clothes region in the clothes template picture and the hair region. If deformation is needed, the face region is expanded toward the top of the head to cover the hair that does not need to be processed, yielding a protection region; the region to be processed is then obtained from the protection region and the hair region, and is deformed on the user picture in the direction opposite to the vertex direction to obtain the hair-deformed user picture. The clothes region is further overlaid on the user picture to realize the reloading. Since the hair region has already been deformed, no gap appears between the hair and the clothes after the reloading, and the reloading effect is better.
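The gap check that decides whether deformation is needed (traversing the column-wise ordinate differences between the adjacent edges of the clothes region and the hair region, as in the flow above) can be sketched as follows. The function name, the boolean-mask representation and the threshold are assumptions for illustration, not the patent's literal implementation:

```python
import numpy as np

def needs_deformation(hair_mask, clothes_mask, threshold=5):
    """For each column present in both masks, take the top edge of the
    clothes (minimum ordinate) minus the bottom edge of the hair
    (maximum ordinate); deformation is needed when the largest such
    gap is not below the threshold (a hypothetical preset value)."""
    diffs = []
    for col in range(hair_mask.shape[1]):
        hair_rows = np.flatnonzero(hair_mask[:, col])
        clothes_rows = np.flatnonzero(clothes_mask[:, col])
        if hair_rows.size and clothes_rows.size:
            diffs.append(clothes_rows.min() - hair_rows.max())
    return bool(diffs and max(diffs) >= threshold)
```

With hair occupying rows 0-2 and clothes starting at row 10, the maximum gap is 8 and deformation is triggered; if the clothes start at row 4, the gap of 2 stays below the threshold and the clothes can be overlaid directly.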
In accordance with embodiments of the foregoing hair treatment method, embodiments of a hair treatment device are also provided.
Fig. 4 is a schematic structural diagram of a hair treatment device according to an exemplary embodiment of the present invention; the device is used to perform the hair treatment method provided in any of the above embodiments. As shown in fig. 4, the hair treatment device includes:
a positioning module 610, configured to position a hair region and a face region in the user image;
an obtaining module 620, configured to expand the face region in a direction of a vertex to obtain a protection region, and determine a region to be processed by using the hair region and the protection region;
a deformation processing module 630, configured to perform deformation processing on the region to be processed on the user image in a direction opposite to the vertex direction, so as to obtain a user image with deformed hair.
The implementation process of the functions and actions of each unit in the above device is described in detail in the implementation process of the corresponding steps in the above method, and is not repeated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the invention also provides electronic equipment corresponding to the hair treatment method provided by the embodiment, so as to execute the hair treatment method.
Fig. 5 is a hardware block diagram of an electronic device according to an exemplary embodiment of the present invention. The electronic device includes: a communication interface 601, a processor 602, a memory 603, and a bus 604; the communication interface 601, the processor 602 and the memory 603 communicate with each other via the bus 604. The processor 602 may perform the hair treatment method described above by reading and executing, from the memory 603, machine-executable instructions corresponding to the control logic of the hair treatment method; the details of the method are described in the above embodiments and are not repeated here.
The memory 603 referred to in this application may be any electronic, magnetic, optical, or other physical storage device that can contain stored information, such as executable instructions and data. Specifically, the memory 603 may be a RAM (Random Access Memory), a flash memory, a storage drive (e.g., a hard disk drive), any type of storage disk (e.g., an optical disc or a DVD), a similar storage medium, or a combination thereof. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 601 (which may be wired or wireless), and the internet, a wide area network, a local area network, a metropolitan area network, and the like can be used.
Bus 604 can be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 603 is used for storing a program, and the processor 602 executes the program after receiving the execution instruction.
The processor 602 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by hardware integrated logic circuits or software-form instructions in the processor 602. The processor 602 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
The electronic equipment provided by the embodiment of the application and the hair treatment method provided by the embodiment of the application have the same beneficial effects as the method adopted, operated or realized by the electronic equipment.
Referring to fig. 6, the computer-readable storage medium is shown as an optical disc 30, on which a computer program (i.e., a program product) is stored; when executed by a processor, the computer program performs the hair treatment method provided by any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memories (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above-mentioned embodiments of the present application and the hair treatment method provided by the embodiments of the present application have the same beneficial effects as the method adopted, executed or implemented by the application program stored in the computer-readable storage medium.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (14)

1. A method of hair treatment, comprising:
positioning a hair area and a face area in the user image;
expanding the face area towards the vertex direction to obtain a protection area;
determining a region to be treated using the hair region and the protective region;
and carrying out deformation processing on the area to be processed on the user image along the direction opposite to the direction of the top of the head to obtain the user image with deformed hair.
2. The method of claim 1, further comprising:
classifying the hair area in the user image to obtain a classification result;
when the classification result is the category of long hair draped over the back, positioning a clothing area in the clothing template image;
judging whether the hair area needs to be subjected to deformation treatment or not according to the clothes area and the hair area;
and if so, executing a process of expanding the face area towards the vertex direction.
3. The method according to claim 2, wherein the determining whether the hair region needs to be deformed according to the clothes region and the hair region comprises:
traversing column-by-column a difference in vertical coordinates of adjacent edges between the garment region and the hair region;
acquiring a maximum vertical coordinate difference from the vertical coordinate differences;
if the maximum vertical coordinate difference is smaller than a preset value, determining that deformation processing is not needed to be carried out on the hair area;
if the maximum vertical coordinate difference is larger than the preset value, determining that the hair area needs to be deformed;
wherein the difference in the ordinates is the ordinate of the edge of the garment region minus the ordinate of the edge of the hair region.
4. The method according to claim 2, wherein the deforming the region to be processed on the user image in a direction opposite to the overhead direction comprises:
acquiring a deformation coefficient corresponding to each column on the region to be processed;
and according to the deformation coefficient corresponding to each column on the area to be processed, carrying out deformation processing on the area to be processed on the user image along the direction opposite to the vertex direction to obtain the user image with deformed hair.
5. The method according to claim 4, wherein the deforming the to-be-processed region on the user image in a direction opposite to the vertex direction according to the deformation coefficient corresponding to each column on the to-be-processed region comprises:
acquiring the maximum vertical coordinate and the minimum vertical coordinate of each column on the area to be processed;
for each column on the region to be processed, determining a deformed vertical coordinate of the column according to the maximum vertical coordinate, the minimum vertical coordinate and the deformation coefficient of the column;
and carrying out deformation processing on the area to be processed on the user image according to the deformed ordinate, the minimum ordinate and the maximum ordinate of each column on the area to be processed.
6. The method according to claim 4, wherein the obtaining of the deformation coefficient corresponding to each column on the region to be processed comprises:
acquiring the maximum ordinate and the minimum ordinate of each column on the hair area and acquiring the minimum ordinate of each column on the clothes area;
for each column on the hair area, determining a deformation coefficient corresponding to the column by using the maximum ordinate and the minimum ordinate of the column on the hair area and the minimum ordinate of the column on the clothes area;
wherein the column of the area to be treated is the same as the column of the hair area.
7. The method according to claim 6, wherein the deforming the to-be-processed area on the user image according to the deformed ordinate, the minimum ordinate and the maximum ordinate of each column on the to-be-processed area comprises:
generating a first map matrix and a second map matrix by using the width and the height of the user image respectively, so that the mapping relation between the horizontal coordinate and the vertical coordinate of the user image is represented by the first map matrix and the second map matrix;
performing linear interpolation according to the deformed ordinate, the minimum ordinate and the maximum ordinate of each column on the region to be processed so as to update the second map matrix;
and performing deformation processing on the region to be processed on the user image by using the first map matrix and the updated second map matrix.
8. The method of claim 2, further comprising:
and covering the clothes area on the user image with deformed hair to obtain a reloading image.
9. The method of claim 1, wherein the determining a region to be treated using the hair region and the protection region comprises:
performing a set subtraction operation between the hair area and the protection area to obtain the area to be treated.
10. The method of claim 1, wherein prior to locating the hair region and the face region in the user image, the method further comprises:
detecting key points of five sense organs of a face in a user image;
and carrying out alignment processing on the user image according to the key point of the five sense organs.
11. The method of claim 2, wherein prior to locating the garment region in the garment template image, the method further comprises:
detecting key points of five sense organs of a virtual face in the clothing template image;
and carrying out alignment treatment on the clothing template image according to the key points of the five sense organs.
12. A hair treatment device, characterized in that it comprises:
the positioning module is used for positioning a hair area and a face area in the user image;
the acquisition module is used for expanding the face area towards the top of the head to obtain a protection area and determining an area to be processed by using the hair area and the protection area;
and the deformation processing module is used for carrying out deformation processing on the area to be processed on the user image along the direction opposite to the vertex direction to obtain the user image with deformed hair.
13. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1-11 are implemented when the processor executes the program.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
CN202210048979.4A 2022-01-17 2022-01-17 Hair processing method and device, electronic equipment and storage medium Pending CN114565507A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210048979.4A CN114565507A (en) 2022-01-17 2022-01-17 Hair processing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114565507A true CN114565507A (en) 2022-05-31

Family

ID=81711646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210048979.4A Pending CN114565507A (en) 2022-01-17 2022-01-17 Hair processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114565507A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510500A (en) * 2018-05-14 2018-09-07 深圳市云之梦科技有限公司 A kind of hair figure layer process method and system of the virtual figure image based on face complexion detection
CN108629781A (en) * 2018-04-24 2018-10-09 成都品果科技有限公司 A kind of hair method for drafting
CN109886144A (en) * 2019-01-29 2019-06-14 深圳市云之梦科技有限公司 Virtual examination forwarding method, device, computer equipment and storage medium
WO2020019913A1 (en) * 2018-07-25 2020-01-30 腾讯科技(深圳)有限公司 Face image processing method and device, and storage medium
CN112288665A (en) * 2020-09-30 2021-01-29 北京大米科技有限公司 Image fusion method and device, storage medium and electronic equipment
CN113034349A (en) * 2021-03-24 2021-06-25 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
US11403874B2 (en) Virtual avatar generation method and apparatus for generating virtual avatar including user selected face property, and storage medium
US9916494B2 (en) Positioning feature points of human face edge
CN107507217B (en) Method and device for making certificate photo and storage medium
CN112419170B (en) Training method of shielding detection model and beautifying processing method of face image
CN111178337A (en) Human face key point data enhancement method, device and system and model training method
CN112884637A (en) Special effect generation method, device, equipment and storage medium
US20240004477A1 (en) Keyboard perspective method and apparatus for virtual reality device, and virtual reality device
CN114283052A (en) Method and device for cosmetic transfer and training of cosmetic transfer network
CN107153806B (en) Face detection method and device
CN114565507A (en) Hair processing method and device, electronic equipment and storage medium
JP2022153857A (en) Image processing apparatus, image processing method, moving device, and computer program
US20220207917A1 (en) Facial expression image processing method and apparatus, and electronic device
JP6547244B2 (en) Operation processing apparatus, operation processing method and program
CN113610864B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN113239867B (en) Mask area self-adaptive enhancement-based illumination change face recognition method
WO2021237736A1 (en) Image processing method, apparatus and system, and computer-readable storage medium
CN110910478B (en) GIF map generation method and device, electronic equipment and storage medium
CN110781739B (en) Method, device, computer equipment and storage medium for extracting pedestrian characteristics
CN108109107B (en) Video data processing method and device and computing equipment
CN114093011B (en) Hair classification method, device, equipment and storage medium
KR20210091033A (en) Electronic device for estimating object information and generating virtual object and method for operating the same
CN113763233A (en) Image processing method, server and photographing device
CN112819937A (en) Self-adaptive multi-object light field three-dimensional reconstruction method, device and equipment
CN112949571A (en) Method for identifying age, and training method and device of age identification model
CN108171719B (en) Video crossing processing method and device based on self-adaptive tracking frame segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220531