CN111507791A - Image-based hair style transformation method and device, computer equipment and storage medium


Info

Publication number
CN111507791A
Authority
CN
China
Prior art keywords
image
hair style
face
target face
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910101393.8A
Other languages
Chinese (zh)
Inventor
谢婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201910101393.8A
Publication of CN111507791A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0631 Item recommendations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Accounting & Taxation (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Finance (AREA)
  • Image Processing (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)
  • Computer Vision & Pattern Recognition (AREA)

Abstract

The invention discloses an image-based hair style transformation method and device, computer equipment and a storage medium. The method comprises the following steps: acquiring a target face image; acquiring, from a preset first mapping list and according to the face features of the target face in the target face image, a first hair style label matched with those face features; taking the first hair style label as a limiting condition and acquiring, from a preset hair style image database, a hair style image meeting that condition as the first hair style image matched with the target face image; and splicing and synthesizing the acquired first hair style image with the target face image to complete the hair style transformation operation on the target face image. According to the method and the device, a hair style image matched with the face image is quickly acquired from the preset hair style image database according to the face features of the target face, and an effect image is generated, thereby correctly guiding the user to select a hair style suited to the user.

Description

Image-based hair style transformation method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of image processing, and in particular to an image-based hair style transformation method and device, computer equipment and a storage medium.
Background
With the continuous improvement of living conditions and quality of life, people configure different hair styles for different outfits and occasions. At present, when people want to change their hair style, they usually either wear a wig or follow a hairstylist's advice and have their hair restyled. Both methods are time-consuming, labor-intensive and cumbersome. Moreover, because every face is unique, the hair styles suited to each face differ; people must try on or actually cut a hair style to see how well it matches them, and may find after trying or cutting that the style does not suit them and must be tried or cut again. Furthermore, once a person has chosen the wrong hair style, changing it again is troublesome and difficult, and time is easily wasted. It is therefore important to guide people in choosing the correct hair style for themselves.
Disclosure of Invention
The object of the present invention is to solve at least one of the above technical problems, in particular how to guide people to choose the right hair style for themselves.
In order to solve the technical problem, the invention provides an image-based hairstyle transformation method, which comprises the following steps:
acquiring a target face image, wherein the target face image comprises the face features of a target face;
acquiring a first hair style label matched with the face feature from a preset first mapping list according to the face feature, wherein the first mapping list is a mapping relation table between the face feature and the hair style label;
acquiring a hair style image meeting the limiting condition in a preset hair style image database by taking the first hair style label as the limiting condition to serve as a first hair style image matched with the target face image;
and splicing and synthesizing the target face image and the first hair style image to finish the hair style transformation operation of the target face image.
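Sketched end to end, the four steps above look as follows. Every helper here (the feature extractor, the mapping list, the database and the compositor) is a hypothetical stand-in for the components the text describes, not an implementation from the patent.

```python
# Minimal end-to-end sketch of the four-step method; all helpers are
# illustrative stand-ins.
def transform_hair_style(face_image, mapping_list, hair_db, extract, composite):
    features = extract(face_image)                            # step 1: face features
    label = mapping_list[features]                            # step 2: first mapping list
    candidates = [h for h in hair_db if h["label"] == label]  # step 3: limiting condition
    return composite(face_image, candidates[0])               # step 4: splice and synthesize

result = transform_hair_style(
    face_image="face.png",
    mapping_list={"round": "side-bangs"},
    hair_db=[{"label": "side-bangs", "image": "style1.png"}],
    extract=lambda img: "round",                  # stubbed feature extraction
    composite=lambda face, hair: (face, hair["image"]),  # stubbed compositor
)
```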
Optionally, before the step of acquiring, in a preset hair style image database and with the first hair style label as a limiting condition, a hair style image meeting the limiting condition as a first hair style image matched with the target face image, the method further includes:
acquiring a hair style image;
and classifying the hair style image according to a preset classification rule and storing the hair style image into a preset hair style image database.
Optionally, the step of classifying the hair style image according to a preset classification rule and storing the hair style image in a preset hair style image database includes:
inputting the hair style image into a preset hair style information matching model to identify hair style type information matched with the hair style image;
performing hair style label configuration on the hair style image according to the hair style type information, and associating the acquired hair style label with the hair style image;
and classifying and storing the hair style image according to the hair style label to generate a preset hair style image database.
Optionally, after the step of acquiring, in a preset hair style image database and with the first hair style label as a limiting condition, a hair style image meeting the limiting condition as a first hair style image matched with the target face image, the method further includes:
identifying the skin color characteristic of a target face in the target face image;
acquiring a hair style color matched with the skin color feature from a preset second mapping list according to the skin color feature, wherein the second mapping list is a mapping relation table between the skin color feature and the hair style color;
and performing color rendering on the first hair style image according to the hair style color to obtain a second hair style image which accords with the corresponding skin color characteristic of the target face in the target face image.
Optionally, the step of splicing and synthesizing the face image and the first hair style image to complete a hair style transformation operation on the face image includes:
acquiring a face contour of a target face in the target face image;
performing transparency mixing processing on the face contour to obtain a first face contour to be subjected to hairstyle transformation operation;
and carrying out splicing operation on the first face contour and the first hair style image so as to generate a spliced hair style effect image.
Optionally, the face contour of the target face in the target face image includes a two-dimensional face contour and a three-dimensional face contour, where the step of performing a splicing operation on the first face contour and the first hair style image to generate a hair style effect image after the splicing operation includes:
when the face contour of the target face in the target face image is a two-dimensional face contour, performing a two-dimensional splicing operation on the first face contour and the first hair style image to generate a first hair style effect image displayed on one side of the two-dimensional face contour;
and when the face contour of the target face in the target face image is a three-dimensional face contour, performing a three-dimensional splicing operation on the first face contour and the first hair style image to generate a second hair style effect image displayed at multiple angles along the side faces of the three-dimensional face contour.
Optionally, the first hair style effect image is a picture image.
Optionally, the second hair style effect image is a picture image set composed of a plurality of pictures taken at different angles.
Optionally, the second hair style effect image is a video image formed by shooting rotationally along the side faces of the three-dimensional face contour.
In order to solve the above technical problem, the present invention further provides an image-based hair style transformation device, comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a target face image, and the face image comprises face characteristics of a target face;
the first processing module is used for acquiring a first hair style label matched with the face feature from a preset first mapping list according to the face feature, wherein the first mapping list is a mapping relation table between the face feature and the hair style label;
the second processing module is used for acquiring a hair style image which meets the limiting condition in a preset hair style image database by taking the first hair style label as the limiting condition to serve as a first hair style image matched with the target face image;
and the execution module is used for splicing and synthesizing the target face image and the first hair style image so as to finish the hair style transformation operation of the target face image.
Optionally, the image-based hair style transformation apparatus further comprises:
the first obtaining submodule is used for obtaining a hair style image;
and the first processing submodule is used for classifying the hair style image according to a preset classification rule and storing the hair style image into a preset hair style image database.
Optionally, the image-based hair style transformation apparatus further comprises:
the first identification submodule is used for inputting the hair style image into a preset hair style information matching model so as to identify hair style type information matched with the hair style image;
the first configuration submodule is used for carrying out hair style label configuration on the hair style image according to the hair style type information and associating the acquired hair style label with the hair style image;
and the second processing submodule is used for classifying and storing the hair style image according to the hair style label so as to generate a preset hair style image database.
Optionally, the image-based hair style transformation apparatus further comprises:
the second identification submodule is used for identifying the skin color characteristic of the target face in the target face image;
a third processing submodule, configured to obtain, according to the skin color feature, a hair style color matched with the skin color feature from a preset second mapping list, where the second mapping list is a mapping relationship table between the skin color feature and the hair style color;
and the fourth processing submodule is used for performing color rendering on the first hair style image according to the hair style color so as to obtain a second hair style image which accords with the corresponding skin color characteristic of the target face in the target face image.
Optionally, the image-based hair style transformation apparatus further comprises:
the second acquisition submodule is used for acquiring the face contour of the target face in the target face image;
the fifth processing submodule is used for carrying out transparency mixing processing on the human face contour so as to obtain a first human face contour to be subjected to hairstyle transformation operation;
and the first execution submodule is used for carrying out splicing operation on the first face outline and the first hair style image so as to generate a spliced hair style effect image.
Optionally, the face contour of the target face in the target face image includes a two-dimensional face contour and a three-dimensional face contour, and the image-based hairstyle transformation apparatus further includes:
the first generation submodule is used for performing a two-dimensional splicing operation on the first face contour and the first hair style image when the face contour of the target face in the target face image is a two-dimensional face contour, so as to generate a first hair style effect image displayed on one side of the two-dimensional face contour;
and the second generation submodule is used for carrying out three-dimensional splicing operation on the first face contour and the first hair style image when the face contour of the target face in the target face image is a three-dimensional face contour, and generating a second hair style effect image displayed along the side surface of the three-dimensional face contour in a multi-angle mode.
Optionally, the first hair style effect image in the image-based hair style transformation device is a picture image.
Optionally, the second hair style effect image in the image-based hair style transformation device is a picture image set composed of a plurality of pictures taken at different angles.
Optionally, the second hair style effect image in the image-based hair style transformation device is a video image formed by shooting rotationally along the side faces of the three-dimensional face contour.
In order to solve the above technical problem, the present invention further provides a computer device, which includes a memory and a processor, wherein the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, cause the processor to execute the steps of the image-based hair style transformation method.
In order to solve the above technical problem, the present invention further provides a storage medium storing computer readable instructions, which when executed by one or more processors, cause the one or more processors to perform the steps of the above image-based hair style transformation method.
The invention has the beneficial effects that:
according to the method, a target face image is obtained, a first hair style label matched with face features is obtained from a preset first mapping list according to the face features of a target face in the target face image, so that a hair style image meeting the limiting conditions is obtained from a preset hair style database by taking the first hair style label as a limiting condition and is used as a first hair style image matched with the target face image, and then the obtained first hair style image and the target face image are spliced and synthesized, so that the hair style transformation operation of the target face image is completed. Therefore, the current hairstyle suitable for the user can be obtained only by uploading the picture by one key without repeatedly trying on or cutting the hairstyle, so that the time waste of the user can be avoided, and the user can be correctly guided to select the hairstyle suitable for the user.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a basic method of an image-based hair style transformation method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a method for building a hair style database in the image-based hair style transformation method according to the embodiment of the present invention;
fig. 3 is a schematic flowchart of another method for establishing a hair style image database in the image-based hair style transformation method according to the embodiment of the present invention;
fig. 4 is a schematic flowchart of another method for obtaining a first hair style image matched with a target face image in the image-based hair style transformation method according to the embodiment of the present invention;
fig. 5 is a schematic flow chart illustrating a method for performing a hair style transformation operation in the image-based hair style transformation method according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of a method for generating a spliced hair style effect image in the image-based hair style transformation method according to an embodiment of the present invention;
FIG. 7 is a block diagram of a basic structure of an image-based hair style transformation apparatus according to an embodiment of the present invention;
fig. 8 is a block diagram of a basic structure of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
In some of the flows described in the present specification and claims and in the above-described figures, a number of operations are included that occur in a particular order, but it should be clearly understood that these operations may be performed out of order or in parallel as they occur herein, and that the order of the operations is merely to distinguish between the various operations, which by themselves do not represent any order of execution. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present invention.
Examples
As will be appreciated by those skilled in the art, "terminal" as used herein includes devices that are wireless signal receivers, devices having only receiving capability without transmitting capability, and devices including both receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such devices may include: a cellular or other communication device with or without a multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other device having and/or including a radio-frequency receiver. As used herein, a "terminal" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime and/or land-based), or suited and/or configured to operate locally and/or in a distributed fashion at any other location on earth and/or in space. A "terminal device" may also be a communication terminal, an Internet terminal, or a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playing function, or a smart TV, a set-top box, and the like.
The user terminal mentioned in this embodiment is the above terminal.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a basic method of an image-based hair style transformation method according to an embodiment of the present invention.
As shown in fig. 1, the image-based hair style transformation method includes the following steps:
s100: and acquiring a target face image, wherein the face image comprises the face shape characteristics of the target face.
The image-based hair style transformation method provided by the invention operates on a face image; it can be applied to the image-processing function of a camera and also to the character-modeling function of a game. In this embodiment, when a user uses a terminal to perform such a function, the face image on which the hair style transformation operation is to be performed may be obtained by shooting with the terminal camera or by retrieval from the terminal memory, where the face feature information of the target face in the face image is clearly recognizable. The face image is then sent to the background server corresponding to the function executed by the terminal, so that after receiving the face image, the background server can obtain the face features of the target face in the face image through face recognition technology and perform the hair style transformation operation on the face image according to those features. The face features include facial contour features, such as face shape, forehead (frontal-angle) width and cheek length; they also include local face features, such as the facial organs, scars, birthmarks and spots, and the coordinate position on the face of each acquired local feature is recorded.
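As a concrete illustration of deriving coarse face-shape features from the recognized face, the sketch below computes a face length-to-width ratio from named landmark coordinates. The landmark names, coordinates and shape thresholds are hypothetical placeholders, not values from the patent.

```python
# Derive coarse face-shape features from facial landmark coordinates.
# Landmark names and classification thresholds are illustrative assumptions.
def face_shape_features(landmarks):
    """landmarks: dict of named (x, y) points from any face detector."""
    face_len = landmarks["chin"][1] - landmarks["forehead_top"][1]
    face_wid = landmarks["right_cheek"][0] - landmarks["left_cheek"][0]
    ratio = face_len / face_wid
    # Coarse face-shape buckets (thresholds are placeholders).
    if ratio > 1.5:
        shape = "long"
    elif ratio < 1.2:
        shape = "round"
    else:
        shape = "oval"
    return {"shape": shape, "length_width_ratio": round(ratio, 2)}

features = face_shape_features({
    "forehead_top": (120, 40), "chin": (120, 280),
    "left_cheek": (50, 160), "right_cheek": (190, 160),
})
```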
S200: and comparing the target face image with a preset first mapping list to obtain a first hair style label matched with the target face image, wherein the first mapping list is a mapping relation table between the face feature and the hair style label.
Taking a camera APP as an example, after a user uses the camera APP to shoot a face image on which a hair style transformation operation is to be performed, the image-processing function of the camera APP is triggered on the basis of that face image so as to perform the hair style transformation operation on it. After the background server receives the target face image sent by the terminal, it performs face recognition on the face image through face recognition technology so as to obtain the face features corresponding to the target face in the face image. A first mapping list is pre-configured in the background server; it records the matching relationships between face features and hair style labels, i.e. it is a mapping relationship table between face features and hair style labels. After the background server identifies the face features of the target face in the target face image, it compares the identified face features with the first mapping list, so as to confirm and acquire from the first mapping list the first hair style label matched with the identified face features.
Illustratively, the hair style labels may include hair-length labels, such as ear-length short hair, collarbone-length, shoulder-length and waist-length styles; bangs labels, such as straight bangs, side bangs, French bangs and air bangs; and straight-hair and curly-hair labels, such as water-ripple curls, spiral curls, big-wave curls and small-wave curls. In the first mapping relationship table, mapping relationships are established between face features and hair style labels: for example, a mapping relationship is established between straight bangs and long faces; between side bangs and round faces; between air bangs and oval faces; and so on. Mapping relationships may likewise be established between water-ripple curls and long, round or oval faces, and similarly for spiral curls, big-wave curls and small-wave curls. Through the mapping relationships listed above, after the background server identifies the face features of the target face in the target face image, the first hair style label matched with the target face image can be acquired according to the corresponding mapping relationship. For example, if the face features of the target face indicate that the face shape is a round face, a side-bangs label is acquired as the first hair style label matched with the target face image; if the face features indicate that the face shape is a long face, a straight-bangs label is acquired as the first hair style label matched with the target face image; and so on.
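The first mapping list described above can be sketched as a simple lookup table. The pairings mirror the examples in the text (straight bangs for long faces, side bangs for round faces, air bangs for oval faces); the table contents and label names are illustrative assumptions.

```python
# Illustrative first mapping list: face features -> hair style labels.
FIRST_MAPPING_LIST = {
    "long":  ["straight-bangs", "water-ripple-curl"],
    "round": ["side-bangs", "water-ripple-curl"],
    "oval":  ["air-bangs", "water-ripple-curl"],
}

def first_hair_style_labels(face_shape):
    # Look up the labels matched to the recognized face shape;
    # unknown shapes yield no labels.
    return FIRST_MAPPING_LIST.get(face_shape, [])

labels = first_hair_style_labels("round")
```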
In addition, for local face features, a mapping relationship can be established between the position of a local feature and a hair style label. For example, when a local feature such as a scar, birthmark or spot is located at the forehead of the face, a mapping relationship is established between the forehead position and the straight-bangs label; when such a local feature is located on the side of the face, a mapping relationship is established between the side-face position and the side-bangs label; and so on. If a scar is then identified on the forehead of the target face in the target face image, that scar is a local face feature of the target face, and since its position on the face is the forehead, in this embodiment the straight-bangs label is acquired as the first hair style label matched with the target face image.
S300: acquiring a hair style image meeting the limiting condition in a preset hair style image database by taking the first hair style label as the limiting condition, to serve as the first hair style image matched with the target face image.
In this embodiment, hair style images of various types are stored in the preset hair style image database, and each hair style image carries a hair style label representing its hair style characteristics and/or applicable population. Therefore, after the background server acquires the first hair style label matched with the target face image, it can take the first hair style label as a limiting condition and acquire, from the preset hair style image database, a hair style image meeting that condition as the first hair style image matched with the target face image. For example, if the first hair style labels determined from the first mapping list according to the recognized face features are a short-hair label, a straight-bangs label and a water-ripple-curl label, the preset hair style image database is traversed with these three labels as the limiting condition, so as to acquire at least one hair style image carrying all three labels as the first hair style image matched with the target face image.
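Querying the preset hair style image database with the first hair style labels as the limiting condition can be sketched as a label-subset filter; the in-memory record list and the label names are stand-ins for the real database.

```python
# Stand-in for the preset hair style image database: each record carries
# a set of hair style labels.
HAIR_STYLE_DB = [
    {"id": 1, "labels": {"short-hair", "straight-bangs", "water-ripple-curl"}},
    {"id": 2, "labels": {"waist-length", "side-bangs"}},
    {"id": 3, "labels": {"short-hair", "straight-bangs", "water-ripple-curl", "dyed"}},
]

def query_by_labels(db, required):
    # Keep only images that carry every required label (the limiting condition).
    required = set(required)
    return [img for img in db if required <= img["labels"]]

matches = query_by_labels(
    HAIR_STYLE_DB, ["short-hair", "straight-bangs", "water-ripple-curl"])
```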
In some embodiments, when the first hair style image matched with the target face image is acquired from the preset hair style image database, a further selection may be performed by calculating the degree of matching between the target face image and the hair style images stored in the database. For example, the ratio between the face length and face width of the target face, the forehead width and similar values are calculated; each calculated value is compared with the applicable value stored with the hair style image to obtain a degree of matching; and the hair style image with the highest degree of matching with the target face image is then acquired as the first hair style image.
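The matching-degree refinement can be sketched as follows. The scoring formula (relative gap between the face's length/width ratio and each image's stored applicable ratio) is an assumed example; the patent does not specify one.

```python
# Pick the candidate whose stored applicable ratio is closest to the
# target face's length/width ratio. The score formula is an assumption.
def best_match(face_ratio, candidates):
    def score(img):
        # Higher score = smaller relative gap between the two ratios.
        return 1.0 - abs(face_ratio - img["applicable_ratio"]) / face_ratio
    return max(candidates, key=score)

chosen = best_match(1.6, [
    {"id": 1, "applicable_ratio": 1.3},
    {"id": 3, "applicable_ratio": 1.55},
])
```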
S400: and splicing and synthesizing the target face image and the first hair style image to finish the hair style transformation operation of the face image.
In this embodiment, after the first hair style image matched with the target face image is acquired, the target face image and the first hair style image are spliced and synthesized to complete the hair style transformation operation on the face image. Specifically, before the stitching, the method further includes obtaining the head contour of the target face image and preprocessing the edge of the head contour to blur the contour line, so that the head contour and the first hair style image can be stitched and synthesized seamlessly.
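The edge-blurring and seamless stitching step can be illustrated with a toy alpha-compositing sketch. One-dimensional pixel rows stand in for real images, and the box-blur feathering is an assumed, minimal form of the edge preprocessing described above.

```python
# Feather a 0/1 head-contour mask, then alpha-composite hair over face.
def feather(mask, radius=1):
    # Simple box blur of the binary mask to soften the contour edge.
    out = []
    for i in range(len(mask)):
        lo, hi = max(0, i - radius), min(len(mask), i + radius + 1)
        out.append(sum(mask[lo:hi]) / (hi - lo))
    return out

def composite(face_px, hair_px, alpha):
    # Per-pixel alpha blend: hair over face.
    return [round(a * h + (1 - a) * f)
            for f, h, a in zip(face_px, hair_px, alpha)]

alpha = feather([0, 0, 1, 1, 1])          # softened edge mask
result = composite([200] * 5, [80] * 5, alpha)
```

With a hard mask the seam would jump from 200 to 80 in one pixel; the feathered mask produces the gradual transition that makes the stitch appear seamless.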
In the image-based hair style transformation method of this embodiment, a target face image is acquired; a first hair style label matched with the face features is obtained from the preset first mapping list according to the face features of the target face in the target face image; with the first hair style label as a limiting condition, a hair style image meeting the condition is obtained from the preset hair style image database as the first hair style image matched with the target face image; and the obtained first hair style image and the target face image are stitched and synthesized to complete the hair style transformation operation on the target face image. In this way, a user can see which hair styles suit them simply by uploading a picture, without repeatedly trying on or cutting hair styles, which avoids wasting the user's time and correctly guides the user in choosing a suitable hair style.
In some embodiments, please refer to fig. 2, fig. 2 is a schematic flowchart illustrating a method for creating a hair style database in the image-based hair style transformation method according to an embodiment of the present invention.
As shown in fig. 2, before the step S300, the method further includes a step S500 and a step S600. Wherein, S500: acquiring a hair style image; and S600: classifying the hair style image according to a preset classification rule and storing the hair style image into a preset hair style image database.
The hair style image database stores hair style images of various types. For example, by length a hair style may be waist-length hair and the like; by texture it may be straight hair or curly hair, where curly hair can be further divided into water-wave curls, spiral curls, big-wave curls, small-wave curls, and the like; by bangs it may be divided into full bangs, middle-parted bangs, side-swept long bangs, side-swept short bangs, air bangs, French bangs, and so on. The hair style images stored in the hair style image database are obtained by means of web crawlers or big-data acquisition. In this embodiment, images in which a person's hair style is clearly recognizable are obtained through a web crawler or big-data acquisition; the obtained images are then subjected to matting processing to extract the person's hair style and generate hair style images; and the generated hair style images are classified according to a preset classification rule and stored in the preset hair style image database. The classification rule specifically includes: identifying the hair style type information of a hair style image, setting a hair style label for the hair style image according to the identified hair style type information, and storing the hair style image in the hair style image database under the classification given by its hair style label.
In some embodiments, please refer to fig. 3, fig. 3 is a flowchart illustrating another method for establishing a hair style image database in the image-based hair style transformation method according to the embodiment of the present invention.
As shown in fig. 3, the step S600 may further include steps S610 to S630. Wherein, S610: inputting the hair style image into a preset hair style type information matching model to identify hair style type information matched with the hair style image; s620, performing hair style label configuration on the hair style image according to the hair style type information, and associating the acquired hair style label with the hair style image; and S630, classifying and storing the hair style image according to the hair style label to generate a preset hair style image database.
In this embodiment, before the hair style type information matched with a hair style image is identified, an information matching model that identifies hair style type information from a hair style image needs to be trained in advance. The information matching model is a convolutional neural network model trained to a convergence state, trained to identify the hair style type information matched with a hair style image according to that image. The convolutional neural network model may be a CNN or a VGG convolutional neural network model. In this embodiment, the information matching model may be trained to a convergence state on a large amount of sample data (e.g., different hair style images), so that it acquires the function of identifying the hair style type information matched with a hair style image. Therefore, after the acquired image is subjected to matting processing and the person's hair style is extracted to generate a hair style image, the hair style image is input into the preset hair style type information matching model, so that the model identifies the hair style type information matched with the hair style image. The hair style type information includes length type information, bangs type information, straight/curly type information, and the like. After the hair style type information matched with the hair style image is identified, hair style label configuration may be performed on the hair style image according to the hair style type information, and the obtained hair style labels may be associated with the hair style image.
For example, suppose the hair style type information identified from a hair style image is: length type information of waist-length hair, bangs type information of side-swept long bangs, and straight/curly type information of big-wave curls. The hair style labels configured for this hair style image accordingly include a waist-length hair label, a side-swept long bangs label, a big-wave curl label, and so on. The configured hair style labels are then associated with the hair style image, and the hair style image is classified and stored according to its hair style labels to generate the preset hair style image database; subsequently, the corresponding hair style image can be obtained from the preset hair style image database according to its hair style labels.
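The label configuration and classified storage steps above can be sketched as follows. The type-information keys and label strings are assumptions made for illustration; in the patent's scheme the type information would come from the trained matching model.

```python
# Illustrative sketch: configure labels from hair style type information
# and store the image so it is reachable under each of its labels.
from collections import defaultdict

def configure_labels(hair_style_type_info):
    # One label per recognized dimension: length, bangs, straight/curly.
    return [hair_style_type_info[key]
            for key in ("length", "bangs", "curl")
            if key in hair_style_type_info]

def store_classified(database, image_path, labels):
    # Classified storage: index the image under every one of its labels.
    for label in labels:
        database[label].append(image_path)

database = defaultdict(list)
type_info = {"length": "waist-length hair",
             "bangs": "side-swept long bangs",
             "curl": "big-wave curl"}
store_classified(database, "style_001.png", configure_labels(type_info))
```

With this indexing, a later query by any configured label (e.g. "big-wave curl") returns the associated hair style image directly.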
In some embodiments, please refer to fig. 4, and fig. 4 is a flowchart illustrating another method for obtaining a first hair style image matched with a target face image in an image-based hair style transformation method according to an embodiment of the present invention.
As shown in fig. 4, after the step S300, steps S700 to S900 may be further included. Wherein, S700: identifying the skin color characteristic of a target face in the target face image; s800: acquiring a hair style color matched with the skin color feature from a preset second mapping list according to the skin color feature, wherein the second mapping list is a mapping relation table between the skin color feature and the hair style color; s900: and performing color rendering on the first hair style image according to the hair style color to obtain a second hair style image which accords with the corresponding skin color characteristic of the target face in the target face image.
In this embodiment, the background server is further configured with a second mapping list in advance, where the second mapping list describes the matching relationship between skin color features and hair style colors, i.e., it is a mapping relation table between skin color features and hair style colors. After the background server obtains the first hair style image according to the face features, it can further identify the skin color features of the target face in the target face image, obtain from the second mapping list the hair style color matched with those skin color features, and perform color rendering on the obtained first hair style image according to the obtained hair style color, so as to obtain a second hair style image that conforms to the skin color features of the target face in the target face image. The second mapping list establishes mapping relationships between skin color features and hair style colors; for example, a reddish skin color feature corresponds to medium tones such as silver gray or light brown; yellowish and grayish-yellow skin color features correspond to deep, more vivid tones such as purplish red and dark brown; an olive skin color feature corresponds to dark tones such as black, purple, and reddish brown; while milky or creamy skin color features match any tone.
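A minimal sketch of such a second mapping list is given below. The feature names and color tones merely mirror the examples in the text; an actual list would be configured on the background server.

```python
# Hypothetical second mapping list: skin color feature -> hair style colors.
SECOND_MAPPING_LIST = {
    "reddish":   ["silver gray", "light brown"],        # medium tones
    "yellowish": ["purplish red", "dark brown"],        # vivid deep tones
    "olive":     ["black", "purple", "reddish brown"],  # dark tones
}
ANY_TONE = ["silver gray", "light brown", "purplish red",
            "dark brown", "black", "purple", "reddish brown"]

def hair_colors_for(skin_color_feature):
    # Milky or creamy skin tones (any key not in the list) match any shade.
    return SECOND_MAPPING_LIST.get(skin_color_feature, ANY_TONE)
```

The returned tone list is then the candidate set for rendering the first hair style image into the second hair style image.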
In some embodiments, when performing color rendering on the acquired first hair style image according to the acquired hair style color to acquire a second hair style image that conforms to a skin color feature corresponding to a target face in the target face image, further selection may be performed by calculating a matching degree between a skin color and a hair style color corresponding to the target face image. For example, after obtaining a hair style color tone matched with the skin color corresponding to the target face image according to the mapping relationship between the skin color feature and the hair style color, calculating the matching degree between each hair style color in the hair style color tone and the skin color corresponding to the target face image, further obtaining a hair style color with a higher matching degree with the skin color corresponding to the target face image, and performing color rendering on the obtained first hair style image to generate a second hair style image matched with the target face image.
In some embodiments, please refer to fig. 5, fig. 5 is a flowchart illustrating a method for performing a hair style transformation operation in an image-based hair style transformation method according to an embodiment of the present invention.
As shown in fig. 5, the step S400 may include steps S410 to S430. Wherein, S410: acquiring a face contour of a target face in the target face image; s420: performing transparency mixing processing on the target face contour to obtain a first face contour to be subjected to hairstyle transformation operation; s430: and carrying out splicing operation on the first face contour and the first hair style image so as to generate a spliced hair style effect image.
In this embodiment, after the first hair style image matched with the target face image is obtained, the target face image and the first hair style image need to be stitched and synthesized to generate the hair style effect image after the hair style transformation. The target face image is further recognized to obtain the face contour of the target face, and transparency blending processing is then performed on the face contour: specifically, by adjusting image parameters of the face contour, its edge lines are blurred to obtain a first face contour to be subjected to the hair style transformation operation. A stitching operation is then performed on the first face contour and the first hair style image; during the stitching and synthesis, the fit between the contour edge of the first face contour and the first hair style image can also be adjusted, so as to generate the stitched hair style effect image.
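The transparency blending step can be illustrated with a one-dimensional toy example: the binary contour mask is feathered with a box blur so the edge line fades, and face and hair pixels are then alpha-blended. Real images are two-dimensional and the radius and sample values here are assumptions for illustration only.

```python
# 1-D toy sketch of edge feathering plus alpha blending.
def feather(mask, radius=1):
    # Box-blur a binary mask into soft alpha values in [0, 1].
    out = []
    for i in range(len(mask)):
        lo, hi = max(0, i - radius), min(len(mask), i + radius + 1)
        out.append(sum(mask[lo:hi]) / (hi - lo))
    return out

def alpha_blend(face, hair, alpha):
    # Per pixel: face weighted by alpha, hair weighted by (1 - alpha).
    return [a * f + (1 - a) * h for f, h, a in zip(face, hair, alpha)]

mask = [0, 0, 1, 1, 1]        # 1 = inside the face contour
alpha = feather(mask)         # soft edge: [0.0, 1/3, 2/3, 1.0, 1.0]
blended = alpha_blend([200] * 5, [50] * 5, alpha)
```

Pixels deep inside the contour stay pure face (200), pixels outside stay pure hair (50), and the blurred boundary transitions gradually, which is what makes the stitch appear seamless.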
In some embodiments, please refer to fig. 6, and fig. 6 is a flowchart illustrating a method for generating a stitched hair style effect image in the image-based hair style transformation method according to an embodiment of the present invention.
As shown in fig. 6, the step S430 may further include a step S431 and a step S432. Wherein, S431: when the face contour of the target face in the target face image is a two-dimensional face contour, performing a two-dimensional stitching operation on the first face contour and the first hair style image to generate a first hair style effect image corresponding to the display side of the two-dimensional face contour; S432: when the face contour of the target face in the target face image is a three-dimensional face contour, performing a three-dimensional stitching operation on the first face contour and the first hair style image to generate a second hair style effect image displayed at multiple angles along the side of the three-dimensional face contour.
The face contour of the target face in the target face image can be a two-dimensional face contour or a three-dimensional face contour. Therefore, in this embodiment, before the stitching operation is performed on the first face contour and the first hair style image, the method further includes recognizing whether the first face contour is a two-dimensional or a three-dimensional face contour. When the first face contour is recognized as a two-dimensional face contour, a two-dimensional stitching operation is performed on the first face contour and the first hair style image to generate a first hair style effect image corresponding to the display side of the two-dimensional face contour. For example, if the display side of the two-dimensional face contour is the front, the first hair style effect image is a picture obtained by stitching and synthesizing the front of the two-dimensional face contour with the front of the first hair style; if the display side is rotated 60 degrees to the left of the front, the first hair style effect image is a picture obtained by stitching and synthesizing that side of the two-dimensional face contour with the corresponding side of the first hair style. When the first face contour is recognized as a three-dimensional face contour, a three-dimensional stitching operation is performed on the first face contour and the first hair style image to generate a second hair style effect image displayed at multiple angles along the side of the three-dimensional face contour. The second hair style effect image may be a picture image set consisting of a plurality of pictures taken at different angles.
For example, after the three-dimensional stitching operation is performed on the first face contour and the first hair style image, a plurality of pictures of the synthesized image taken at different angles, such as a front picture, a back picture, a directly-left picture, a directly-right picture, a 45-degrees-left-of-front picture, a 45-degrees-right-of-front picture, and the like, are acquired to form a picture image set as the second hair style effect image of the hair style transformation. The second hair style effect image can also be a video image formed by shooting while rotating along the side of the three-dimensional face contour. For example, starting from the front of the three-dimensional face contour, a 360-degree rotation along its side is recorded to form a video image as the second hair style effect image of the hair style transformation.
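The dispatch between the two contour cases can be sketched as below. This is a hypothetical outline, not the patent's implementation; the angle list simply follows the multi-angle examples in the text.

```python
# Hypothetical sketch of the 2-D / 3-D dispatch when generating
# the stitched hair style effect image.
def stitch_effect(contour_dims, display_side_deg=0):
    if contour_dims == 2:
        # First hair style effect image: one picture at the display side.
        return {"kind": "picture", "angles": [display_side_deg]}
    # Second hair style effect image: multi-angle views of the 3-D contour
    # (front, 45-degrees-left/right-of-front, directly left/right, back).
    return {"kind": "picture set", "angles": [0, 45, 315, 90, 270, 180]}

flat = stitch_effect(2, display_side_deg=60)   # e.g. 60 degrees left of front
solid = stitch_effect(3)
```

A video variant of the second effect image would replace the discrete angle list with a continuous 0-360 degree sweep along the contour's side.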
In order to solve the above technical problem, an embodiment of the present invention further provides an image-based hair style transformation apparatus. Referring to fig. 7 in detail, fig. 7 is a block diagram illustrating a basic structure of an image-based hair style transformation apparatus according to an embodiment of the present invention.
As shown in fig. 7, an image-based hairstyle changing apparatus includes: the device comprises an acquisition module, a first processing module, a second processing module and an execution module. The acquisition module is used for acquiring a target face image, wherein the face image comprises face features of a target face; the first processing module is used for acquiring a first hair style label matched with the face feature from a preset first mapping list according to the face feature, wherein the first mapping list is a mapping relation table between the face feature and the hair style label; the second processing module is used for acquiring a hair style image meeting the limiting condition in a preset hair style image database by taking the first hair style label as the limiting condition to serve as a first hair style image matched with the target face image; the execution module is used for splicing and synthesizing the target face image and the first hair style image so as to finish the hair style transformation operation of the target face image.
The image-based hair style transformation device in the above embodiment obtains a first hair style label matched with the face features from a preset first mapping list according to the face features of the target face in the target face image; with the first hair style label as a limiting condition, it obtains a hair style image meeting the condition from the preset hair style image database as the first hair style image matched with the target face image, and then stitches and synthesizes the obtained first hair style image and the target face image to complete the hair style transformation operation on the target face image. In this way, a user can see which hair styles suit them simply by uploading a picture, without repeatedly trying on or cutting hair styles, which avoids wasting the user's time and correctly guides the user in choosing a suitable hair style.
In some embodiments, the image-based hair style transformation apparatus further comprises a first obtaining sub-module and a first processing sub-module. The first obtaining submodule is used for obtaining a hair style image; the first processing submodule is used for classifying the hair style image according to a preset classification rule and storing the hair style image into a preset hair style image database.
In some embodiments, the image-based hair style transformation apparatus further comprises a first identification sub-module, a first configuration sub-module, and a second processing sub-module. The first identification submodule is used for inputting the hair style image into a preset hair style information matching model so as to identify hair style type information matched with the hair style image; the first configuration submodule is used for carrying out hair style label configuration on the hair style image according to the hair style type information and associating the acquired hair style label with the hair style image; and the second processing submodule is used for classifying and storing the hair style image according to the hair style label so as to generate a preset hair style image database.
In some embodiments, the image-based hair style transformation apparatus further includes a second identification sub-module, a third processing sub-module, and a fourth processing sub-module. The second identification submodule is used for identifying the skin color characteristics of a target face in the target face image; the third processing submodule is used for acquiring a hair style color matched with the skin color feature from a preset second mapping list according to the skin color feature, wherein the second mapping list is a mapping relation table between the skin color feature and the hair style color; and the fourth processing submodule is used for performing color rendering on the first hair style image according to the hair style color so as to obtain a second hair style image which accords with the corresponding skin color characteristic of the target face in the target face image.
In some embodiments, the image-based hair style transformation apparatus further comprises a second obtaining sub-module, a fifth processing sub-module, and a first executing sub-module. The second obtaining submodule is used for obtaining the face contour of the target face in the target face image; the fifth processing submodule is used for carrying out transparency mixing processing on the face contour so as to obtain a first face contour to be subjected to hairstyle transformation operation; the first execution submodule is used for carrying out splicing operation on the first face outline and the first hair style image so as to generate a spliced hair style effect image.
In some embodiments, the face contour of the target face in the target face image includes a two-dimensional face contour and a three-dimensional face contour, and the image-based hair style transformation apparatus further includes: a first generation submodule and a second generation submodule. The first generation submodule is used for performing a two-dimensional stitching operation on the first face contour and the first hair style image when the face contour of the target face in the target face image is a two-dimensional face contour, generating a first hair style effect image corresponding to the display side of the two-dimensional face contour; and the second generation submodule is used for performing a three-dimensional stitching operation on the first face contour and the first hair style image when the face contour of the target face in the target face image is a three-dimensional face contour, generating a second hair style effect image displayed at multiple angles along the side of the three-dimensional face contour.
In some embodiments, the first hair style effect image in the image-based hair style transformation apparatus is a picture image.
In some embodiments, the second hairstyle effect image in the image-based hairstyle transformation apparatus is a picture image set composed of a plurality of pictures taken at different angles.
In some embodiments, the second hair style effect image in the image-based hair style transformation device is a video image formed by shooting along the side rotation of the three-dimensional human face contour.
In order to solve the technical problem, an embodiment of the present invention further provides a computer device. Referring to fig. 8, fig. 8 is a block diagram of a basic structure of a computer device according to an embodiment of the present invention.
Fig. 8 schematically illustrates the internal structure of the computer device. As shown in fig. 8, the computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected through a system bus. The non-volatile storage medium of the computer device stores an operating system, a database, and computer-readable instructions; the database can store control information sequences, and the computer-readable instructions, when executed by the processor, can cause the processor to implement an image-based hair style transformation method. The processor of the computer device provides computing and control capability and supports the operation of the whole computer device. The memory of the computer device may store computer-readable instructions that, when executed by the processor, cause the processor to perform an image-based hair style transformation method. The network interface of the computer device is used for connecting and communicating with a terminal. Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, may combine certain components, or may have a different arrangement of components.
In this embodiment, the processor is configured to execute specific functions of the obtaining module 10, the first processing module 20, the second processing module 30 and the executing module 40 in fig. 7, and the memory stores program codes and various types of data required for executing the above modules. The network interface is used for data transmission between user terminals or servers. The memory in this embodiment stores program codes and data necessary for executing all the sub-modules in the image-based hair style transformation device, and the server can call the program codes and data of the server to execute the functions of all the sub-modules.
The computer device in the above embodiment acquires a target face image and obtains a first hair style label matched with the face features from a preset first mapping list according to the face features of the target face in the target face image; with the first hair style label as a limiting condition, it obtains a hair style image meeting the condition from the preset hair style image database as the first hair style image matched with the target face image, and then stitches and synthesizes the obtained first hair style image and the target face image to complete the hair style transformation operation on the target face image. In this way, a user can see which hair styles suit them simply by uploading a picture, without repeatedly trying on or cutting hair styles, which avoids wasting the user's time and correctly guides the user in choosing a suitable hair style.
The present invention also provides a storage medium storing computer readable instructions, which when executed by one or more processors, cause the one or more processors to perform the steps of the image-based hair style transformation method according to any of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turns or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (10)

1. An image-based hair style transformation method, comprising the steps of:
acquiring a target face image, wherein the face image comprises face features of a target face;
acquiring a first hair style label matched with the face feature from a preset first mapping list according to the face feature, wherein the first mapping list is a mapping relation table between the face feature and the hair style label;
acquiring a hair style image meeting the limiting condition in a preset hair style image database by taking the first hair style label as the limiting condition to serve as a first hair style image matched with the target face image;
and splicing and synthesizing the target face image and the first hair style image to finish the hair style transformation operation of the target face image.
2. The image-based hair style transformation method according to claim 1, wherein before the step of obtaining the hair style image meeting the defined condition in a preset hair style image database with the first hair style label as the defined condition as the first hair style image matching the face image, the method further comprises:
acquiring a hair style image;
and classifying the hair style image according to a preset classification rule and storing the hair style image into a preset hair style image database.
3. The image-based hair style transformation method according to claim 2, wherein the step of classifying the hair style image according to a preset classification rule and storing the hair style image in a preset hair style image database comprises:
inputting the hair style image into a preset hair style information matching model to identify hair style type information matched with the hair style image;
performing hair style label configuration on the hair style image according to the hair style type information, and associating the acquired hair style label with the hair style image;
and classifying and storing the hair style image according to the hair style label to generate a preset hair style image database.
4. The image-based hair style transformation method according to claim 1, further comprising, after the step of obtaining a hair style image meeting the defined condition in a preset hair style image database with the first hair style label as a defined condition as a first hair style image matching the target face image:
identifying the skin color characteristic of a target face in the target face image;
acquiring a hair style color matched with the skin color feature from a preset second mapping list according to the skin color feature, wherein the second mapping list is a mapping relation table between the skin color feature and the hair style color;
and performing color rendering on the first hair style image according to the hair style color to obtain a second hair style image which accords with the corresponding skin color characteristic of the target face in the target face image.
5. The image-based hair style transformation method according to claim 1, wherein the step of performing stitching synthesis on the face image and the first hair style image to complete the hair style transformation operation on the face image comprises:
acquiring a face contour of a target face in the target face image;
performing transparency mixing processing on the face contour to obtain a first face contour to be subjected to hairstyle transformation operation;
and carrying out splicing operation on the first face contour and the first hair style image so as to generate a spliced hair style effect image.
6. The image-based hair style transformation method according to claim 5, wherein the face contour of the target face in the target face image comprises a two-dimensional face contour and a three-dimensional face contour, and the step of performing a stitching operation on the first face contour and the first hair style image to generate a hair style effect image after the stitching operation comprises:
when the face contour of the target face in the target face image is a two-dimensional face contour, performing a two-dimensional stitching operation on the first face contour and the first hair style image to generate a first hair style effect image corresponding to a display side of the two-dimensional face contour;
and when the face contour of the target face in the target face image is a three-dimensional face contour, performing three-dimensional splicing operation on the first face contour and the first hair style image to generate a second hair style effect image displayed along the side surface of the three-dimensional face contour in multiple angles.
7. The image-based hair style transformation method according to claim 6, wherein the first hair style effect image is a still picture image.
8. An image-based hair style transformation apparatus, comprising:
the acquisition module is used for acquiring a target face image, wherein the target face image comprises face features of a target face;
the first processing module is used for acquiring a first hair style label matched with the face feature from a preset first mapping list according to the face feature, wherein the first mapping list is a mapping relation table between the face feature and the hair style label;
the second processing module is used for acquiring a hair style image which meets the limiting condition in a preset hair style image database by taking the first hair style label as the limiting condition to serve as a first hair style image matched with the target face image;
and the execution module is used for performing stitching synthesis on the target face image and the first hair style image so as to complete the hair style transformation operation on the target face image.
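The four modules of apparatus claim 8 form a pipeline: acquisition, label matching against the first mapping list, database query constrained by the label, then stitching. The sketch below mirrors that structure; the mapping-list entries, database contents, and feature labels are illustrative assumptions, not the patented implementation.

```python
class AcquisitionModule:
    """Acquires the target face image (here, a dict carrying a
    pre-extracted face feature; real code would run face detection)."""
    def acquire(self, source):
        return {"image": source, "face_feature": source["face_feature"]}

class FirstProcessingModule:
    """Matches a first hair style label from the first mapping list."""
    FIRST_MAPPING_LIST = {"round": "curtain-bangs", "oval": "slicked-back"}
    def match_label(self, face_feature):
        return self.FIRST_MAPPING_LIST[face_feature]

class SecondProcessingModule:
    """Queries the hair style image database with the label as the
    limiting condition."""
    HAIR_IMAGE_DB = {"curtain-bangs": "img_001", "slicked-back": "img_002"}
    def query(self, label):
        return self.HAIR_IMAGE_DB[label]

class ExecutionModule:
    """Stitches the target face image with the first hair style image."""
    def stitch(self, face, hair_image):
        return (face["image"], hair_image)  # stand-in for real compositing

# Pipeline: acquisition -> label matching -> database query -> stitching.
acq, p1, p2, ex = (AcquisitionModule(), FirstProcessingModule(),
                   SecondProcessingModule(), ExecutionModule())
face = acq.acquire({"face_feature": "oval"})
label = p1.match_label(face["face_feature"])
hair = p2.query(label)
result = ex.stitch(face, hair)
```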
9. An electronic device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the image-based hair style transformation method of any of claims 1 to 7.
10. A storage medium having stored thereon computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the method of image-based hair style transformation of any of claims 1 to 7.
CN201910101393.8A 2019-01-31 2019-01-31 Image-based hair style transformation method and device, computer equipment and storage medium Pending CN111507791A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910101393.8A CN111507791A (en) 2019-01-31 2019-01-31 Image-based hair style transformation method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111507791A true CN111507791A (en) 2020-08-07

Family

ID=71877382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910101393.8A Pending CN111507791A (en) 2019-01-31 2019-01-31 Image-based hair style transformation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111507791A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1694110A (en) * 2004-05-07 2005-11-09 日本先锋公司 Hairstyle suggesting system, hairstyle suggesting method, and computer program product
US20050251463A1 (en) * 2004-05-07 2005-11-10 Pioneer Corporation Hairstyle suggesting system, hairstyle suggesting method, and computer program product
JP2011101823A (en) * 2011-02-14 2011-05-26 Kao Corp Hairstyle advice method
CN103065360A (en) * 2013-01-16 2013-04-24 重庆绿色智能技术研究院 Generation method and generation system of hair style effect pictures
CN105117445A (en) * 2015-08-13 2015-12-02 北京建新宏业科技有限公司 Automatic hairstyle matching method, device and system
CN107194981A (en) * 2017-04-18 2017-09-22 武汉市爱米诺网络科技有限公司 Hair style virtual display system and its method
CN107545051A (en) * 2017-08-23 2018-01-05 武汉理工大学 Hair style design system and method based on image procossing
CN108090422A * 2017-11-30 2018-05-29 深圳云天励飞技术有限公司 Hair style recommendation method, intelligent mirror and storage medium
CN108305146A * 2018-01-30 2018-07-20 杨太立 Hair style recommendation method and system based on image recognition
CN108664569A * 2018-04-24 2018-10-16 杭州数为科技有限公司 Hair style recommendation method, system, terminal and medium
CN108985890A * 2018-06-29 2018-12-11 云智衣橱(深圳)科技有限责任公司 Hair style matching method and system
CN108960167A * 2018-07-11 2018-12-07 腾讯科技(深圳)有限公司 Hair style recognition method, device, computer-readable storage medium and computer device
CN109190574A * 2018-09-13 2019-01-11 郑州云海信息技术有限公司 Hair style recommendation method, device, terminal and storage medium based on big data

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819921A (en) * 2020-11-30 2021-05-18 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for changing the hairstyle of a person
CN112819921B (en) * 2020-11-30 2023-09-26 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for changing the hairstyle of a person

Similar Documents

Publication Publication Date Title
US11410457B2 (en) Face reenactment
CN110738595B (en) Picture processing method, device and equipment and computer storage medium
CN107771336B (en) Feature detection and masking in images based on color distribution
JP4449723B2 (en) Image processing apparatus, image processing method, and program
CN110503703A (en) Method and apparatus for generating image
CN108388878A Method and apparatus for recognizing a face
US11501564B2 (en) Mediating apparatus and method, and computer-readable recording medium thereof
US20070052726A1 (en) Method and system for likeness reconstruction
CN110956691A (en) Three-dimensional face reconstruction method, device, equipment and storage medium
CN110598097B (en) Hair style recommendation system, method, equipment and storage medium based on CNN
CN109949207B (en) Virtual object synthesis method and device, computer equipment and storage medium
CN110650306A (en) Method and device for adding expression in video chat, computer equipment and storage medium
US20230154111A1 (en) Method and apparatus for three-dimensional reconstruction of a human head for rendering a human image
CN110516598A (en) Method and apparatus for generating image
CN114913303A (en) Virtual image generation method and related device, electronic equipment and storage medium
CN109145783A (en) Method and apparatus for generating information
CN115239857B (en) Image generation method and electronic device
CN111507791A (en) Image-based hair style transformation method and device, computer equipment and storage medium
CN111476066A (en) Image effect processing method and device, computer equipment and storage medium
CN110489634A Body build information recommendation method, device, system and terminal device
CN117350921A (en) Image generation method and face image generation method
CN115392216B (en) Virtual image generation method and device, electronic equipment and storage medium
CN117078816A (en) Virtual image generation method, device, terminal equipment and storage medium
KR20030068509A (en) Generating Method of Character Through Recognition of An Autual Picture And Service Method using same
Liu et al. Smooth image-to-image translations with latent space interpolations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination