CN113538455B - Three-dimensional hairstyle matching method and electronic equipment

Three-dimensional hairstyle matching method and electronic equipment

Info

Publication number
CN113538455B
CN113538455B (application CN202110658998.4A)
Authority
CN
China
Prior art keywords
hairstyle
dimensional
target
hair
dimensional hairstyle
Prior art date
Legal status
Active
Application number
CN202110658998.4A
Other languages
Chinese (zh)
Other versions
CN113538455A (en)
Inventor
朱家林
Current Assignee
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Juhaokan Technology Co Ltd
Priority to CN202110658998.4A
Publication of CN113538455A
Application granted
Publication of CN113538455B
Legal status: Active (current)
Anticipated expiration


Classifications

    (All classifications fall under G: Physics; G06: Computing, calculating or counting.)
    • G06T 7/11: Image analysis; segmentation / edge detection; region-based segmentation
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06F 18/24323: Pattern recognition; classification techniques; tree-organised classifiers
    • G06T 15/005: 3D [Three Dimensional] image rendering; general purpose rendering architectures
    • G06T 7/194: Image analysis; segmentation / edge detection involving foreground-background segmentation
    • G06T 2207/10004: Indexing scheme for image analysis; image acquisition modality; still image / photographic image
    • G06T 2207/10012: Indexing scheme for image analysis; image acquisition modality; stereo images
    • G06T 2207/20081: Indexing scheme for image analysis; special algorithmic details; training / learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a three-dimensional hairstyle matching method and electronic equipment. The method comprises the following steps: performing hair detection on a target image containing a user to obtain an intermediate image, wherein the intermediate image is an image containing the area where the hair is located; performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair; and inputting the foreground image into a pre-trained hairstyle matching network for recognition to obtain a three-dimensional hairstyle matched with the hair contained in the foreground image. In this way, the user's hairstyle is obtained by detecting and segmenting the user's hair, and a matching three-dimensional hairstyle is determined from it, which improves both the similarity between the three-dimensional hairstyle and the user's actual hairstyle and the display effect of the three-dimensional hairstyle.

Description

Three-dimensional hairstyle matching method and electronic equipment
Technical Field
The invention relates to the field of three-dimensional digital technology, and in particular to a three-dimensional hairstyle matching method and electronic equipment.
Background
With the rapid growth of the internet, VR (Virtual Reality) and/or AR (Augmented Reality) applications are becoming increasingly popular. Whether in virtual social networking or in three-dimensional digital industries such as virtual concerts, virtual hosts, virtual live-streaming sales, and virtual tour guides, these applications are entering the public's view; a realistic, personalized virtual character is therefore a basic requirement for AR and/or VR applications.
In the prior art, in VR and/or AR scenes, the three-dimensional hairstyle of the three-dimensional virtual character corresponding to a user is set using a fixed three-dimensional hairstyle template. As a result, the similarity between the three-dimensional hairstyle and the user's actual hairstyle is low, and the display effect of the three-dimensional hairstyle corresponding to the user is poor.
Disclosure of Invention
The embodiment of the disclosure provides a three-dimensional hairstyle matching method and electronic equipment, which are used for improving the display effect of a three-dimensional hairstyle corresponding to a user and improving the similarity between the three-dimensional hairstyle and the hairstyle of the user.
A first aspect of the present disclosure provides a three-dimensional hairstyle matching method, the method comprising:
performing hair detection on a target image containing a user to obtain an intermediate image, wherein the intermediate image is an image containing an area where hair is located;
performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair;
and inputting the foreground image into a pre-trained hairstyle matching network for recognition to obtain a three-dimensional hairstyle matched with hair contained in the foreground image.
According to this embodiment, hair detection is performed on the target image containing the user to obtain the intermediate image, the intermediate image is segmented to obtain the foreground image containing the hair, and finally the foreground image is recognized by the trained hairstyle matching network to obtain a three-dimensional hairstyle matched with the hair contained in the foreground image, which improves the similarity between the matched three-dimensional hairstyle and the user's actual hairstyle.
A second aspect of the present disclosure provides an electronic device comprising a processor and a display unit;
wherein the processor is configured to:
performing hair detection on a target image containing a user to obtain an intermediate image, wherein the intermediate image is an image containing an area where hair is located;
performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair;
inputting the foreground image into a pre-trained hairstyle matching network for identification to obtain a three-dimensional hairstyle matched with hair contained in the foreground image;
The display unit is configured to display the three-dimensional hairstyle.
According to a third aspect provided by embodiments of the present disclosure, there is provided a computer storage medium storing a computer program for performing the method according to the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present disclosure; other drawings may be obtained from these drawings by a person of ordinary skill in the art without inventive effort.
FIG. 1 is a schematic diagram of an electronic device in accordance with one embodiment of the present disclosure;
FIG. 2 is the first flow chart of the three-dimensional hairstyle matching method according to one embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the flow of determining the intermediate image in the three-dimensional hairstyle matching method according to one embodiment of the present disclosure;
FIG. 4 is a three-dimensional hairstyle schematic of a three-dimensional hairstyle matching method according to one embodiment of the present disclosure;
FIG. 5 is a second flow chart of a three-dimensional hairstyle matching method according to one embodiment of the present disclosure;
Fig. 6 is a three-dimensional hairstyle matching device according to one embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions, and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure are clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this disclosure without inventive effort are intended to be within the scope of this disclosure.
The term "and/or" in the embodiments of the present disclosure describes an association relationship of association objects, which indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship.
The application scenario described in the embodiments of the present disclosure is for more clearly describing the technical solution of the embodiments of the present disclosure, and does not constitute a limitation on the technical solution provided by the embodiments of the present disclosure, and as a person of ordinary skill in the art can know that, with the appearance of a new application scenario, the technical solution provided by the embodiments of the present disclosure is equally applicable to similar technical problems. In the description of the present disclosure, unless otherwise indicated, the meaning of "a plurality" is two or more.
In the prior art, in scenes such as VR and/or AR, the three-dimensional hairstyle of the three-dimensional virtual character corresponding to a user is set using a fixed three-dimensional hairstyle template, so the similarity between the three-dimensional hairstyle and the user's actual hairstyle is low, and the display effect of the three-dimensional hairstyle corresponding to the user is poor.
Therefore, the present disclosure provides a three-dimensional hairstyle matching method, which obtains an intermediate image by performing hair detection processing on a target image including a user, and divides the intermediate image to obtain a foreground image including hair, and finally identifies the foreground image through a trained hairstyle matching network to obtain a three-dimensional hairstyle matched with hair included in the foreground image.
Before describing the scheme of the present disclosure in detail, the electronic device of the present disclosure is introduced. It should be noted that the electronic device of the present disclosure may be a terminal device, a server, or another device; this embodiment is not limited herein. The structure of the electronic device is described first below.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 1, an electronic device in an embodiment of the present disclosure includes: Radio Frequency (RF) circuit 110, power supply 120, processor 130, memory 140, input unit 150, display unit 160, camera 170, communication interface 180, and Wireless Fidelity (WiFi) module 190.
It will be appreciated by those skilled in the art that the structure of the electronic device shown in fig. 1 does not constitute a limitation of the electronic device, and that the electronic device provided by the embodiments of the present disclosure may include more or fewer components than illustrated, may combine certain components, or may have a different arrangement of components.
The following describes the respective constituent elements of the electronic apparatus 100 in detail with reference to fig. 1:
the RF circuitry 110 may be used for receiving and transmitting data during a communication or session. Specifically, the RF circuit 110 receives downlink data of the base station and sends the downlink data to the processor 130 for processing; in addition, uplink data to be transmitted is transmitted to the base station. Typically, the RF circuitry 110 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like.
In addition, the RF circuit 110 may also communicate with networks and other terminals through wireless communication. The wireless communication may use any communication standard or protocol including, but not limited to, global system for mobile communications (Global System of Mobile communication, GSM), general packet radio service (General Packet Radio Service, GPRS), code division multiple access (Code Division Multiple Access, CDMA), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA), long term evolution (Long Term Evolution, LTE), email, short message service (Short Messaging Service, SMS), and the like.
The WiFi technology belongs to a short-distance wireless transmission technology, and the electronic device 100 can be connected to an Access Point (AP) through the WiFi module 190, so as to realize Access to a data network. The WiFi module 190 may be used for receiving and transmitting data during communication.
The electronic device 100 may be physically connected to other terminals through the communication interface 180. Optionally, the communication interface 180 is connected to the communication interfaces of the other terminals through a cable, so as to implement data transmission between the electronic device 100 and the other terminals.
Since the electronic device 100 implements communication services, it needs a data transmission function; that is, the electronic device 100 must include a communication module. Although fig. 1 shows the RF circuit 110, the WiFi module 190, and the communication interface 180, it is understood that at least one of these components, or another communication module (such as a Bluetooth module) for implementing communication, exists in the electronic device 100 for data transmission.
For example, when the electronic device 100 is a mobile phone, the electronic device 100 may include the RF circuit 110 and may further include the WiFi module 190; when the electronic device 100 is a computer, the electronic device 100 may include the communication interface 180 and may further include the WiFi module 190; and when the electronic device 100 is a tablet computer, the electronic device 100 may include the WiFi module 190.
The memory 140 may be used to store software programs and modules. The processor 130 executes various functional applications and data processing of the electronic device 100 by running the software programs and modules stored in the memory 140, and when the processor 130 executes the program code in the memory 140, some or all of the processes of fig. 2 of the embodiments of the present disclosure may be implemented.
Alternatively, the memory 140 may mainly include a storage program area and a storage data area. The storage program area may store an operating system, various application programs (such as a communication application), various modules for performing WLAN connection, and the like; the storage data area may store data created according to the use of the terminal, etc.
In addition, the memory 140 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 150 may be used to receive numeric or character information input by a user and to generate key signal inputs related to user settings and function controls of the electronic device 100.
Alternatively, the input unit 150 may include a touch panel 151 and other input devices 152.
The touch panel 151, also referred to as a touch screen, may collect touch operations on or near it (such as operations performed by a user with a finger, a stylus, or any other suitable object or accessory on or near the touch panel 151) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 151 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the touch point coordinates to the processor 130, and can receive and execute commands sent by the processor 130. Further, the touch panel 151 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave.
Alternatively, the other input devices 152 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc.
The display unit 160 may be used to display information input by a user or provided to the user and various menus of the electronic device 100. The display unit 160 is a display system of the electronic device 100, and is configured to present an interface to implement man-machine interaction.
The display unit 160 may include a display panel 161. Alternatively, the display panel 161 may be configured in the form of a liquid crystal display unit (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
Further, the touch panel 151 may cover the display panel 161. When the touch panel 151 detects a touch operation on or near it, the touch operation is transmitted to the processor 130 to determine the type of the touch event, and the processor 130 then provides a corresponding visual output on the display panel 161 according to the type of the touch event.
Although in fig. 1, the touch panel 151 and the display panel 161 are two independent components to implement the input and output functions of the electronic device 100, in some embodiments, the touch panel 151 and the display panel 161 may be integrated to implement the input and output functions of the electronic device 100.
The processor 130 is a control center of the electronic device 100, connects various components using various interfaces and lines, and performs various functions of the electronic device 100 and processes data by running or executing software programs and/or modules stored in the memory 140 and calling data stored in the memory 140, thereby implementing various services based on the electronic device.
Optionally, the processor 130 may include one or more processing units. Alternatively, the processor 130 may integrate an application processor and a modem processor, wherein the application processor primarily processes operating systems, user interfaces, applications, etc., and the modem processor primarily processes wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 130.
The camera 170 is configured to implement a shooting function of the electronic device 100, and shoot pictures or videos.
The electronic device 100 further comprises a power source 120, such as a battery, for powering the various components. Optionally, the power source 120 may be logically connected to the processor 130 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
Although not shown, the electronic device 100 may further include at least one sensor, which is not described herein.
The scheme is described in detail below with reference to the drawings. As shown in fig. 2, the flow of the three-dimensional hairstyle matching method of the present disclosure may include the following steps:
step 201: performing hair detection on a target image containing a user to obtain an intermediate image, wherein the intermediate image is an image containing an area where hair is located;
in one embodiment, the intermediate image may be obtained by:
dividing the target image into a designated number of image blocks; and respectively carrying out hair detection processing on each image block according to a preset sequence to obtain the intermediate image.
The method for carrying out hair detection processing on any image block comprises the following steps:
extracting features of the image block to obtain a histogram of oriented gradients (HOG) feature and a local ternary pattern (LTP) feature of the image block; inputting the HOG feature and the LTP feature into a pre-trained random forest model to obtain a classification result of the image block; and, if the classification result of the image block is the type that does not contain hair, setting the pixel values of the image block to a specified pixel value.
For example, as shown in fig. 3, image (a) in fig. 3 is the target image containing the user, and image (b) is the result of dividing the target image into a specified number of image blocks; in this embodiment, the target image is divided into 25 image blocks. Hair detection is then performed on each image block. Image blocks 1, 5, 6, 10, 11, and 15 to 25 are classified as the type that does not contain hair, so the pixel values of the pixels in those image blocks are set to 0. Image (c) is the intermediate image obtained after the hair detection processing.
It should be noted that, the specified number and the specified pixel value may be set according to a specific practical situation, and the specified number and the specified pixel value in the embodiment are only for explanation and are not limited to the specified number and the specified pixel value.
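As an illustrative sketch of this detection step (the 5x5 grid, the scikit-learn random forest, and the simplified single-neighbour LTP standing in for the full eight-neighbour local ternary pattern are all assumptions, not part of the original disclosure):

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier  # a trained model is assumed available

def block_features(block, ltp_threshold=5):
    # HOG features of one grayscale image block.
    hog_vec = hog(block, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    # Simplified LTP: code each horizontal neighbour difference as -1/0/+1
    # and histogram the codes (the full LTP uses all 8 neighbours).
    diff = block[:, 1:].astype(int) - block[:, :-1].astype(int)
    codes = np.sign(np.where(np.abs(diff) <= ltp_threshold, 0, diff))
    ltp_hist, _ = np.histogram(codes, bins=[-1.5, -0.5, 0.5, 1.5])
    return np.concatenate([hog_vec, ltp_hist / codes.size])

def hair_detect(gray, clf, grid=5, fill_value=0):
    # Split the image into grid x grid blocks, classify each block,
    # and blank out the blocks classified as "does not contain hair".
    h, w = gray.shape
    out = gray.copy()
    bh, bw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            block = gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            feat = block_features(block).reshape(1, -1)
            if clf.predict(feat)[0] == 0:  # label 0 = "no hair" (assumed convention)
                out[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = fill_value
    return out
```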
Step 202: performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair;
the preset hair segmentation algorithm may be a graph-based image segmentation algorithm Felzenszwalb for performing hair segmentation on the intermediate image.
For example, the intermediate image obtained in fig. 3 is subjected to hair segmentation, and the result may be the foreground image in fig. 4 containing the hair.
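A minimal sketch of this segmentation step using scikit-image's felzenszwalb; the parameter values and the majority-overlap rule for selecting hair segments are assumptions:

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def segment_hair(intermediate, scale=100, sigma=0.8, min_size=50):
    # Graph-based Felzenszwalb segmentation of the RGB intermediate image.
    labels = felzenszwalb(intermediate, scale=scale, sigma=sigma, min_size=min_size)
    # Blocks without hair were blanked to 0 in the detection step, so keep
    # only the segments lying mostly inside the non-blank (hair) region.
    detected = intermediate.sum(axis=2) > 0
    foreground = np.zeros_like(intermediate)
    for lab in np.unique(labels):
        mask = labels == lab
        if detected[mask].mean() > 0.5:  # majority-overlap heuristic (assumed)
            foreground[mask] = intermediate[mask]
    return foreground
```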
Step 203: and inputting the foreground image into a pre-trained hairstyle matching network for recognition to obtain a three-dimensional hairstyle matched with hair contained in the foreground image.
In one embodiment, step 203 may be implemented as follows: performing feature extraction on the foreground image by using the hairstyle matching network to obtain first hairstyle characteristic information; comparing the first hairstyle characteristic information with the second hairstyle characteristic information of the two-dimensional hairstyle images corresponding to each three-dimensional hairstyle to obtain the similarity between them; and determining the three-dimensional hairstyle corresponding to the second hairstyle characteristic information with the highest similarity to the first hairstyle characteristic information as the three-dimensional hairstyle matched with the hair contained in the foreground image.
Each three-dimensional hairstyle is established in advance: common hairstyles are collected and classified by type, such as hair length and curly or straight hair, and a three-dimensional hairstyle is built for each class. Each three-dimensional hairstyle is then converted into corresponding two-dimensional hairstyle images, which may include a frontal two-dimensional hairstyle image, a left two-dimensional hairstyle image, a right two-dimensional hairstyle image, and the like. The second hairstyle characteristic information of each three-dimensional hairstyle is then extracted from its two-dimensional hairstyle images.
The first hairstyle characteristic information and the second hairstyle characteristic information may include hair length characteristic information, curly/straight hair characteristic information, shape characteristic information, and the like, and may be set according to the specific situation; this embodiment is not limited herein.
For example, suppose the preset three-dimensional hairstyles include three-dimensional hairstyle 1, three-dimensional hairstyle 2, and three-dimensional hairstyle 3. Features are extracted from the two-dimensional images corresponding to each three-dimensional hairstyle to obtain second hairstyle characteristic information 1, 2, and 3, respectively, and features are extracted from the target image to obtain the first hairstyle characteristic information. If the similarity between the first hairstyle characteristic information and second hairstyle characteristic information 1 is 30%, the similarity with second hairstyle characteristic information 2 is 70%, and the similarity with second hairstyle characteristic information 3 is 90%, then the hairstyle corresponding to second hairstyle characteristic information 3 best matches the hairstyle in the target image, and three-dimensional hairstyle 3 is determined as the three-dimensional hairstyle matched with the hairstyle in the target image.
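A sketch of the matching logic; cosine similarity is an assumption, since the disclosure does not fix a particular similarity measure:

```python
import numpy as np

def match_hairstyle(first_feat, second_feats):
    # second_feats: {style_id: feature vector extracted from that style's
    # 2D hairstyle images}. Returns the best style id and all similarities.
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = {sid: cosine(first_feat, feat) for sid, feat in second_feats.items()}
    best = max(sims, key=sims.get)
    return best, sims

# With similarities of 0.3, 0.7, and 0.9, style 3 is selected, as in the example above.
```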
In one embodiment, the hairstyle matching network may be trained by:
Acquiring three-dimensional hairstyle training samples, wherein each three-dimensional hairstyle training sample comprises foreground images of the same hairstyle and the labeled three-dimensional hairstyle corresponding to those foreground images, and the hairstyles of the foreground images in different three-dimensional hairstyle training samples are different;
the following steps are performed for any one three-dimensional hairstyle training sample:
inputting the three-dimensional hairstyle training sample into the hairstyle matching network, and extracting features from the foreground image in the training sample to obtain first hairstyle characteristic information; comparing the first hairstyle characteristic information with the second hairstyle characteristic information of the two-dimensional images corresponding to each three-dimensional hairstyle to obtain their similarity; determining the three-dimensional hairstyle corresponding to the second hairstyle characteristic information with the highest similarity to the first hairstyle characteristic information; comparing the determined three-dimensional hairstyle with the labeled three-dimensional hairstyle to obtain an error value; and, if the error value does not meet the specified condition, adjusting the training parameters of the hairstyle matching network and returning to the step of inputting the three-dimensional hairstyle training sample into the hairstyle matching network and extracting features from the foreground image, until the error value meets the specified condition, at which point training of the hairstyle matching network ends.
The specified condition may be that the error value is not greater than a specified value. The specified value may be set according to the actual situation; this embodiment is not limited herein.
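A hedged sketch of one training step in PyTorch; treating the similarities as logits for a cross-entropy loss is an assumption, since the disclosure only requires an error value checked against a specified condition:

```python
import torch
import torch.nn.functional as F

def train_step(net, optimizer, foreground, style_feats, labeled_style):
    # foreground: image tensor batch of shape (1, C, H, W);
    # style_feats: (num_styles, D) second-hairstyle features of the 2D images;
    # labeled_style: LongTensor of shape (1,) with the annotated style index.
    first_feat = net(foreground)                         # (1, D) feature vector
    sims = F.cosine_similarity(first_feat, style_feats)  # (num_styles,) by broadcasting
    loss = F.cross_entropy(sims.unsqueeze(0), labeled_style)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                     # adjusts the training parameters
    return loss.item()                                   # the "error value"
```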
In order to make the determined three-dimensional hairstyle more similar to the hairstyle in the target image, in one embodiment, after step 203 is performed, the hair color in the foreground image is extracted, and the color of the three-dimensional hairstyle is filled with the hair color to obtain the target three-dimensional hairstyle.
For example, if the extracted hair color is brown, the color of the matched three-dimensional hairstyle can be filled with brown, so that the obtained target three-dimensional hairstyle has higher similarity with the user hairstyle in the target image.
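A sketch of the color-filling step; taking the mean color of the foreground hair pixels is an assumed heuristic:

```python
import numpy as np

def fill_hair_color(foreground, hairstyle_colors):
    # Mean color of the non-blank foreground pixels stands in for "the hair
    # color in the foreground image"; hairstyle_colors is the (V, 3) per-vertex
    # (or per-texel) color array of the matched 3D hairstyle.
    mask = foreground.sum(axis=2) > 0
    hair_color = foreground[mask].mean(axis=0)
    hairstyle_colors[:] = hair_color  # fill the whole matched style with it
    return hair_color
```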
In order to make the material of the target three-dimensional hairstyle closer to that of real hair, in one embodiment, after the target three-dimensional hairstyle is obtained, the three-dimensional hairstyle rendering adjustment mode corresponding to the target three-dimensional hairstyle is determined using a preset correspondence between target three-dimensional hairstyles and three-dimensional hairstyle rendering adjustment modes, where the three-dimensional hairstyle rendering adjustment mode comprises at least one of basic color adjustment, scattering adjustment, highlight adjustment, tangent adjustment, backlight adjustment, ambient light shielding adjustment, and pixel depth offset adjustment; the target three-dimensional hairstyle is then rendered using the determined three-dimensional hairstyle rendering adjustment mode to obtain the rendered target three-dimensional hairstyle.
Different target three-dimensional hairstyles correspond to different three-dimensional hairstyle rendering adjustment modes; for example, the rendering adjustment mode varies with the length and color of the hair and with whether the hair is curly or straight.
The three-dimensional hairstyle rendering and adjusting mode is described as follows:
(1) Basic color adjustment:
The color of the basic color map in the target three-dimensional hairstyle is adjusted using a linear gradient function (linear gradient) to obtain a first target three-dimensional hairstyle, and the brightness of the basic color map in the first target three-dimensional hairstyle is adjusted using a preset algorithm to obtain a second target three-dimensional hairstyle.
The three-dimensional modeling texture of the second target three-dimensional hairstyle is then extracted and downsampled to obtain a mask map; the environmental shielding color map in the second target three-dimensional hairstyle is processed with the mask map to obtain the target three-dimensional hairstyle with the basic color adjusted.
The basic color adjustment in this embodiment simulates the effect of real hair being lighter at the center and gradually darker toward the edges. With the colors mixed and the edge color and environmental shielding color added, the color of the target three-dimensional hairstyle finally presents a vivid, realistic effect.
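A numpy sketch of this adjustment; the gradient direction, the gamma curve standing in for the unnamed preset brightness algorithm, and the 4x downsampling factor are all assumptions:

```python
import numpy as np

def adjust_base_color(base_color, modeling_texture, shielding_map, top=1.1, bottom=0.7):
    h, w = base_color.shape[:2]
    # (a) Linear gradient: lighter toward the top, darker toward the ends.
    gradient = np.linspace(top, bottom, h)[:, None, None]
    first = base_color * gradient
    # (b) Brightness adjustment; gamma stands in for the preset algorithm.
    second = np.clip(first, 0.0, 1.0) ** 0.9
    # (c) Downsample the 3D modeling texture to a coarse mask map.
    coarse = modeling_texture[::4, ::4]
    mask = np.repeat(np.repeat(coarse, 4, axis=0), 4, axis=1)[:h, :w]
    # (d) Attenuate the environmental shielding color map with the mask.
    adjusted_shielding = shielding_map * mask[..., None]
    return second, adjusted_shielding
```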
(2) Scattering adjustment:
The scattering color map in the target three-dimensional hairstyle is processed with a linear gradient function to obtain a third target three-dimensional hairstyle; the brightness of the basic color map in the third target three-dimensional hairstyle is adjusted with a preset algorithm to obtain a fourth target three-dimensional hairstyle; and the pixel value of each pixel point in the basic color map of the fourth target three-dimensional hairstyle is multiplied by a first specified scaling factor to obtain the scattering-adjusted target three-dimensional hairstyle.
The scattering-adjusted target three-dimensional hairstyle in this embodiment simulates the brightness variation of real hair and the differing degrees to which light penetrates the hair, achieving a more realistic effect.
(3) Highlight adjustment:
The highlight map in the target three-dimensional hairstyle is processed with a linear gradient function to obtain a fifth target three-dimensional hairstyle; the brightness of the highlight map in the fifth target three-dimensional hairstyle is adjusted with a preset algorithm to obtain a sixth target three-dimensional hairstyle; and the pixel value of each pixel point in the highlight map of the sixth target three-dimensional hairstyle is multiplied by a second specified scaling factor to obtain the highlight-adjusted target three-dimensional hairstyle.
The highlight-adjusted target three-dimensional hairstyle in this embodiment simulates the highlight effect of real hair.
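Adjustments (2) and (3) follow the same three operations (linear gradient, brightness adjustment, multiplication by a specified scaling factor), applied to the scattering color map and the highlight map respectively. A sketch of that shared pattern, with the gradient endpoints and gamma as assumptions:

```python
import numpy as np

def gradient_brightness_scale(tex, scale_factor, top=1.0, bottom=0.6, gamma=0.9):
    # tex: (H, W, 3) map. Apply a vertical linear gradient, a gamma brightness
    # adjustment (standing in for the preset algorithm), then scale every pixel.
    h = tex.shape[0]
    gradient = np.linspace(top, bottom, h)[:, None, None]
    out = np.clip(tex * gradient, 0.0, 1.0) ** gamma
    return out * scale_factor

# scattering_adjusted = gradient_brightness_scale(scattering_map, first_factor)
# highlight_adjusted  = gradient_brightness_scale(highlight_map, second_factor)
```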
(4) Tangent adjustment:
The basic color map in the target three-dimensional hairstyle is multiplied by the sampling noise map in the target three-dimensional hairstyle, and the result is added to a preset tangent vector to obtain the tangent-adjusted target three-dimensional hairstyle.
The tangent-adjusted target three-dimensional hairstyle in this embodiment simulates the microfacets of real hair and enhances the strand-like feel of the hair.
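A short sketch of this operation; the default tangent vector is an assumption:

```python
import numpy as np

def adjust_tangent(base_color, noise_map, tangent=(0.0, 1.0, 0.0)):
    # Base color map multiplied element-wise by the sampled noise map,
    # then offset by the preset tangent vector.
    return base_color * noise_map + np.asarray(tangent)
```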
(5) Backlight degree adjustment:
The ordinate data of the hair mask map in the target three-dimensional hairstyle is first extracted and inverted; shadow-scaling blending is then applied to the inverted ordinate data to obtain the backlight-adjusted target three-dimensional hairstyle.
The backlight-adjusted target three-dimensional hairstyle in this embodiment simulates the effect of light projected through the hair.
(6) Ambient light shielding adjustment:
The ordinate data of the environmental shielding color map in the target three-dimensional hairstyle is extracted, and a power transformation is applied to the ordinate data to obtain the target three-dimensional hairstyle with ambient light shielding adjusted.
The ambient-light-shielding-adjusted target three-dimensional hairstyle in this embodiment enhances the detail of the hair in the three-dimensional hairstyle.
(7) Pixel depth offset adjustment:
The rendered pixel depth of each pixel point in the target three-dimensional hairstyle is obtained and multiplied by a third specified scaling factor to obtain the target three-dimensional hairstyle with the pixel depth offset adjusted.
In this embodiment, multiplying the rendered hair pixel depth by the third specified scaling factor increases the amplitude of the depth variation, enlarging the depth differences within the target three-dimensional hairstyle; this strengthens the sense that each hair cluster lies at a different depth and makes the hair look more layered.
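Adjustments (5) to (7) are simple per-texel operations; a sketch of all three, with the blend form, power, and scaling factor as assumptions:

```python
import numpy as np

def adjust_backlight(hair_mask_v, shadow_scale=0.5):
    # (5) Invert the ordinate (V) data of the hair mask map, then blend in
    # a shadow-scaling term.
    inverted = 1.0 - hair_mask_v
    return np.clip(inverted + shadow_scale * inverted, 0.0, 1.0)

def adjust_ambient_shielding(shielding_v, power=2.0):
    # (6) Power transformation of the shielding map's ordinate data.
    return shielding_v ** power

def adjust_depth_offset(pixel_depth, third_factor=1.5):
    # (7) Scale each rendered pixel depth to widen the depth differences
    # between hair clusters.
    return pixel_depth * third_factor
```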
This embodiment does not limit the order in which the adjustments in each three-dimensional hairstyle rendering adjustment mode are applied; the order may be set according to the specific situation.
In this embodiment, the first specified scaling factor, the second specified scaling factor, and the third specified scaling factor may be the same or different, and the embodiment is not limited herein.
For a further understanding of the technical solution of the present disclosure, it is described in detail below with reference to fig. 5, and may include the following steps:
step 501: dividing the target image containing the user into a designated number of image blocks uniformly;
Step 502: respectively carrying out hair detection treatment on each image block according to a preset sequence to obtain an intermediate image;
step 503: performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair;
step 504: performing feature extraction on the foreground image by utilizing a hairstyle matching network to obtain first hairstyle feature information;
step 505: comparing the first hair style characteristic information with second hair style characteristic information of two-dimensional hair style images corresponding to three-dimensional hair styles to obtain similarity of the first hair style characteristic information and the second hair style characteristic information;
step 506: determining a three-dimensional hair style corresponding to second hair style characteristic information with highest similarity to the first hair style characteristic information as a three-dimensional hair style matched with hair contained in the foreground image;
step 507: extracting hair colors in the foreground image, and filling the colors of the three-dimensional hairstyle by utilizing the hair colors to obtain a target three-dimensional hairstyle;
step 508: determining a three-dimensional hair style rendering and adjusting mode corresponding to a target three-dimensional hair style by utilizing a corresponding relation between the preset target three-dimensional hair style and the three-dimensional hair style rendering and adjusting mode; wherein the three-dimensional hairstyle rendering adjustment mode comprises at least one of basic color adjustment, scattering adjustment, highlight adjustment, tangent adjustment, backlight adjustment, ambient light shielding adjustment and depth offset adjustment;
Step 509: and rendering the target three-dimensional hairstyle by using the determined three-dimensional hairstyle rendering and adjusting mode to obtain the rendered target three-dimensional hairstyle.
Based on the same inventive concept, the three-dimensional hairstyle matching method of the present disclosure may also be implemented by a three-dimensional hairstyle matching device. The effect of the three-dimensional hairstyle matching device is similar to that of the method described above and is not repeated here.
Fig. 6 is a schematic structural view of a three-dimensional hairstyle matching device according to one embodiment of the present disclosure.
As shown in fig. 6, a three-dimensional hairstyle matching device 600 of the present disclosure may include an intermediate image determination module 610, a hair segmentation module 620, and a matching module 630.
An intermediate image determining module 610, configured to perform hair detection processing on a target image including a user to obtain an intermediate image, where the intermediate image is an image including an area where hair is located;
the hair segmentation module 620 is configured to segment the intermediate image by using a preset hair segmentation algorithm, so as to obtain a foreground image containing the hair;
the matching module 630 is configured to input the foreground image into a pre-trained hairstyle matching network for identification, so as to obtain a three-dimensional hairstyle matched with hair contained in the foreground image;
In one embodiment, the apparatus further comprises:
the color filling module 640 is configured to extract the hair color in the foreground image after the three-dimensional hairstyle matched with the hair contained in the foreground image is obtained, and to fill the color of the three-dimensional hairstyle with the hair color to obtain the target three-dimensional hairstyle.
In one embodiment, the apparatus further comprises:
the three-dimensional hairstyle rendering adjustment mode determining module 650 is configured to determine, after the target three-dimensional hairstyle is obtained, a three-dimensional hairstyle rendering adjustment mode corresponding to the target three-dimensional hairstyle by using a preset correspondence between the target three-dimensional hairstyle and the three-dimensional hairstyle rendering adjustment mode; wherein the three-dimensional hairstyle rendering adjustment mode comprises at least one of basic color adjustment, scattering adjustment, highlight adjustment, tangent adjustment, backlight adjustment, ambient light shielding adjustment and depth offset adjustment;
the target three-dimensional hairstyle determining module 660 is configured to render the target three-dimensional hairstyle by using the determined three-dimensional hairstyle rendering adjustment mode, so as to obtain a rendered target three-dimensional hairstyle.
In one embodiment, the intermediate image determining module 610 is specifically configured to:
Dividing the target image into a designated number of image blocks;
respectively carrying out hair detection treatment on each image block according to a preset sequence to obtain the intermediate image;
the method for carrying out hair detection processing on any image block comprises the following steps:
extracting features of the image block to obtain a histogram of oriented gradients feature and a local ternary pattern feature of the image block; and
inputting the histogram of oriented gradients feature and the local ternary pattern feature into a pre-trained random forest model to obtain a classification result of the image block;
and if the classification result of the image block is a type which does not contain hair, setting the pixel value of the image block as a specified pixel value.
In one embodiment, the matching module 630 is specifically configured to:
performing feature extraction on the foreground image by utilizing a hairstyle matching network to obtain first hairstyle feature information;
comparing the first hair style characteristic information with second hair style characteristic information of two-dimensional hair style images corresponding to three-dimensional hair styles to obtain similarity of the first hair style characteristic information and the second hair style characteristic information;
and determining the three-dimensional hairstyle corresponding to the second hairstyle characteristic information with the highest similarity to the first hairstyle characteristic information as the three-dimensional hairstyle matched with the hair contained in the foreground image.
In one embodiment, the apparatus further comprises:
a hairstyle matching network training module 670 for training the hairstyle matching network by:
acquiring three-dimensional hairstyle training samples, wherein each three-dimensional hairstyle training sample comprises foreground images of the same hairstyle and the labeled three-dimensional hairstyle corresponding to those foreground images, and the hairstyles of the foreground images in different three-dimensional hairstyle training samples are different;
the following steps are performed for any one three-dimensional hairstyle training sample:
inputting the three-dimensional hairstyle training sample into a hairstyle matching network, and extracting features from the foreground image in the training sample to obtain first hairstyle characteristic information; and
comparing the first hair style characteristic information with second hair style characteristic information of two-dimensional images corresponding to three-dimensional hair styles to obtain similarity of the first hair style characteristic information and the second hair style characteristic information;
determining a three-dimensional hairstyle corresponding to second hairstyle characteristic information with highest similarity with the first hairstyle characteristic information;
comparing the determined three-dimensional hairstyle with the marked three-dimensional hairstyle to obtain an error value;
and if the error value does not meet the specified condition, adjusting the training parameters of the hairstyle matching network and returning to the step of inputting the three-dimensional hairstyle training sample into the hairstyle matching network and extracting features from the foreground image, until the error value meets the specified condition, at which point training of the hairstyle matching network ends.
In some possible embodiments, aspects of a three-dimensional hairstyle matching method provided by the present disclosure may also be implemented in the form of a program product comprising program code for causing a computer device to carry out the steps of the three-dimensional hairstyle matching method according to various exemplary embodiments of the present disclosure as described above when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The three-dimensional hairstyle matching program product of the embodiments of the present disclosure may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on an electronic device. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of a remote electronic device, the remote electronic device may be connected to the consumer electronic device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external electronic device (for example, through the internet using an internet service provider).
It should be noted that although several modules of the apparatus are mentioned in the detailed description above, this division is merely exemplary and not mandatory. Indeed, the features and functions of two or more modules described above may be embodied in one module in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module described above may be further divided into a plurality of modules to be embodied.
Furthermore, although the operations of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk computer storage media, CD-ROM, optical computer storage media, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present disclosure without departing from the spirit or scope of the disclosure. Thus, the present disclosure is intended to include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (6)

1. An electronic device, comprising a processor and a display unit;
wherein the processor is configured to:
performing hair detection on a target image containing a user to obtain an intermediate image, wherein the intermediate image is an image containing an area where hair is located;
performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair;
inputting the foreground image into a pre-trained hairstyle matching network for recognition to obtain a three-dimensional hairstyle matched with hair contained in the foreground image, extracting hair color in the foreground image, filling the color of the three-dimensional hairstyle by using the hair color to obtain a target three-dimensional hairstyle, determining a three-dimensional hairstyle rendering adjustment mode corresponding to the target three-dimensional hairstyle by using a preset corresponding relation between the target three-dimensional hairstyle and the three-dimensional hairstyle rendering adjustment mode, and rendering the target three-dimensional hairstyle by using the determined three-dimensional hairstyle rendering adjustment mode to obtain a rendered target three-dimensional hairstyle; the three-dimensional hairstyle rendering adjustment mode comprises at least one of basic color adjustment, scattering adjustment, highlight adjustment, tangent adjustment, backlight adjustment, ambient light shielding adjustment and pixel depth offset adjustment;
the basic color adjustment is to adjust the color of the basic color map in the target three-dimensional hairstyle by using a linear gradient function to obtain a first target three-dimensional hairstyle; adjust the brightness of the basic color map in the first target three-dimensional hairstyle by using a preset algorithm to obtain a second target three-dimensional hairstyle; extract a three-dimensional modeling texture of the second target three-dimensional hairstyle and downsample the three-dimensional modeling texture to obtain a mask map; and process the environmental shielding color map in the second target three-dimensional hairstyle by using the mask map to obtain the target three-dimensional hairstyle with the basic color adjusted;
the scattering adjustment is to process the scattering color map in the target three-dimensional hairstyle by using a linear gradient function to obtain a third target three-dimensional hairstyle; adjust the brightness of the basic color map in the third target three-dimensional hairstyle by using a preset algorithm to obtain a fourth target three-dimensional hairstyle; and multiply the pixel value of each pixel point in the basic color map of the fourth target three-dimensional hairstyle by a first specified scaling factor to obtain the scattering-adjusted target three-dimensional hairstyle;
the highlight adjustment is: processing the highlight map in the target three-dimensional hairstyle by using a linear gradient function to obtain a fifth target three-dimensional hairstyle; adjusting the brightness of the highlight map in the fifth target three-dimensional hairstyle by using a preset algorithm to obtain a sixth target three-dimensional hairstyle; and multiplying the pixel value of each pixel point in the highlight map in the sixth target three-dimensional hairstyle by a second specified scaling factor to obtain a target three-dimensional hairstyle after highlight adjustment;
the tangent adjustment is: multiplying the basic color map in the target three-dimensional hairstyle by the sampled noise map in the target three-dimensional hairstyle, and adding a preset tangent vector to the product to obtain a target three-dimensional hairstyle after tangent adjustment;
the backlight adjustment is: extracting the ordinate data of the hair mask map in the target three-dimensional hairstyle and inverting the ordinate data; and applying shadow scaling blending to the inverted ordinate data to obtain a target three-dimensional hairstyle after backlight adjustment;
the ambient light shielding adjustment is: extracting the ordinate data of the environmental shielding color map in the target three-dimensional hairstyle and applying a power transformation to the ordinate data to obtain a target three-dimensional hairstyle after ambient light shielding adjustment;
the pixel depth offset adjustment is: obtaining the rendering pixel depth of each pixel point in the target three-dimensional hairstyle and multiplying the rendering pixel depth by a third specified scaling factor to obtain a target three-dimensional hairstyle after pixel depth offset adjustment; and
the display unit is configured to display the rendered target three-dimensional hairstyle.
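The per-pixel arithmetic named in claim 1 is conventional map manipulation. As a minimal sketch (not the patented implementation), the following NumPy fragment illustrates three of the operations: a linear gradient applied to a base color map, multiplication by a specified scaling factor (the pattern behind the scattering, highlight and pixel depth offset adjustments), and a power transform on ordinate data (the pattern behind the ambient light shielding adjustment). Map shapes, blend weights, gradient endpoint colors and factor values are all assumptions.

```python
import numpy as np

def scale_map(m: np.ndarray, factor: float) -> np.ndarray:
    """Multiply every pixel by a specified scaling factor and clamp to [0, 1]."""
    return np.clip(m * factor, 0.0, 1.0)

def power_transform(v: np.ndarray, gamma: float) -> np.ndarray:
    """Power transformation of ordinate (v-coordinate) data."""
    return np.power(np.clip(v, 0.0, 1.0), gamma)

def linear_gradient(base: np.ndarray, top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Blend a base color map toward a vertical linear color gradient (equal blend assumed)."""
    t = np.linspace(0.0, 1.0, base.shape[0])[:, None, None]  # 0 at top row, 1 at bottom row
    gradient = (1.0 - t) * top + t * bottom                   # per-row gradient color
    return np.clip(0.5 * base + 0.5 * gradient, 0.0, 1.0)

# Illustrative usage on dummy maps.
base_color = np.random.rand(4, 4, 3)                               # RGB base color map
base_color = linear_gradient(base_color, np.array([0.20, 0.10, 0.05]),
                             np.array([0.40, 0.25, 0.10]))
scattering = scale_map(base_color, 1.2)                            # first specified scaling factor
render_depth = scale_map(np.random.rand(4, 4), 0.9)                # third factor on pixel depth
occlusion = power_transform(np.linspace(0, 1, 16).reshape(4, 4), 2.0)  # gamma value assumed
```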
2. The electronic device of claim 1, wherein, when performing the hair detection on the target image containing the user to obtain the intermediate image, the processor is specifically configured to perform:
dividing the target image into a specified number of image blocks;
performing hair detection processing on each image block respectively in a preset order to obtain the intermediate image;
wherein the hair detection processing on any image block comprises the following steps:
extracting features of the image block to obtain a histogram of oriented gradients feature and a local ternary pattern feature of the image block; and
inputting the histogram of oriented gradients feature and the local ternary pattern feature into a pre-trained random forest model to obtain a classification result of the image block;
and if the classification result of the image block is a class that does not contain hair, setting the pixel value of the image block to a specified pixel value.
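A hedged sketch of the block pipeline in claim 2, under assumptions the claim does not fix: 32x32 grayscale blocks, scikit-image's HOG descriptor, a deliberately simplified local ternary pattern histogram in place of a full LTP encoding, and synthetic training data. It shows the shape of the flow (features in, random forest classification out, non-hair blocks set to a specified pixel value), not the patented detector.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier

def ltp_histogram(block: np.ndarray, t: float = 0.1) -> np.ndarray:
    """Histogram of {-1, 0, +1} codes from comparing each pixel's 8 neighbours
    against the centre with tolerance t (a simplification of full LTP)."""
    c = block[1:-1, 1:-1]
    codes = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            n = block[1 + dy:block.shape[0] - 1 + dy, 1 + dx:block.shape[1] - 1 + dx]
            diff = n - c
            codes.append(np.sign(diff) * (np.abs(diff) > t))
    hist, _ = np.histogram(np.stack(codes), bins=3, range=(-1.5, 1.5))
    return hist / hist.sum()

def block_features(block: np.ndarray) -> np.ndarray:
    """Concatenated HOG + simplified-LTP feature vector for one image block."""
    h = hog(block, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([h, ltp_histogram(block)])

# Fit the random forest on labelled blocks (hair = 1, non-hair = 0); data is synthetic here.
rng = np.random.default_rng(0)
blocks = rng.random((20, 32, 32))
labels = rng.integers(0, 2, 20)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(np.stack([block_features(b) for b in blocks]), labels)

# A block classified as non-hair gets a specified pixel value (0 assumed).
test_block = rng.random((32, 32))
if clf.predict([block_features(test_block)])[0] == 0:
    test_block[:] = 0.0
```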
3. The electronic device of claim 1, wherein, when inputting the foreground image into the pre-trained hairstyle matching network for recognition to obtain the three-dimensional hairstyle matched with the hair contained in the foreground image, the processor is specifically configured to perform:
performing feature extraction on the foreground image by using the hairstyle matching network to obtain first hairstyle feature information;
comparing the first hairstyle feature information with second hairstyle feature information of two-dimensional hairstyle images corresponding to respective three-dimensional hairstyles to obtain the similarity between the first hairstyle feature information and each piece of second hairstyle feature information; and
determining the three-dimensional hairstyle corresponding to the second hairstyle feature information with the highest similarity to the first hairstyle feature information as the three-dimensional hairstyle matched with the hair contained in the foreground image.
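Claim 3 does not name the similarity measure. Assuming cosine similarity over fixed-length feature vectors, the highest-similarity lookup reduces to a few lines; the gallery, identifiers and 128-dimensional features below are illustrative.

```python
import numpy as np

def match_hairstyle(query: np.ndarray, gallery: np.ndarray, ids: list):
    """Return the 3D hairstyle whose 2D-image feature is most similar to the query."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                     # cosine similarity against every stored hairstyle
    best = int(np.argmax(sims))      # second feature with the highest similarity
    return ids[best], float(sims[best])

rng = np.random.default_rng(1)
gallery_feats = rng.random((3, 128))   # second hairstyle feature information
best_id, best_sim = match_hairstyle(rng.random(128), gallery_feats,
                                    ["style_a", "style_b", "style_c"])
```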
4. The electronic device of claim 1, wherein the processor is further configured to:
training the hairstyle matching network by:
acquiring three-dimensional hairstyle training samples, wherein each three-dimensional hairstyle training sample comprises a foreground image of one hairstyle and a labeled three-dimensional hairstyle corresponding to the foreground image, and the hairstyles of the foreground images in different three-dimensional hairstyle training samples are different;
the following steps are performed for any one three-dimensional hairstyle training sample:
inputting the three-dimensional hairstyle training sample into the hairstyle matching network, and performing feature extraction on the foreground image in the three-dimensional hairstyle training sample to obtain first hairstyle feature information; and
comparing the first hairstyle feature information with second hairstyle feature information of the two-dimensional hairstyle images corresponding to respective three-dimensional hairstyles to obtain the similarity between the first hairstyle feature information and each piece of second hairstyle feature information;
determining the three-dimensional hairstyle corresponding to the second hairstyle feature information with the highest similarity to the first hairstyle feature information;
comparing the determined three-dimensional hairstyle with the labeled three-dimensional hairstyle to obtain an error value; and
if the error value does not meet a specified condition, adjusting the training parameters of the hairstyle matching network and returning to the step of inputting the three-dimensional hairstyle training sample into the hairstyle matching network and performing feature extraction on the foreground image in the three-dimensional hairstyle training sample, until the error value meets the specified condition, at which point training of the hairstyle matching network is completed.
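A minimal training-loop sketch of the scheme in claim 4, under assumptions the claim leaves open: PyTorch, a toy linear layer standing in for the hairstyle matching network, cross-entropy over scaled cosine similarities as the error value, and a loss threshold as the specified stopping condition.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))   # stand-in matching network
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

gallery = torch.randn(5, 128)             # second features of 2D images of 5 3D hairstyles
foreground = torch.randn(1, 1, 64, 64)    # foreground image of one training sample
label = torch.tensor([2])                 # index of the labeled 3D hairstyle
threshold = 0.05                          # "specified condition" assumed as a loss bound

for step in range(200):
    feat = net(foreground)                                    # first hairstyle feature
    sims = torch.cosine_similarity(feat.unsqueeze(1), gallery.unsqueeze(0), dim=-1)
    loss = nn.functional.cross_entropy(sims * 10.0, label)    # error vs. labeled hairstyle
    if loss.item() < threshold:                               # condition met: training done
        break
    optimizer.zero_grad()
    loss.backward()                                           # adjust training parameters
    optimizer.step()
```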
5. A method of three-dimensional hairstyle matching, the method comprising:
performing hair detection on a target image containing a user to obtain an intermediate image, wherein the intermediate image is an image containing an area where hair is located;
performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair;
inputting the foreground image into a pre-trained hairstyle matching network for recognition to obtain a three-dimensional hairstyle matched with the hair contained in the foreground image, extracting the hair color in the foreground image, filling the color of the three-dimensional hairstyle with the hair color to obtain a target three-dimensional hairstyle, determining a three-dimensional hairstyle rendering adjustment mode corresponding to the target three-dimensional hairstyle according to a preset correspondence between target three-dimensional hairstyles and three-dimensional hairstyle rendering adjustment modes, and rendering the target three-dimensional hairstyle in the determined three-dimensional hairstyle rendering adjustment mode to obtain a rendered target three-dimensional hairstyle; wherein the three-dimensional hairstyle rendering adjustment mode comprises at least one of basic color adjustment, scattering adjustment, highlight adjustment, tangent adjustment, backlight adjustment, ambient light shielding adjustment and pixel depth offset adjustment;
wherein the basic color adjustment is: adjusting the color of the basic color map in the target three-dimensional hairstyle by using a linear gradient function to obtain a first target three-dimensional hairstyle; adjusting the brightness of the basic color map in the first target three-dimensional hairstyle by using a preset algorithm to obtain a second target three-dimensional hairstyle; extracting a three-dimensional modeling texture of the second target three-dimensional hairstyle and performing sampling-removal processing on the three-dimensional modeling texture to obtain a mask map; and processing the environmental shielding color map in the second target three-dimensional hairstyle by using the mask map to obtain a target three-dimensional hairstyle after basic color adjustment;
the scattering adjustment is: processing the scattering color map in the target three-dimensional hairstyle by using a linear gradient function to obtain a third target three-dimensional hairstyle; adjusting the brightness of the basic color map in the third target three-dimensional hairstyle by using a preset algorithm to obtain a fourth target three-dimensional hairstyle; and multiplying the pixel value of each pixel point in the basic color map in the fourth target three-dimensional hairstyle by a first specified scaling factor to obtain a target three-dimensional hairstyle after scattering adjustment;
the highlight adjustment is: processing the highlight map in the target three-dimensional hairstyle by using a linear gradient function to obtain a fifth target three-dimensional hairstyle; adjusting the brightness of the highlight map in the fifth target three-dimensional hairstyle by using a preset algorithm to obtain a sixth target three-dimensional hairstyle; and multiplying the pixel value of each pixel point in the highlight map in the sixth target three-dimensional hairstyle by a second specified scaling factor to obtain a target three-dimensional hairstyle after highlight adjustment;
the tangent adjustment is: multiplying the basic color map in the target three-dimensional hairstyle by the sampled noise map in the target three-dimensional hairstyle, and adding a preset tangent vector to the product to obtain a target three-dimensional hairstyle after tangent adjustment;
the backlight adjustment is: extracting the ordinate data of the hair mask map in the target three-dimensional hairstyle and inverting the ordinate data; and applying shadow scaling blending to the inverted ordinate data to obtain a target three-dimensional hairstyle after backlight adjustment;
the ambient light shielding adjustment is: extracting the ordinate data of the environmental shielding color map in the target three-dimensional hairstyle and applying a power transformation to the ordinate data to obtain a target three-dimensional hairstyle after ambient light shielding adjustment;
and the pixel depth offset adjustment is: obtaining the rendering pixel depth of each pixel point in the target three-dimensional hairstyle and multiplying the rendering pixel depth by a third specified scaling factor to obtain a target three-dimensional hairstyle after pixel depth offset adjustment.
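The tangent and backlight adjustments in claim 5 likewise reduce to small per-pixel expressions. The claim fixes neither the blend weights nor the shadow scaling formula, so the sketch below is one plausible reading: multiply the base color map by the sampled noise map and add a preset tangent vector; invert the ordinate data of the hair mask map and mix in a quadratic shadow term.

```python
import numpy as np

def tangent_adjust(base_color: np.ndarray, noise: np.ndarray, tangent: np.ndarray) -> np.ndarray:
    """Base color map times sampled noise map, plus a preset tangent vector."""
    return base_color * noise[..., None] + tangent

def backlight_adjust(mask_v: np.ndarray, shadow: float = 0.5) -> np.ndarray:
    """Invert the ordinate data of the hair mask map, then mix in a shadow scaling term."""
    inv = 1.0 - mask_v                       # reverse processing of the ordinate data
    return np.clip((1.0 - shadow) * inv + shadow * inv ** 2, 0.0, 1.0)

rng = np.random.default_rng(2)
adjusted = tangent_adjust(rng.random((4, 4, 3)), rng.random((4, 4)),
                          np.array([0.05, 0.00, 0.02]))
backlit = backlight_adjust(np.linspace(0.0, 1.0, 16).reshape(4, 4))
```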
6. The method of claim 5, wherein the inputting the foreground image into a pre-trained hairstyle matching network for recognition to obtain a three-dimensional hairstyle matched with the hair contained in the foreground image comprises:
performing feature extraction on the foreground image by using the hairstyle matching network to obtain first hairstyle feature information;
comparing the first hairstyle feature information with second hairstyle feature information of two-dimensional hairstyle images corresponding to respective three-dimensional hairstyles to obtain the similarity between the first hairstyle feature information and each piece of second hairstyle feature information; and
determining the three-dimensional hairstyle corresponding to the second hairstyle feature information with the highest similarity to the first hairstyle feature information as the three-dimensional hairstyle matched with the hair contained in the foreground image.
CN202110658998.4A 2021-06-15 2021-06-15 Three-dimensional hairstyle matching method and electronic equipment Active CN113538455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110658998.4A CN113538455B (en) 2021-06-15 2021-06-15 Three-dimensional hairstyle matching method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110658998.4A CN113538455B (en) 2021-06-15 2021-06-15 Three-dimensional hairstyle matching method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113538455A CN113538455A (en) 2021-10-22
CN113538455B 2023-12-12

Family

ID=78095954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110658998.4A Active CN113538455B (en) 2021-06-15 2021-06-15 Three-dimensional hairstyle matching method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113538455B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8531450B2 (en) * 2008-08-28 2013-09-10 Adobe Systems Incorporated Using two dimensional image adjustment operations on three dimensional objects
WO2017181332A1 (en) * 2016-04-19 2017-10-26 浙江大学 Single image-based fully automatic 3d hair modeling method
CN109408653B (en) * 2018-09-30 2022-01-28 叠境数字科技(上海)有限公司 Human body hairstyle generation method based on multi-feature retrieval and deformation

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198515A (en) * 2013-04-18 2013-07-10 北京尔宜居科技有限责任公司 Method for immediately adjusting object illumination rendering effect in 3D scene
CN103617426A (en) * 2013-12-04 2014-03-05 东北大学 Pedestrian target detection method under interference by natural environment and shelter
CN105701853A (en) * 2014-12-15 2016-06-22 三星电子株式会社 3D rendering method and apparatus
CN105488490A (en) * 2015-12-23 2016-04-13 天津天地伟业数码科技有限公司 Judge dressing detection method based on video
CN106372333A (en) * 2016-08-31 2017-02-01 北京维盛视通科技有限公司 Method and device for displaying clothes based on face model
WO2018094653A1 (en) * 2016-11-24 2018-05-31 华为技术有限公司 User hair model re-establishment method and apparatus, and terminal
CN108513089A (en) * 2017-02-24 2018-09-07 腾讯科技(深圳)有限公司 The method and device of group's video session
CN108334823A (en) * 2018-01-19 2018-07-27 中国公路工程咨询集团有限公司 High-resolution remote sensing image container area area detecting method based on machine learning
CN111291765A (en) * 2018-12-07 2020-06-16 北京京东尚科信息技术有限公司 Method and device for determining similar pictures
CN110060324A (en) * 2019-03-22 2019-07-26 北京字节跳动网络技术有限公司 Image rendering method, device and electronic equipment
CN110796721A (en) * 2019-10-31 2020-02-14 北京字节跳动网络技术有限公司 Color rendering method and device of virtual image, terminal and storage medium
CN111182350A (en) * 2019-12-31 2020-05-19 广州华多网络科技有限公司 Image processing method, image processing device, terminal equipment and storage medium
CN111612820A (en) * 2020-05-15 2020-09-01 北京百度网讯科技有限公司 Multi-target tracking method, and training method and device of feature extraction model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DeepSketchHair: Deep Sketch-based 3D Hair Modeling; Yuefan Shen et al.; IEEE Transactions on Visualization and Computer Graphics; 1-14 *
Data-driven 3D human head reconstruction; He Huayun; China Master's Theses Full-text Database, Information Science and Technology (No. 12); I138-1745 *

Also Published As

Publication number Publication date
CN113538455A (en) 2021-10-22

Similar Documents

Publication Publication Date Title
CN110276344B (en) Image segmentation method, image recognition method and related device
CN110689500B (en) Face image processing method and device, electronic equipment and storage medium
US9741137B2 (en) Image-based color palette generation
US9552656B2 (en) Image-based color palette generation
US9177391B1 (en) Image-based color palette generation
US9311889B1 (en) Image-based color palette generation
WO2016165615A1 (en) Expression specific animation loading method in real-time video and electronic device
CN108875594B (en) Face image processing method, device and storage medium
CN110796721A (en) Color rendering method and device of virtual image, terminal and storage medium
CN110689479B (en) Face makeup method, device, equipment and medium
CN110443769A (en) Image processing method, image processing apparatus and terminal device
CN108701355A GPU-optimized online skin likelihood estimation based on a single Gaussian model
CN108694719A (en) image output method and device
CN112532882B (en) Image display method and device
CN109255784B (en) Image processing method and device, electronic equipment and storage medium
CN108494996A (en) Image processing method, device, storage medium and mobile terminal
CN113453027B (en) Live video and virtual make-up image processing method and device and electronic equipment
CN115375835A (en) Three-dimensional model establishing method based on two-dimensional key points, computer and storage medium
CN112581395A (en) Image processing method, image processing device, electronic equipment and storage medium
CN108683845A (en) Image processing method, device, storage medium and mobile terminal
CN117455753B (en) Special effect template generation method, special effect generation device and storage medium
CN114463212A (en) Image processing method and device, electronic equipment and storage medium
CN107798716A Image effect extraction
CN114120413A (en) Model training method, image synthesis method, device, equipment and program product
WO2022072197A1 (en) Object relighting using neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant