CN113538455A - Three-dimensional hairstyle matching method and electronic equipment


Info

Publication number
CN113538455A
Authority
CN
China
Prior art keywords: dimensional, hairstyle, hair, image, hair style
Prior art date
Legal status
Granted
Application number
CN202110658998.4A
Other languages
Chinese (zh)
Other versions
CN113538455B (en
Inventor
朱家林
Current Assignee
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Juhaokan Technology Co Ltd
Priority to CN202110658998.4A
Publication of CN113538455A
Application granted
Publication of CN113538455B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/243 Classification techniques relating to the number of classes
    • G06F18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a three-dimensional hairstyle matching method and an electronic device. The method comprises the following steps: performing hair detection on a target image containing a user to obtain an intermediate image, the intermediate image being an image containing the region where the hair is located; performing hair segmentation on the intermediate image with a preset hair segmentation algorithm to obtain a foreground image containing the hair; and inputting the foreground image into a pre-trained hairstyle matching network for recognition to obtain a three-dimensional hairstyle matching the hair contained in the foreground image. By detecting and segmenting the user's hair to determine the user's hairstyle and then matching it to a three-dimensional hairstyle, the method improves both the similarity between the three-dimensional hairstyle and the user's actual hairstyle and the display effect of the three-dimensional hairstyle.

Description

Three-dimensional hairstyle matching method and electronic equipment
Technical Field
The present disclosure relates to the field of three-dimensional digital technology, and in particular to a three-dimensional hairstyle matching method and an electronic device.
Background
With the rapid development of the internet, VR (Virtual Reality) and/or AR (Augmented Reality) applications are becoming increasingly popular. Whether in virtual social networking or other three-dimensional digital applications, scenarios across many industries, such as virtual concerts, virtual broadcasters, virtual live-stream shopping, and virtual tour guides, have begun to enter public view; all of these AR and/or VR applications therefore depend on realistic, personalized virtual characters.
In the prior art, in VR and/or AR scenarios, the three-dimensional hairstyle of the three-dimensional virtual character corresponding to a user is set using a fixed three-dimensional hairstyle template. As a result, the similarity between the user's three-dimensional hairstyle and the user's actual hairstyle is low, and the display effect of the user's three-dimensional hairstyle is poor.
Disclosure of Invention
The exemplary embodiments of the present disclosure provide a three-dimensional hairstyle matching method and an electronic device, which are used to improve the display effect of the three-dimensional hairstyle corresponding to a user and the similarity between that three-dimensional hairstyle and the user's actual hairstyle.
A first aspect of the present disclosure provides a three-dimensional hairstyle matching method, the method comprising:
carrying out hair detection processing on a target image containing a user to obtain an intermediate image, wherein the intermediate image is an image containing an area where hair is located;
performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair;
and inputting the foreground image into a pre-trained hairstyle matching network for recognition to obtain a three-dimensional hairstyle matched with hair contained in the foreground image.
In this embodiment, a target image including a user is subjected to hair detection processing to obtain an intermediate image, the intermediate image is segmented to obtain a foreground image including hair, and finally, the foreground image is identified through a trained hair style matching network to obtain a three-dimensional hair style matched with the hair included in the foreground image.
A second aspect of the present disclosure provides an electronic device comprising a processor and a display unit;
wherein the processor is configured to:
carrying out hair detection processing on a target image containing a user to obtain an intermediate image, wherein the intermediate image is an image containing an area where hair is located;
performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair;
inputting the foreground image into a pre-trained hairstyle matching network for recognition to obtain a three-dimensional hairstyle matched with hair contained in the foreground image;
the display unit is configured to display the three-dimensional hairstyle.
According to a third aspect provided by embodiments of the present disclosure, there is provided a computer storage medium storing a computer program for executing the method according to the first aspect.
Drawings
To illustrate the technical solutions in the embodiments of the present disclosure more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present disclosure, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 2 is a first flowchart of a three-dimensional hairstyle matching method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a procedure for determining an intermediate image in a three-dimensional hair style matching method according to an embodiment of the present disclosure;
FIG. 4 is a schematic three-dimensional hairstyle view of a three-dimensional hairstyle matching method according to an embodiment of the present disclosure;
FIG. 5 is a second flowchart of a three-dimensional hair style matching method according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a three-dimensional hairstyle matching apparatus according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art from the embodiments disclosed herein without creative effort shall fall within the protection scope of the present disclosure.
The term "and/or" in the embodiments of the present disclosure describes an association relationship of associated objects, and means that there may be three relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The application scenarios described in the embodiments of the present disclosure are intended to illustrate the technical solutions more clearly and do not limit them; as a person of ordinary skill in the art will appreciate, the technical solutions provided herein are equally applicable to similar technical problems as new application scenarios emerge. In the description of the present disclosure, the term "plurality" means two or more unless otherwise specified.
In the prior art, in VR and/or AR scenarios, the three-dimensional hairstyles of the three-dimensional virtual characters corresponding to users are set using fixed three-dimensional hairstyle templates, so the similarity between a user's three-dimensional hairstyle and the user's actual hairstyle is low, and the display effect of the user's three-dimensional hairstyle is poor.
Therefore, the present disclosure provides a three-dimensional hairstyle matching method: hair detection is performed on a target image containing a user to obtain an intermediate image; the intermediate image is segmented to obtain a foreground image containing the hair; and finally the foreground image is recognized by a trained hairstyle matching network to obtain a three-dimensional hairstyle that matches the hair contained in the foreground image.
Before describing the scheme of the present disclosure in detail, the electronic device of the present disclosure is introduced. It should be noted that the electronic device may be a terminal device or a server; this embodiment is not limited in this respect. First, the structure of the electronic device is explained.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 1, an electronic device in an embodiment of the present disclosure includes: a Radio Frequency (RF) circuit 110, a power supply 120, a processor 130, a memory 140, an input unit 150, a display unit 160, a camera 170, a communication interface 180, and a Wireless Fidelity (WiFi) module 190.
Those skilled in the art will appreciate that the configuration of the electronic device shown in fig. 1 does not constitute a limitation of the electronic device, and that embodiments of the present disclosure provide electronic devices that may include more or fewer components than those shown, or that certain components may be combined, or that a different arrangement of components may be provided.
The following describes each component of the electronic device 100 in detail with reference to fig. 1:
the RF circuit 110 may be used for receiving and transmitting data during a communication or conversation. Specifically, the RF circuit 110 sends the downlink data of the base station to the processor 130 for processing after receiving the downlink data; and in addition, sending the uplink data to be sent to the base station. Generally, the RF circuit 110 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
In addition, the RF circuitry 110 may also communicate with networks and other terminals via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
WiFi is a short-range wireless transmission technology; the electronic device 100 accesses a data network through an access point (AP) to which the WiFi module 190 can connect. The WiFi module 190 may be used to receive and transmit data during communication.
The electronic device 100 may be physically connected to other terminals through the communication interface 180. Optionally, the communication interface 180 is connected to the communication interface of the other terminal through a cable, so as to implement data transmission between the electronic device 100 and the other terminal.
To implement communication services, the electronic device 100 needs a data transmission function, that is, it must contain a communication module. Although fig. 1 shows communication modules such as the RF circuit 110, the WiFi module 190, and the communication interface 180, it is understood that the electronic device 100 contains at least one of these components, or another communication module (such as a Bluetooth module), for data transmission.
For example, when the electronic device 100 is a mobile phone, it may include the RF circuit 110 and may further include the WiFi module 190; when the electronic device 100 is a computer, it may include the communication interface 180 and may further include the WiFi module 190; when the electronic device 100 is a tablet computer, it may include the WiFi module 190.
The memory 140 may be used to store software programs and modules. The processor 130 executes the various functional applications and data processing of the electronic device 100 by running the software programs and modules stored in the memory 140; after the processor 130 executes the program code in the memory 140, some or all of the processes in the embodiments of the present disclosure can be implemented.
Optionally, the memory 140 may mainly include a program storage area and a data storage area. The program storage area can store an operating system, various application programs (such as communication applications), various modules for WLAN connection, and the like; the data storage area can store data created according to the use of the terminal, and the like.
Further, the memory 140 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 150 may be used to receive numeric or character information input by a user and to generate key signal inputs related to user settings and function control of the electronic apparatus 100.
Optionally, the input unit 150 may include a touch panel 151 and other input devices 152.
The touch panel 151, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 151 (for example, an operation performed by the user on or near the touch panel 151 using any suitable object or accessory such as a finger, a stylus, etc.), and drive a corresponding connection device according to a preset program. Alternatively, the touch panel 151 may include two parts, i.e., a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 130, and can receive and execute commands sent by the processor 130. In addition, the touch panel 151 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave.
Optionally, the other input devices 152 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 160 may be used to display information input by a user or information provided to a user and various menus of the electronic apparatus 100. The display unit 160 is a display system of the electronic device 100, and is used for presenting an interface to implement human-computer interaction.
The display unit 160 may include a display panel 161. Alternatively, the Display panel 161 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
Further, the touch panel 151 may cover the display panel 161, and when the touch panel 151 detects a touch operation on or near the touch panel, the touch panel transmits the touch operation to the processor 130 to determine the type of the touch event, and then the processor 130 provides a corresponding visual output on the display panel 161 according to the type of the touch event.
Although the touch panel 151 and the display panel 161 are shown in fig. 1 as two separate components to implement the input and output functions of the electronic device 100, in some embodiments, the touch panel 151 and the display panel 161 may be integrated to implement the input and output functions of the electronic device 100.
The processor 130 is a control center of the electronic device 100, connects various components using various interfaces and lines, and performs various functions of the electronic device 100 and processes data by running or executing software programs and/or modules stored in the memory 140 and calling data stored in the memory 140, thereby implementing various services based on the electronic device.
Optionally, the processor 130 may include one or more processing units. Optionally, the processor 130 may integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, an application program, and the like, and the modem processor mainly processes wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 130.
The camera 170 is configured to implement a shooting function of the electronic device 100, and shoot pictures or videos.
The electronic device 100 also includes a power source 120 (such as a battery) for powering the various components. Optionally, the power supply 120 may be logically connected to the processor 130 through a power management system, so as to implement functions of managing charging, discharging, power consumption, and the like through the power management system.
Although not shown, the electronic device 100 may further include at least one sensor, which is not described in detail herein.
The scheme of the present disclosure is described in detail below with reference to the accompanying drawings. As shown in fig. 2, the three-dimensional hairstyle matching method of the present disclosure may include the following steps:
step 201: carrying out hair detection processing on a target image containing a user to obtain an intermediate image, wherein the intermediate image is an image containing an area where hair is located;
in one embodiment, the intermediate image may be obtained by:
equally dividing the target image into a specified number of image blocks; and respectively carrying out hair detection processing on each image block according to a preset sequence to obtain the intermediate image.
The method for carrying out hair detection processing on any image block comprises the following steps:
extracting features from the image block to obtain the Histogram of Oriented Gradients (HOG) features and Local Ternary Pattern (LTP) features of the image block; inputting the HOG features and the LTP features into a pre-trained random forest model to obtain a classification result for the image block; and, if the classification result of the image block is the hair-free type, setting the pixel values of the image block to a specified pixel value.
For example, as shown in fig. 3, diagram a in fig. 3 is a target image containing a user, and diagram b shows the target image divided into a specified number of image blocks; in this embodiment, the target image is divided into 25 blocks. Hair detection is then performed on each image block. If image blocks 1, 5, 6, 10, 11, and 15 to 25 are classified as not containing hair, the pixel values of the pixels in those blocks may be set to 0. Diagram c is the intermediate image obtained after the hair detection processing.
It should be noted that the specified number and the specified pixel value may be set according to a specific actual situation, and the specified number and the specified pixel value in this embodiment are only used for explanation and are not limited to the specified number and the specified pixel value.
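For illustration only, the following Python sketch shows one way this per-block detection could look. It is a minimal sketch, assuming a grayscale image, a 5 x 5 grid, a RandomForestClassifier trained elsewhere on blocks labelled hair (1) or no hair (0), and a simplified single-neighbour LTP; none of these parameter choices are fixed by this embodiment.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier

def ltp_histogram(block: np.ndarray, t: float = 5.0) -> np.ndarray:
    # Simplified local ternary pattern: compare each pixel with its right
    # neighbour, code the difference as -1, 0 or +1, then histogram the codes.
    diff = block[:, 1:].astype(float) - block[:, :-1].astype(float)
    codes = np.where(diff > t, 1, np.where(diff < -t, -1, 0))
    hist, _ = np.histogram(codes, bins=3, range=(-1.5, 1.5))
    return hist / max(hist.sum(), 1)

def detect_hair_blocks(image: np.ndarray, clf: RandomForestClassifier,
                       grid: int = 5, fill_value: int = 0) -> np.ndarray:
    # Equally divide the image into grid x grid blocks, classify each block,
    # and set blocks classified as "no hair" (label 0) to the fill value.
    out = image.copy()
    bh, bw = image.shape[0] // grid, image.shape[1] // grid
    for i in range(grid):
        for j in range(grid):
            block = image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            feats = np.concatenate([
                hog(block, orientations=9, pixels_per_cell=(8, 8),
                    cells_per_block=(2, 2)),
                ltp_histogram(block),
            ])
            if clf.predict(feats.reshape(1, -1))[0] == 0:
                out[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = fill_value
    return out
```

The returned array plays the role of the intermediate image: its cleared blocks correspond to the zeroed regions in diagram c of fig. 3.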
Step 202: performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair;
the preset hair segmentation algorithm may be a graph-based image segmentation algorithm felzenzwalb to perform hair segmentation on the intermediate image.
For example, the intermediate image obtained in fig. 3 is subjected to hair segmentation, and the result can be the foreground image containing the hair in fig. 4.
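As a brief sketch of this step, the scikit-image implementation of the Felzenszwalb algorithm can be applied as below; the parameter values and the rule of keeping the largest segment inside the detected hair region are illustrative assumptions, not values fixed by this embodiment.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def segment_hair(intermediate: np.ndarray) -> np.ndarray:
    # Graph-based segmentation of the intermediate RGB image.
    labels = felzenszwalb(intermediate, scale=100, sigma=0.8, min_size=50)
    # Blocks cleared by hair detection are zero, so restrict attention to the
    # remaining (hair-candidate) pixels and keep the largest segment there.
    mask = intermediate.sum(axis=-1) > 0
    seg_ids, counts = np.unique(labels[mask], return_counts=True)
    hair_id = seg_ids[np.argmax(counts)]
    foreground = np.zeros_like(intermediate)
    foreground[labels == hair_id] = intermediate[labels == hair_id]
    return foreground
```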
Step 203: and inputting the foreground image into a pre-trained hairstyle matching network for recognition to obtain a three-dimensional hairstyle matched with hair contained in the foreground image.
In one embodiment, step 203 may be implemented as: performing feature extraction on the foreground image by using a hair style matching network to obtain first hair style feature information; comparing the first hair style characteristic information with second hair style characteristic information of a two-dimensional hair style image corresponding to each three-dimensional hair style to obtain the similarity between the first hair style characteristic information and each second hair style characteristic information; and determining the three-dimensional hairstyle corresponding to the second hairstyle characteristic information with the highest similarity to the first hairstyle characteristic information as the three-dimensional hairstyle matched with the hair contained in the foreground image.
The three-dimensional hairstyles are built in advance, mainly by collecting mainstream hairstyles and classifying them by hair length, curly versus straight hair, and the like. Each three-dimensional hairstyle is then converted into corresponding two-dimensional hairstyle images, which may include a front two-dimensional hairstyle image, a left two-dimensional hairstyle image, a right two-dimensional hairstyle image, and so on. The second hairstyle feature information of each three-dimensional hairstyle is then extracted from its two-dimensional hairstyle images.
The first and second hairstyle feature information may include hair length features, curliness and straightness features, shape features, and the like, and may be set according to the specific situation; this is not limited here.
For example, suppose the preset three-dimensional hairstyles include three-dimensional hairstyle 1, three-dimensional hairstyle 2, and three-dimensional hairstyle 3. Feature extraction is performed on the two-dimensional image corresponding to each, yielding second hairstyle feature information 1, 2, and 3 for three-dimensional hairstyles 1, 2, and 3, respectively. Feature extraction on the target image yields the first hairstyle feature information. If the similarity between the first hairstyle feature information and second hairstyle feature information 1, 2, and 3 is determined to be 30%, 70%, and 90%, respectively, then three-dimensional hairstyle 3, corresponding to second hairstyle feature information 3, best matches the hairstyle in the target image and is determined to be the three-dimensional hairstyle matching the hair in the target image.
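The comparison step can be pictured with the sketch below, which assumes the network outputs one feature vector per image and uses cosine similarity as the similarity measure; the specific similarity metric is not prescribed by this embodiment.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def best_matching_hairstyle(first_feats: np.ndarray,
                            second_feats: dict[int, np.ndarray]) -> int:
    # Return the id of the 3D hairstyle whose 2D-view feature vector is most
    # similar to the first hairstyle feature information.
    return max(second_feats,
               key=lambda sid: cosine_similarity(first_feats, second_feats[sid]))
```

In the numeric example above, similarities of 0.3, 0.7, and 0.9 would make this function return the id of three-dimensional hairstyle 3.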
In one embodiment, the hair style matching network may be trained by:
obtaining three-dimensional hairstyle training samples, where each training sample comprises a foreground image of one hairstyle and the marked three-dimensional hairstyle corresponding to that foreground image, and the hairstyles of the foreground images differ across training samples;
the following steps are carried out for any one three-dimensional hair style training sample:
inputting the three-dimensional hairstyle training sample into the hairstyle matching network and performing feature extraction on the foreground image in the training sample to obtain first hairstyle feature information; comparing the first hairstyle feature information with the second hairstyle feature information of the two-dimensional image corresponding to each three-dimensional hairstyle to obtain the similarity between the first hairstyle feature information and each piece of second hairstyle feature information; determining the three-dimensional hairstyle corresponding to the second hairstyle feature information with the highest similarity to the first hairstyle feature information; comparing the determined three-dimensional hairstyle with the marked three-dimensional hairstyle to obtain an error value; and, if the error value does not meet the specified condition, adjusting the training parameters of the hairstyle matching network and returning to the step of inputting the training sample into the network for feature extraction, until the error value meets the specified condition and training of the hairstyle matching network is complete.
The specified condition may be, for example, that the error value is not greater than a specified value. The specified value can be set according to the actual situation; this embodiment is not limited here.
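A schematic version of this training loop is sketched below, assuming a PyTorch embedding model, precomputed second-feature vectors for the 2D views, cosine similarities used as logits for a cross-entropy error, and "mean error below a target" as the specified condition; the loss, optimizer, and threshold are all assumptions, since the embodiment fixes none of them.

```python
import torch
import torch.nn.functional as F

def train_matching_network(model, samples, style_embeddings,
                           epochs=100, target_error=0.05, lr=1e-4):
    # samples: list of (foreground_tensor, marked_style_index) pairs.
    # style_embeddings: (num_styles, dim) matrix of second hairstyle features.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        total_error = 0.0
        for foreground, label in samples:
            first_feats = model(foreground.unsqueeze(0))               # (1, dim)
            sims = F.cosine_similarity(first_feats, style_embeddings)  # (num_styles,)
            loss = F.cross_entropy(sims.unsqueeze(0), torch.tensor([label]))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            total_error += loss.item()
        if total_error / len(samples) <= target_error:  # specified condition
            break
    return model
```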
To make the determined three-dimensional hairstyle more similar to the hairstyle in the target image, in one embodiment, after step 203 is performed, the hair color in the foreground image is extracted and used to fill the color of the three-dimensional hairstyle, yielding the target three-dimensional hairstyle.
For example, if the extracted hair color is brown, the color of the matched three-dimensional hairstyle can be filled with brown, so that the obtained target three-dimensional hairstyle has higher similarity with the hairstyle of the user in the target image.
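One simple way to picture this color-filling step is sketched below; taking the mean RGB of the hair pixels as the extracted hair color and tinting the hairstyle's basic color map with it are illustrative assumptions, as the embodiment does not specify how the color is computed or applied.

```python
import numpy as np

def extract_hair_color(foreground: np.ndarray) -> np.ndarray:
    # Mean RGB over the non-background (hair) pixels of the foreground image.
    hair_pixels = foreground[foreground.sum(axis=-1) > 0]
    return hair_pixels.mean(axis=0)

def fill_hairstyle_color(base_color_map: np.ndarray,
                         hair_color: np.ndarray) -> np.ndarray:
    # Keep the per-pixel brightness of the basic color map and replace its
    # hue with the extracted hair color.
    luminance = base_color_map.astype(float).mean(axis=-1, keepdims=True) / 255.0
    return np.clip(luminance * hair_color, 0, 255).astype(np.uint8)
```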
In order to make the material of the target three-dimensional hairstyle more similar to the material of the real hair, in one embodiment, after the target three-dimensional hairstyle is obtained, a three-dimensional hairstyle rendering and adjusting mode corresponding to the target three-dimensional hairstyle is determined by using a preset corresponding relationship between the target three-dimensional hairstyle and the three-dimensional hairstyle rendering and adjusting mode; wherein the three-dimensional hairstyle rendering adjustment mode comprises at least one of basic color adjustment, scattering adjustment, highlight adjustment, tangent adjustment, backlight degree adjustment, ambient light shading adjustment and depth deviation adjustment; and rendering the target three-dimensional hairstyle by using the determined three-dimensional hairstyle rendering and adjusting mode to obtain the rendered target three-dimensional hairstyle.
Different target three-dimensional hairstyles correspond to different three-dimensional hairstyle rendering adjustment modes; three-dimensional hairstyles of different hair lengths, colors, and curly or straight types require different adjustments.
The following introduces a three-dimensional hair style rendering and adjusting mode:
(1) basic color adjustment:
and adjusting the color of the basic color map in the target three-dimensional hairstyle by utilizing a linear gradient function (Linear gradient) to obtain a first target three-dimensional hairstyle, and adjusting the brightness of the basic color map in the first target three-dimensional hairstyle by utilizing a preset algorithm to obtain a second target three-dimensional hairstyle.
Then extracting the three-dimensional modeling texture of the second target three-dimensional hairstyle, and performing sampling removal processing on the three-dimensional modeling texture to obtain a mask map; and processing the environment shielding color map in the second target three-dimensional hairstyle by using the mask map to obtain the target three-dimensional hairstyle after the basic color is adjusted.
The basic color adjustment of the target three-dimensional hairstyle in this embodiment simulates the bright-center, gradually darkening-edge effect of real hair. With color mixing and the addition of edge colors and environment shielding colors, the color of the target three-dimensional hairstyle finally presents a more realistic, vivid effect.
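The bright-center, darker-edge effect can be approximated as in the sketch below; the gradient axis, its strength, and the brightness factor are illustrative placeholders standing in for the unspecified linear gradient function and preset algorithm.

```python
import numpy as np

def adjust_base_color(base_color_map: np.ndarray,
                      brightness: float = 1.1) -> np.ndarray:
    # Linear gradient: full strength at the horizontal centre of the map,
    # fading to 60% at the edges, followed by a global brightness adjustment.
    h, w, _ = base_color_map.shape
    x = np.abs(np.linspace(-1.0, 1.0, w))
    gradient = (1.0 - 0.4 * x)[None, :, None]
    adjusted = base_color_map.astype(float) * gradient * brightness
    return np.clip(adjusted, 0, 255).astype(np.uint8)
```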
(2) Scattering adjustment:
processing a scattering color map in a target three-dimensional hairstyle by using a linear gradient function to obtain a third target three-dimensional hairstyle, adjusting the brightness of a basic color map in the third target three-dimensional hairstyle by using a preset algorithm to obtain a fourth target three-dimensional hairstyle, and multiplying the pixel value of each pixel point in the basic color map in the fourth target three-dimensional hairstyle by a first specified scaling factor to obtain the target three-dimensional hairstyle after scattering adjustment.
The scattering-adjusted target three-dimensional hairstyle in the embodiment simulates the brightness change of real hair and different penetration degrees of light in the hair, and achieves a more personalized effect.
(3) Highlight adjustment:
processing the highlight map in the target three-dimensional hairstyle by using a linear gradient function to obtain a fifth target three-dimensional hairstyle, adjusting the brightness of the highlight map in the fifth target three-dimensional hairstyle by using a preset algorithm to obtain a sixth target three-dimensional hairstyle, and multiplying the pixel value of each pixel point in the highlight map in the sixth target three-dimensional hairstyle by a second specified scaling factor to obtain the highlight-adjusted target three-dimensional hairstyle.
The highlight-adjusted target three-dimensional hairstyle in this embodiment simulates the highlight effect of real hair.
(4) Tangent line adjustment:
The basic color map in the target three-dimensional hairstyle is multiplied by the sampled noise map in the target three-dimensional hairstyle, and the result is added to a preset tangent vector to obtain the tangent-adjusted target three-dimensional hairstyle.
The tangent-adjusted target three-dimensional hairstyle in the embodiment simulates the micro-plane of real hair, and the silky feeling of the hair is enhanced.
(5) Adjusting the backlight degree:
the method comprises the steps of firstly extracting ordinate data of a hair shade map in a target three-dimensional hairstyle, carrying out reverse processing on the ordinate data, and then adding shadow scaling mixing processing to the ordinate data after the reverse processing to obtain the target three-dimensional hairstyle with adjusted backlight degree.
The backlit adjusted target three-dimensional hairstyle in this embodiment simulates the effect of light projected through the hair.
(6) Ambient light shielding adjustment:
and extracting ordinate data of the environment shielding color map in the target three-dimensional hairstyle, and performing power transformation processing on the ordinate data to obtain the target three-dimensional hairstyle after the environment light shielding adjustment.
The target three-dimensional hairstyle after ambient light shading adjustment in this embodiment enhances the detail of the hair in the three-dimensional hairstyle.
(7) Pixel depth offset adjustment:
and obtaining the rendering pixel depth of each pixel point in the target three-dimensional hairstyle, and multiplying the rendering pixel depth by a third specified scaling factor to obtain the target three-dimensional hairstyle with the pixel depth skewness adjusted.
In this embodiment, multiplying the hair rendering pixel depth by a third specified scaling factor widens the range of depth variation and enhances the difference in depth between strands of hair in the target three-dimensional hairstyle, so that each cluster of hair has a more distinct sense of depth and the hair looks more layered.
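Adjustments (2), (3), (5), (6), and (7) reduce to simple per-map array operations, sketched loosely below; every factor value is a placeholder, since the scaling factors, the blending rule, and the power exponent are left unspecified here.

```python
import numpy as np

def scale_map(m: np.ndarray, factor: float) -> np.ndarray:
    # Scattering / highlight adjustment: multiply each pixel by a specified
    # scaling factor (the first or second specified factor in the text).
    return np.clip(m.astype(float) * factor, 0, 255).astype(np.uint8)

def backlight_adjust(shade_map: np.ndarray, shadow_mix: float = 0.5) -> np.ndarray:
    # Invert the map along its vertical (ordinate) axis, then blend the
    # inverted data back in with an assumed shadow-scaling mix.
    inverted = shade_map[::-1, ...].astype(float)
    blended = shadow_mix * inverted + (1.0 - shadow_mix) * shade_map.astype(float)
    return np.clip(blended, 0, 255).astype(np.uint8)

def ambient_occlusion_adjust(ao_map: np.ndarray, gamma: float = 2.0) -> np.ndarray:
    # Power transform on the normalized environment shielding (AO) map.
    return (((ao_map.astype(float) / 255.0) ** gamma) * 255.0).astype(np.uint8)

def depth_offset(pixel_depth: np.ndarray, factor: float = 1.5) -> np.ndarray:
    # Multiply the rendered pixel depth by the third specified scaling factor
    # to widen the depth range between hair clusters.
    return pixel_depth * factor
```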
It should be noted that, the sequential adjustment order of each adjustment method in each three-dimensional hairstyle rendering adjustment manner is not limited in this embodiment, and may be set according to a specific actual situation.
In this embodiment, the first specified scaling factor, the second specified scaling factor and the third specified scaling factor may be the same or different, and this embodiment is not limited herein.
For further understanding of the technical solution of the present disclosure, the following detailed description with reference to fig. 5 may include the following steps:
step 501: dividing the target image containing the user into a specified number of image blocks;
step 502: respectively carrying out hair detection processing on each image block according to a preset sequence to obtain an intermediate image;
step 503: performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair;
step 504: performing feature extraction on the foreground image by using a hair style matching network to obtain first hair style feature information;
step 505: comparing the first hair style characteristic information with second hair style characteristic information of a two-dimensional hair style image corresponding to each three-dimensional hair style to obtain the similarity between the first hair style characteristic information and each second hair style characteristic information;
step 506: determining a three-dimensional hairstyle corresponding to second hairstyle characteristic information with the highest similarity to the first hairstyle characteristic information as a three-dimensional hairstyle matched with hair contained in the foreground image;
step 507: extracting the hair color in the foreground image, and filling the color of the three-dimensional hairstyle by using the hair color to obtain a target three-dimensional hairstyle;
step 508: determining a three-dimensional hairstyle rendering and adjusting mode corresponding to a target three-dimensional hairstyle by utilizing a preset corresponding relation between the target three-dimensional hairstyle and the three-dimensional hairstyle rendering and adjusting mode; wherein the three-dimensional hairstyle rendering adjustment mode comprises at least one of basic color adjustment, scattering adjustment, highlight adjustment, tangent adjustment, backlight degree adjustment, ambient light shading adjustment and depth deviation adjustment;
step 509: and rendering the target three-dimensional hairstyle by using the determined three-dimensional hairstyle rendering and adjusting mode to obtain the rendered target three-dimensional hairstyle.
Based on the same inventive concept, the three-dimensional hairstyle matching method disclosed above can also be implemented by a three-dimensional hairstyle matching apparatus. The effect of the apparatus is similar to that of the method and is not described again here.
Fig. 6 is a schematic structural view of a three-dimensional hair style matching device according to an embodiment of the present disclosure.
As shown in fig. 6, the three-dimensional hairstyle matching apparatus 600 of the present disclosure may include an intermediate image determining module 610, a hair segmenting module 620 and a matching module 630.
An intermediate image determining module 610, configured to perform hair detection processing on a target image including a user to obtain an intermediate image, where the intermediate image is an image including an area where hair is located;
a hair segmentation module 620, configured to perform hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image including the hair;
a matching module 630, configured to input the foreground image into a pre-trained hairstyle matching network for recognition, so as to obtain a three-dimensional hairstyle matched with the hair included in the foreground image.
in one embodiment, the apparatus further comprises:
and a color filling module 640, configured to extract a hair color in the foreground image after obtaining the three-dimensional hairstyle matched with the hair included in the foreground image, and fill the color of the three-dimensional hairstyle with the hair color to obtain a target three-dimensional hairstyle.
In one embodiment, the apparatus further comprises:
a three-dimensional hairstyle rendering and adjusting mode determining module 650, configured to determine, after the target three-dimensional hairstyle is obtained, a three-dimensional hairstyle rendering and adjusting mode corresponding to the target three-dimensional hairstyle by using a preset correspondence between the target three-dimensional hairstyle and the three-dimensional hairstyle rendering and adjusting mode; wherein the three-dimensional hairstyle rendering adjustment mode comprises at least one of basic color adjustment, scattering adjustment, highlight adjustment, tangent adjustment, backlight degree adjustment, ambient light shading adjustment and depth deviation adjustment;
and a target three-dimensional hairstyle determining module 660, configured to render the target three-dimensional hairstyle by using the determined three-dimensional hairstyle rendering and adjusting manner, so as to obtain a rendered target three-dimensional hairstyle.
In an embodiment, the intermediate image determining module 610 is specifically configured to:
equally dividing the target image into a specified number of image blocks;
respectively carrying out hair detection processing on each image block according to a preset sequence to obtain an intermediate image;
the method for carrying out hair detection processing on any image block comprises the following steps:
extracting features from the image block to obtain the histogram of oriented gradients (HOG) features and local ternary pattern (LTP) features of the image block; and,
inputting the HOG features and the LTP features into a pre-trained random forest model to obtain a classification result for the image block;
and if the classification result of the image block is the type without hair, setting the pixel value of the image block as a specified pixel value.
In an embodiment, the matching module 630 is specifically configured to:
performing feature extraction on the foreground image by using a hair style matching network to obtain first hair style feature information;
comparing the first hair style characteristic information with second hair style characteristic information of a two-dimensional hair style image corresponding to each three-dimensional hair style to obtain the similarity between the first hair style characteristic information and each second hair style characteristic information;
and determining the three-dimensional hairstyle corresponding to the second hairstyle characteristic information with the highest similarity to the first hairstyle characteristic information as the three-dimensional hairstyle matched with the hair contained in the foreground image.
In one embodiment, the apparatus further comprises:
a hair style matching network training module 670 for training the hair style matching network by:
obtaining three-dimensional hairstyle training samples, where each training sample comprises a foreground image of one hairstyle and the marked three-dimensional hairstyle corresponding to that foreground image, and the hairstyles of the foreground images differ across training samples;
the following steps are carried out for any one three-dimensional hair style training sample:
inputting the three-dimensional hairstyle training sample into a hairstyle matching network, and performing feature extraction on the foreground image in the training sample to obtain first hairstyle feature information; and,
comparing the first hair style characteristic information with second hair style characteristic information of the two-dimensional image corresponding to each three-dimensional hair style to obtain the similarity of the first hair style characteristic information and each second hair style characteristic information;
determining a three-dimensional hair style corresponding to second hair style characteristic information with the highest similarity to the first hair style characteristic information;
comparing the determined three-dimensional hairstyle with the marked three-dimensional hairstyle to obtain an error value;
and if the error value does not meet the specified condition, adjusting the training parameters of the hairstyle matching network and returning to the step of inputting the three-dimensional hairstyle training sample into the hairstyle matching network for feature extraction, until the error value meets the specified condition and training of the hairstyle matching network is complete.
In some possible embodiments, various aspects of a three-dimensional hair style matching method provided by the present disclosure may also be implemented in the form of a program product, which includes program code for causing a computer device to perform the steps of the three-dimensional hair style matching method according to various exemplary embodiments of the present disclosure described above in this specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for three-dimensional hairstyle matching of the embodiments of the present disclosure may employ a portable compact disc read-only memory (CD-ROM), include program code, and be executable on an electronic device. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of a remote electronic device, the remote electronic device may be connected to the consumer electronic device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external electronic device (for example, through the internet using an internet service provider).
It should be noted that although several modules of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the modules described above may be embodied in one module, in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module described above may be further divided into embodiments by a plurality of modules.
Further, while the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the present disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications can be made in the present disclosure without departing from the spirit and scope of the disclosure. Thus, if such modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is intended to include such modifications and variations as well.

Claims (10)

1. An electronic device comprising a processor and a display unit;
wherein the processor is configured to:
carrying out hair detection processing on a target image containing a user to obtain an intermediate image, wherein the intermediate image is an image containing an area where hair is located;
performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair;
inputting the foreground image into a pre-trained hairstyle matching network for recognition to obtain a three-dimensional hairstyle matched with hair contained in the foreground image;
the display unit is configured to display the three-dimensional hairstyle.
2. The electronic device of claim 1, wherein the processor is further configured to:
and after the three-dimensional hairstyle matched with the hair contained in the foreground image is obtained, extracting the hair color in the foreground image, and filling the color of the three-dimensional hairstyle by using the hair color to obtain the target three-dimensional hairstyle.
3. The electronic device of claim 2, wherein the processor is further configured to:
after the target three-dimensional hairstyle is obtained, determining a three-dimensional hairstyle rendering and adjusting mode corresponding to the target three-dimensional hairstyle by utilizing a preset corresponding relation between the target three-dimensional hairstyle and the three-dimensional hairstyle rendering and adjusting mode; wherein the three-dimensional hairstyle rendering adjustment mode comprises at least one of basic color adjustment, scattering adjustment, highlight adjustment, tangent adjustment, backlight degree adjustment, ambient light shading adjustment and depth deviation adjustment;
and rendering the target three-dimensional hairstyle by using the determined three-dimensional hairstyle rendering and adjusting mode to obtain the rendered target three-dimensional hairstyle.
4. The electronic device according to claim 1, wherein the processor, in performing the hair detection processing on the target image including the user to obtain an intermediate image, is specifically configured to:
equally dividing the target image into a specified number of image blocks;
respectively carrying out hair detection processing on each image block according to a preset sequence to obtain an intermediate image;
the method for carrying out hair detection processing on any image block comprises the following steps:
extracting features from the image block to obtain the histogram of oriented gradients (HOG) features and local ternary pattern (LTP) features of the image block; and,
inputting the HOG features and the LTP features into a pre-trained random forest model to obtain a classification result for the image block;
and if the classification result of the image block is the type without hair, setting the pixel value of the image block as a specified pixel value.
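The per-block procedure is concrete enough to sketch. The code below (Python, scikit-image and scikit-learn) follows the claim's shape but with stand-ins: it assumes a pre-fitted random forest, uses a uniform local binary pattern histogram in place of the claimed local ternary pattern (which scikit-image does not ship), and blanks hair-free blocks to zero:

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def block_features(block_gray: np.ndarray) -> np.ndarray:
    """HOG features concatenated with a uniform-LBP histogram (LTP stand-in)."""
    hog_vec = hog(block_gray, orientations=9,
                  pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(block_gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, hist])

def detect_hair(image_gray: np.ndarray, forest: RandomForestClassifier,
                blocks_per_side: int = 8, fill_value: int = 0) -> np.ndarray:
    """Equally divide the image into blocks, classify each block, and set
    blocks classified as hair-free (label 0) to the specified pixel value."""
    out = image_gray.copy()
    h, w = image_gray.shape
    bh, bw = h // blocks_per_side, w // blocks_per_side
    for i in range(blocks_per_side):          # preset order: row by row
        for j in range(blocks_per_side):
            block = image_gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            label = forest.predict(block_features(block)[None, :])[0]
            if label == 0:                    # classified as containing no hair
                out[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw] = fill_value
    return out
```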
5. The electronic device according to claim 1, wherein, in inputting the foreground image into the pre-trained hairstyle matching network for recognition to obtain the three-dimensional hairstyle matching the hair contained in the foreground image, the processor is specifically configured to:
perform feature extraction on the foreground image by using the hairstyle matching network to obtain first hairstyle feature information;
compare the first hairstyle feature information with second hairstyle feature information of a two-dimensional hairstyle image corresponding to each three-dimensional hairstyle to obtain the similarity between the first hairstyle feature information and each piece of second hairstyle feature information; and
determine the three-dimensional hairstyle corresponding to the second hairstyle feature information with the highest similarity to the first hairstyle feature information as the three-dimensional hairstyle matching the hair contained in the foreground image.
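The comparison step reduces to a nearest-neighbour search over feature vectors. The sketch below (Python) assumes cosine similarity as the measure and a gallery dictionary mapping each three-dimensional hairstyle id to the second hairstyle feature vector of its two-dimensional image; the claim itself does not fix the similarity measure:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_matching_hairstyle(first_features: np.ndarray, gallery: dict) -> str:
    """Return the 3D hairstyle id whose 2D image's (second) features are
    most similar to the first hairstyle features of the foreground."""
    return max(gallery,
               key=lambda hid: cosine_similarity(first_features, gallery[hid]))

# Usage, with a hypothetical encoder producing the first features:
# match = best_matching_hairstyle(encode(foreground), gallery)
```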
6. The electronic device of claim 1, wherein the processor is further configured to:
train the hairstyle matching network by:
obtaining three-dimensional hairstyle training samples, wherein each three-dimensional hairstyle training sample comprises a foreground image of one hairstyle and a labeled three-dimensional hairstyle corresponding to that foreground image, and the foreground images in different training samples show different hairstyles; and
performing the following steps for any one three-dimensional hairstyle training sample:
inputting the training sample into the hairstyle matching network and performing feature extraction on the foreground image in the training sample to obtain first hairstyle feature information;
comparing the first hairstyle feature information with second hairstyle feature information of the two-dimensional image corresponding to each three-dimensional hairstyle to obtain the similarity between the first hairstyle feature information and each piece of second hairstyle feature information;
determining the three-dimensional hairstyle corresponding to the second hairstyle feature information with the highest similarity to the first hairstyle feature information;
comparing the determined three-dimensional hairstyle with the labeled three-dimensional hairstyle to obtain an error value; and
if the error value does not meet a specified condition, adjusting the training parameters of the hairstyle matching network and returning to the step of inputting the training sample into the hairstyle matching network and performing feature extraction on the foreground image, until the error value meets the specified condition, at which point training of the hairstyle matching network is complete.
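Read as a standard supervised loop, this procedure can be sketched as follows (PyTorch). It recasts "compare the determined hairstyle with the labeled one to obtain an error value" as cross-entropy over per-hairstyle similarities; the encoder architecture, learning rate, and stopping threshold are all assumptions, not details given by the claim:

```python
import torch
import torch.nn as nn

def train_matching_network(encoder: nn.Module,
                           gallery_features: torch.Tensor,  # (num_styles, dim), fixed
                           samples,           # iterable of (foreground_tensor, style_idx)
                           error_threshold: float = 0.05,
                           max_epochs: int = 100) -> nn.Module:
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(max_epochs):
        worst_error = 0.0
        for foreground, style_idx in samples:
            first = encoder(foreground.unsqueeze(0))          # first hairstyle features
            sims = first @ gallery_features.T                 # similarity to each style
            error = loss_fn(sims, torch.tensor([style_idx]))  # error vs labeled hairstyle
            optimizer.zero_grad()
            error.backward()
            optimizer.step()                                  # adjust training parameters
            worst_error = max(worst_error, error.item())
        if worst_error <= error_threshold:                    # specified condition met
            break
    return encoder
```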
7. A method of three-dimensional hairstyle matching, the method comprising:
performing hair detection processing on a target image containing a user to obtain an intermediate image, wherein the intermediate image is an image containing the area where the hair is located;
performing hair segmentation on the intermediate image by using a preset hair segmentation algorithm to obtain a foreground image containing the hair; and
inputting the foreground image into a pre-trained hairstyle matching network for recognition to obtain a three-dimensional hairstyle matching the hair contained in the foreground image.
8. The method according to claim 7, wherein, after the three-dimensional hairstyle matching the hair contained in the foreground image is obtained, the method further comprises:
extracting the hair color from the foreground image, and filling the three-dimensional hairstyle with the hair color to obtain a target three-dimensional hairstyle.
9. The method of claim 8, wherein after obtaining the target three-dimensional hairstyle, the method further comprises:
determining a three-dimensional hairstyle rendering adjustment mode corresponding to the target three-dimensional hairstyle by using a preset correspondence between target three-dimensional hairstyles and three-dimensional hairstyle rendering adjustment modes, wherein the three-dimensional hairstyle rendering adjustment mode comprises at least one of basic color adjustment, scattering adjustment, highlight adjustment, tangent adjustment, backlight degree adjustment, ambient light shading adjustment, and depth deviation adjustment; and
rendering the target three-dimensional hairstyle by using the determined three-dimensional hairstyle rendering adjustment mode to obtain a rendered target three-dimensional hairstyle.
10. The method according to claim 7, wherein inputting the foreground image into the pre-trained hairstyle matching network for recognition to obtain the three-dimensional hairstyle matching the hair contained in the foreground image comprises:
performing feature extraction on the foreground image by using the hairstyle matching network to obtain first hairstyle feature information;
comparing the first hairstyle feature information with second hairstyle feature information of a two-dimensional hairstyle image corresponding to each three-dimensional hairstyle to obtain the similarity between the first hairstyle feature information and each piece of second hairstyle feature information; and
determining the three-dimensional hairstyle corresponding to the second hairstyle feature information with the highest similarity to the first hairstyle feature information as the three-dimensional hairstyle matching the hair contained in the foreground image.
CN202110658998.4A 2021-06-15 2021-06-15 Three-dimensional hairstyle matching method and electronic equipment Active CN113538455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110658998.4A CN113538455B (en) 2021-06-15 2021-06-15 Three-dimensional hairstyle matching method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113538455A (en) 2021-10-22
CN113538455B (en) 2023-12-12

Family

ID=78095954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110658998.4A Active CN113538455B (en) 2021-06-15 2021-06-15 Three-dimensional hairstyle matching method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113538455B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130120354A1 (en) * 2008-08-28 2013-05-16 Peter F. Falco, Jr. Using Two Dimensional Image Adjustment Operations on Three Dimensional Objects
CN103198515A (en) * 2013-04-18 2013-07-10 北京尔宜居科技有限责任公司 Method for immediately adjusting object illumination rendering effect in 3D scene
CN103617426A (en) * 2013-12-04 2014-03-05 东北大学 Pedestrian target detection method under interference by natural environment and shelter
CN105488490A (en) * 2015-12-23 2016-04-13 天津天地伟业数码科技有限公司 Judge dressing detection method based on video
CN105701853A (en) * 2014-12-15 2016-06-22 三星电子株式会社 3D rendering method and apparatus
CN106372333A (en) * 2016-08-31 2017-02-01 北京维盛视通科技有限公司 Method and device for displaying clothes based on face model
WO2018094653A1 (en) * 2016-11-24 2018-05-31 华为技术有限公司 User hair model re-establishment method and apparatus, and terminal
CN108334823A (en) * 2018-01-19 2018-07-27 中国公路工程咨询集团有限公司 High-resolution remote sensing image container area area detecting method based on machine learning
CN108513089A (en) * 2017-02-24 2018-09-07 腾讯科技(深圳)有限公司 The method and device of group's video session
US20190051048A1 (en) * 2016-04-19 2019-02-14 Zhejiang University Method for single-image-based fully automatic three-dimensional hair modeling
CN110060324A (en) * 2019-03-22 2019-07-26 北京字节跳动网络技术有限公司 Image rendering method, device and electronic equipment
CN110796721A (en) * 2019-10-31 2020-02-14 北京字节跳动网络技术有限公司 Color rendering method and device of virtual image, terminal and storage medium
CN111182350A (en) * 2019-12-31 2020-05-19 广州华多网络科技有限公司 Image processing method, image processing device, terminal equipment and storage medium
CN111291765A (en) * 2018-12-07 2020-06-16 北京京东尚科信息技术有限公司 Method and device for determining similar pictures
CN111612820A (en) * 2020-05-15 2020-09-01 北京百度网讯科技有限公司 Multi-target tracking method, and training method and device of feature extraction model
US20200401842A1 (en) * 2018-09-30 2020-12-24 Plex-Vr Digital Technology (Shanghai) Co., Ltd. Human Hairstyle Generation Method Based on Multi-Feature Retrieval and Deformation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUEFAN SHEN et al.: "DeepSketchHair: Deep Sketch-based 3D Hair Modeling", IEEE Transactions on Visualization and Computer Graphics, pages 1-14 *
HE HUAYUN (何华赟): "Data-Driven Three-Dimensional Human Head Reconstruction", China Master's Theses Full-text Database, Information Science and Technology, no. 12, pages 138-1745 *

Also Published As

Publication number Publication date
CN113538455B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
CN110276344B (en) Image segmentation method, image recognition method and related device
CN109191410B (en) Face image fusion method and device and storage medium
CN110689500B (en) Face image processing method and device, electronic equipment and storage medium
US20210097715A1 (en) Image generation method and device, electronic device and storage medium
CN106919918B (en) Face tracking method and device
CN110827378A (en) Virtual image generation method, device, terminal and storage medium
CN108875594B (en) Face image processing method, device and storage medium
CN108566516A (en) Image processing method, device, storage medium and mobile terminal
US11030733B2 (en) Method, electronic device and storage medium for processing image
CN110689479B (en) Face makeup method, device, equipment and medium
CN110796721A (en) Color rendering method and device of virtual image, terminal and storage medium
CN111209423B (en) Image management method and device based on electronic album and storage medium
CN112560540B (en) Cosmetic wearing recommendation method and device
CN112532882B (en) Image display method and device
CN108551552A (en) Image processing method, device, storage medium and mobile terminal
CN110555171A (en) Information processing method, device, storage medium and system
CN108494996A (en) Image processing method, device, storage medium and mobile terminal
CN111091610A (en) Image processing method and device, electronic equipment and storage medium
CN112839223A (en) Image compression method, image compression device, storage medium and electronic equipment
CN114463212A (en) Image processing method and device, electronic equipment and storage medium
CN114120413A (en) Model training method, image synthesis method, device, equipment and program product
CN113554741B (en) Method and device for reconstructing object in three dimensions, electronic equipment and storage medium
CN113902869A (en) Three-dimensional head grid generation method and device, electronic equipment and storage medium
CN113538455B (en) Three-dimensional hairstyle matching method and electronic equipment
WO2023045946A1 (en) Image processing method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant