CN112734633A - Virtual hair style replacing method, electronic equipment and storage medium


Info

Publication number
CN112734633A
Authority
CN
China
Prior art keywords
hair
image
area
user
virtual
Prior art date
Legal status
Pending
Application number
CN202110018056.XA
Other languages
Chinese (zh)
Inventor
杜志宏
魏书琪
欧歌
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202110018056.XA
Publication of CN112734633A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application provide a virtual hair style replacement method, an electronic device, and a storage medium. In the method, the hair region of the user is cut out of a first image containing the user's head, the scalp region left missing in the first image after the cut is filled in, and the user's head image after the virtual hair style replacement is then determined from the virtual hair style model image and the first image with the completed scalp region. This prevents the existing hair of the user from interfering with the virtual hair style model image during the virtual try-on and improves the try-on experience. Filling in the missing scalp region also keeps the gap from disturbing the user visually, which improves the display effect of the virtual hair style model image and further improves the try-on experience.

Description

Virtual hair style replacing method, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a virtual hair style replacement method, an electronic device, and a storage medium.
Background
With the continuous improvement of living standards, more and more people pay attention to the aesthetics of their hairstyle. At present, common practice in the hairdressing industry is to let the user select a desired hairstyle from pictures or videos of hairstyle models, but it is difficult for the user to judge from a picture or video which hairstyle suits them, so the finished hairstyle often fails to meet the user's expectations, which harms the user experience.
In the prior art, AR (Augmented Reality) technology is often used to fuse pictures of various virtual hairstyles with the head of the user, so that the user can preview how different hairstyles would look. However, because the hair in a virtual hairstyle is often shorter than the user's existing hair, the existing hair interferes with the virtual hairstyle during the try-on: at least part of the user's hair cannot be covered by the virtual hairstyle, which degrades the try-on experience.
Disclosure of Invention
In view of the shortcomings of the existing approach, the present application provides a virtual hair style replacement method, an electronic device, and a storage medium, aiming to solve the technical problem in the prior art that the existing hair of the user interferes with the virtual hairstyle during a virtual try-on and thereby degrades the try-on experience.
In a first aspect, an embodiment of the present application provides a method for replacing a virtual hair style, including:
acquiring a first image containing the user's head;
identifying a hair region of the user from the first image;
cutting the hair region out of the first image to obtain a first image after the hair cut;
filling in the scalp region missing from the first image after the hair cut;
and determining the user's head image after the virtual hair style replacement according to the virtual hair style model image and the first image with the completed scalp region.
In a second aspect, an embodiment of the present application provides an electronic device, including:
a processor; and
a memory, communicatively connected to the processor, configured to store machine-readable instructions which, when executed by the processor, implement the virtual hair style replacement method provided in the first aspect.
In a third aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions which, when run on an electronic device, implement the virtual hair style replacement method provided in the first aspect.
The technical solutions provided by the embodiments of the present application bring at least the following beneficial technical effects:
in the virtual hair style replacement method provided by the embodiments of the present application, the hair region of the user is cut out of the first image containing the user's head, the scalp region left missing after the cut is filled in, and the user's head image after the virtual hair style replacement is then determined from the virtual hair style model image and the first image with the completed scalp region. This prevents the existing hair of the user from interfering with the virtual hair style model image during the virtual try-on and improves the try-on experience. Filling in the missing scalp region also keeps the gap from disturbing the user visually, which improves the display effect of the virtual hair style model image and further improves the try-on experience.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a virtual hair style replacement method provided in an embodiment of the present application;
fig. 2a to 2f are schematic state diagrams of the first image at various stages of the virtual hair style replacement method provided in an embodiment of the present application;
FIG. 3 is a schematic flowchart of a process for identifying the hair region of the user from the first image according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for determining a first hair partition in the face detection area based on face pixel points inside the face detection area and background pixel points outside it, according to an embodiment of the present application;
fig. 5 is a schematic distribution diagram of the face key points in the face detection area of the first image in the virtual hair style replacement method provided in an embodiment of the present application;
fig. 6 is a schematic structural frame diagram of a virtual hair style replacement device according to an embodiment of the present application;
fig. 7 is a schematic structural frame diagram of an electronic device according to an embodiment of the present application.
Description of reference numerals:
110-face detection area; 120-background area; 111-first hairstyle contour line; 112-second hairstyle contour line;
60-virtual hair style replacement device;
601-an obtaining module; 602-an identification module; 603-a cutting module; 604-a completion module; 605-a superposition module;
70-an electronic device;
701-a processor; 702-a memory; 703-a bus; 704-a transceiver; 705-an input unit; 706-output unit.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to parts that are identical or similar or that have identical or similar functions throughout. In addition, detailed descriptions of well-known technologies are omitted where they are unnecessary for illustrating the features of the present application. The embodiments described below with reference to the drawings are exemplary, serve only to explain the present application, and are not to be construed as limiting it.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any of the elements and all combinations of one or more of the associated listed items.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments.
The embodiment of the present application provides a method for replacing a virtual hair style, a flowchart of the method is shown in fig. 1, and the method includes steps S101 to S105:
s101, acquiring a first image containing a user head graph.
Alternatively, the first image containing the user head graphic as shown in fig. 2a may be obtained by a camera, and the first image containing the user head graphic stored in advance may also be obtained from a memory.
Optionally, before the first image including the head graph of the user is acquired by the camera, the camera may further acquire an environmental background where the user is located, or acquire the environmental background where the user is located first and then acquire the first image including the head graph of the user. Therefore, the first image containing the head graph of the user and the environment background not containing the user are respectively collected, and the processing efficiency of the subsequent steps can be improved.
S102, identifying the hair region of the user from the first image.
Step S102 determines the hair region in the first image that corresponds to the existing hairstyle of the user, which facilitates the subsequent steps. The method in step S102 for identifying the hair region of the user from the first image is described in detail later and is not repeated here.
S103, cutting the hair region out of the first image to obtain a first image after the hair cut.
After step S103, the first image after the hair cut, as shown in fig. 2d, is obtained.
S104, filling in the scalp region missing from the first image after the hair cut.
After step S104, the first image with the missing scalp region filled in, as shown in fig. 2e, is obtained. The method of step S104 is described in detail later and is not repeated here.
S105, determining the user's head image after the virtual hair style replacement according to the virtual hair style model image and the first image with the completed scalp region.
Optionally, in step S105, the user's head image after the virtual hair style replacement, as shown in fig. 2f, is obtained by superimposing the virtual hair style model image at a designated position in the first image with the completed scalp region shown in fig. 2e.
In the virtual hair style replacement method provided by the embodiments of the present application, the hair region of the user is cut out of the first image containing the user's head, the scalp region left missing after the cut is filled in, and the user's head image after the virtual hair style replacement is then determined from the virtual hair style model image and the first image with the completed scalp region. This prevents the existing hair of the user from interfering with the virtual hair style model image during the virtual try-on and improves the try-on experience.
Filling in the missing scalp region also keeps the gap from disturbing the user visually, which improves the display effect of the virtual hair style model image and further improves the try-on experience.
In an embodiment of the present application, a flowchart of the method in step S102 for identifying the hair region of the user from the first image is shown in fig. 3. The method includes:
S1021, determining the face detection area 110 of the user from the first image.
Optionally, the method in step S1021 for determining the face detection area 110 of the user from the first image includes: identifying a plurality of face key points from the first image, and determining the face detection area 110 shown in fig. 2b according to the face key points and a designed expansion strategy. In the embodiment of the present application, 72 face key points of the user are identified from the first image with existing face recognition technology, and the face detection area 110 of the user in the first image is then determined.
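For illustration only, the following Python sketch shows one possible form of such a key-point-based expansion strategy. It is a minimal sketch: dlib's publicly available 68-point landmark model stands in for the 72-point detector mentioned above, and the expansion ratios are illustrative assumptions rather than values taken from this application.

    # Minimal sketch: derive an expanded face detection area from facial
    # landmarks. dlib's 68-point model stands in for the 72-point detector
    # mentioned above; the expansion ratios are illustrative assumptions.
    import cv2
    import dlib
    import numpy as np

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def face_detection_area(image_bgr, expand_top=0.6, expand_side=0.2):
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            return None
        points = np.array([(p.x, p.y)
                           for p in predictor(gray, faces[0]).parts()])
        x0, y0 = points.min(axis=0)
        x1, y1 = points.max(axis=0)
        w, h = x1 - x0, y1 - y0
        # Expand upward to take in the hair above the forehead, and slightly
        # sideways to take in the hair beside the cheeks.
        x0 = max(0, int(x0 - expand_side * w))
        x1 = min(image_bgr.shape[1] - 1, int(x1 + expand_side * w))
        y0 = max(0, int(y0 - expand_top * h))
        return (x0, y0, x1, y1)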
S1022, determining a first hair partition in the face detection area 110 based on the face pixel points inside the face detection area 110 and the background pixel points outside the face detection area 110.
The method in step S1022 for determining the first hair partition in the face detection area 110 based on the face pixel points inside the face detection area 110 and the background pixel points outside it is described in detail later and is not repeated here.
S1023, determining the hair area based on the hair pixel points in the first hair partition.
The method in step S1023 for determining the hair area based on the hair pixel points in the first hair partition includes: selecting pixel points in the face detection area 110 that match hair characteristics as seed points, and, starting from the seed points, expanding outward step by step to find similar pixel points, thereby obtaining the hair area. The hair characteristics include at least one of: hair color, light-reflecting characteristics, and appearance characteristics. A second hair partition is determined in the background area 120 based on the hair pixel points in the first hair partition, and the hair area is then determined from the first hair partition and the second hair partition.
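As an illustration of the seed selection only, the sketch below marks dark pixels inside the face detection area as hair seeds; the HSV brightness threshold is an assumption, and the light-reflecting and appearance characteristics named above are not modelled. The selected seeds would then be grown with the successive outward expansion detailed under step S10221 below.

    # Minimal sketch: choose seed points inside the face detection area that
    # match a simple hair-colour criterion (low HSV brightness). The
    # threshold is an illustrative assumption.
    import cv2
    import numpy as np

    def hair_seed_points(image_bgr, box, v_max=60):
        x0, y0, x1, y1 = box                      # face detection area
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        dark = hsv[..., 2] < v_max                # dark pixels look like hair
        dark[:y0, :] = False                      # keep only the box rows
        dark[y1:, :] = False
        dark[:, :x0] = False                      # keep only the box columns
        dark[:, x1:] = False
        ys, xs = np.nonzero(dark)
        return list(zip(xs, ys))                  # (x, y) seeds to be grown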
In an embodiment of the present application, a flowchart of the method in step S1022 for determining the first hair partition in the face detection area 110 based on the face pixel points inside the face detection area 110 and the background pixel points outside it is shown in fig. 4. The method includes:
S10221, selecting pixel points in the face detection area 110 that match the face skin color as seed points; and, starting from the seed points, expanding outward step by step to find similar pixel points, thereby obtaining a face region.
Optionally, the method in step S10221 of selecting skin-color pixel points in the face detection area 110 as seed points and expanding outward from them to obtain the face region includes:
scanning the pixel points of the face detection area 110 in the first image, determining, among the pixel points adjacent to each seed point, the pixel points that meet a first similarity condition, and classifying them into the face region, the first similarity condition being that the similarity between an adjacent pixel point and the corresponding seed point exceeds a design threshold; and taking the pixel points newly classified into the face region as the latest seed points and continuing to expand outward in search of pixel points meeting the first similarity condition, until no pixel point meeting the first similarity condition remains.
It should be noted that, in the embodiment of the present application, the design threshold is a designed pixel gray-scale value.
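A minimal Python sketch of this successive outward expansion follows; the 8-neighbourhood and the numeric value of the design threshold are assumptions made for illustration.

    # Minimal sketch of the region growing described above: a breadth-first
    # expansion that absorbs neighbouring pixels whose grey value is within
    # a design threshold of the current seed point.
    from collections import deque

    import cv2
    import numpy as np

    def region_grow(image_bgr, seeds, threshold=12):
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
        height, width = gray.shape
        region = np.zeros((height, width), np.uint8)
        queue = deque(seeds)                      # (x, y) starting points
        for x, y in seeds:
            region[y, x] = 1
        while queue:
            x, y = queue.popleft()                # the latest seed point
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nx, ny = x + dx, y + dy
                    if (0 <= nx < width and 0 <= ny < height
                            and not region[ny, nx]):
                        # first similarity condition: grey values close enough
                        if abs(int(gray[ny, nx]) - int(gray[y, x])) < threshold:
                            region[ny, nx] = 1    # classify into the region
                            queue.append((nx, ny))
        return region                             # binary mask of the region

The same routine can grow the face region from skin-color seeds, the background region 120 from background-color seeds (step S10223), and the hair area from the hair seeds of step S1023.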
S10222, determining the contour of the face region as the first hairstyle contour line 111 of the first hair partition.
Optionally, as shown in fig. 2c, the contour of the determined face region is used as the first hairstyle contour line 111 of the first hair partition in the face detection area 110.
S10223, selecting pixel points outside the face detection area 110 that match the background color as seed points; and, starting from the seed points, expanding outward step by step to find similar pixel points, thereby obtaining the background region 120.
Optionally, environmental background pixel points are extracted from the environmental background captured by the camera without the user and serve as the background-color pixel points of the first image, so that the background region 120 can be obtained in the first image from these background-color pixel points.
S10224, determining the second hairstyle contour line 112 of the first hair partition according to the background region 120 and the face detection area 110.
Optionally, the method in step S10224 for determining the second hairstyle contour line 112 of the first hair partition according to the background region and the face detection area includes: determining whether each pixel on the inner contour of the background region 120 is located within the border of the face detection area 110; when a pixel is located within the border of the face detection area 110, attributing that pixel to the second hairstyle contour line 112 of the first hair partition; and when a pixel is not located within the border of the face detection area 110, attributing the border pixel corresponding to it to the second hairstyle contour line 112 of the first hair partition, thereby obtaining the second hairstyle contour line 112 shown in fig. 2c.
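A sketch of this contour classification, under the assumption that the background region is available as a binary mask and the face detection area as an axis-aligned rectangle:

    # Minimal sketch: collect the second hairstyle contour line from the
    # inner contour of the background region. Pixels inside the face
    # detection border are kept as-is; pixels outside it are clamped to the
    # border, as described above. RETR_CCOMP is used so that the hole the
    # head leaves in the background mask is returned as an inner contour.
    import cv2
    import numpy as np

    def second_hairstyle_contour(background_mask, box):
        x0, y0, x1, y1 = box                       # face detection border
        contours, _ = cv2.findContours(background_mask, cv2.RETR_CCOMP,
                                       cv2.CHAIN_APPROX_NONE)
        line = []
        for contour in contours:
            for x, y in contour.reshape(-1, 2):
                if x0 <= x <= x1 and y0 <= y <= y1:
                    line.append((int(x), int(y)))  # already within the border
                else:
                    # attribute the corresponding border pixel instead
                    line.append((int(min(max(x, x0), x1)),
                                 int(min(max(y, y0), y1))))
        return line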
In the embodiment of the present application, to distinguish the first hairstyle contour line 111 from the second hairstyle contour line 112, the first hairstyle contour line 111 is drawn as a thin line and the second hairstyle contour line 112 as a thick black line in fig. 2c.
S10225, determining the first hair partition according to the first hairstyle contour line and the second hairstyle contour line.
Optionally, as shown in fig. 2c, the area enclosed by the first hairstyle contour line 111 and the second hairstyle contour line 112 is the first hair partition.
Because the face detection area 110 contains pixel points of the same color as the background area 120, determining the second hairstyle contour line 112 of the first hair partition by combining the background area 120 with the face detection area 110 improves the detection precision of the first hair partition within the face detection area 110, and in turn the cutting precision of the hair area. This prevents the virtual hair style model image from interfering with the hair area of the user during the virtual try-on and improves the try-on experience.
In an embodiment of the present application, the method in step S104 for filling in the scalp region missing from the first image after the hair cut includes: determining at least two edge points according to a plurality of face key points of the first image; determining a curve representing the scalp of the face region according to the line connecting the face key points between the at least two edge points, and determining the scalp region missing from the first image according to the curve; and coloring the missing scalp region according to the skin color of the remaining part of the face region in the first image after the hair cut.
Optionally, as shown in fig. 5, edge points 1 and 13 lie at the temples of the face detection area 110 in the first image. In this embodiment, edge point 1 is taken as the starting point and edge point 13 as the end point, and the lines connecting edge points 1 through 13 determine the curve representing the scalp of the face region. This curve is the lower curve of the face region, i.e. the face-region curve below temple height; the upper curve of the face region is determined from the lower curve according to formula (1), which in turn determines the scalp region missing from the first image. The missing scalp region is then colored according to the skin color of the remaining part of the face region in the first image after the hair cut, so that its color matches the skin color of the face region. The upper curve of the face region is drawn as a bold line in fig. 2c.
[Formula (1), which maps each coordinate point x of the lower (scalp) curve to a coordinate point y of the upper curve by means of the radian coefficient α, appears only as an image (BDA0002887698740000081) in the original publication and is not reproduced here.]
Here, x is each coordinate point of the curve representing the scalp; α is a radian coefficient, which optionally takes the value 1.8 in this embodiment of the present application; and y is each coordinate point of the upper curve.
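For illustration, the sketch below closes the missing scalp region and colors it with the average remaining skin tone. Because formula (1) survives only as an image in this text, the upper-curve mapping used here (reflecting each lower-curve point about the temple line and scaling by α = 1.8) is an assumption, not the formula of the application.

    # Minimal sketch: complete the missing scalp region and colour it.
    # lower_curve: the (x, y) face key points 1..13 from fig. 5;
    # face_mask: binary mask of the face region remaining after the hair cut.
    import cv2
    import numpy as np

    def fill_scalp(image_bgr, lower_curve, face_mask, alpha=1.8):
        lower = np.asarray(lower_curve, dtype=np.float64)
        temple_y = (lower[0, 1] + lower[-1, 1]) / 2.0   # temple height
        upper = lower.copy()
        # Assumed stand-in for formula (1): reflect about the temple line,
        # scaled by the radian coefficient alpha.
        upper[:, 1] = temple_y - alpha * (lower[:, 1] - temple_y)
        polygon = np.vstack([lower, upper[::-1]]).astype(np.int32)
        scalp_mask = np.zeros(image_bgr.shape[:2], np.uint8)
        cv2.fillPoly(scalp_mask, [polygon], 1)
        skin = image_bgr[face_mask > 0]
        colour = skin.mean(axis=0).astype(np.uint8)     # remaining skin tone
        image_bgr[scalp_mask > 0] = colour              # colour the scalp
        return image_bgr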
In an embodiment of the present application, part of the hair of the user lies in the background area 120 and part may fall on the clothes the user is wearing, so after the hair area is cut off, the corresponding parts of the background area 120 and of the clothing area are missing, and the virtual hair style model image superimposed on the first image with the completed scalp region may not cover all of those missing parts. Therefore, after the user's head image after the virtual hair style replacement is determined in step S105 according to the virtual hair style model image and the first image with the completed scalp region, the method further includes:
determining the background area missing from the first image with the completed scalp region and filling it in according to the background image in the first image; and/or determining the clothing area missing from the first image with the completed scalp region and filling it in according to the clothing image in the first image.
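The application does not fix a particular filling algorithm for these regions; as one possibility, the sketch below uses OpenCV's Telea inpainting to synthesize the missing background or clothing pixels from their surroundings.

    # Minimal sketch: repair background / clothing pixels exposed by the
    # hair cut. missing_mask is an 8-bit single-channel mask, non-zero where
    # pixels are missing; Telea inpainting is one possible choice of filler.
    import cv2

    def fill_missing_regions(image_bgr, missing_mask):
        return cv2.inpaint(image_bgr, missing_mask, 3, cv2.INPAINT_TELEA)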
In an embodiment of the present application, to increase both the speed and the accuracy of identifying the hair region, the method in step S101 for acquiring the first image containing the user's head includes:
displaying a hair-tying prompt containing content that reminds the user to tie up their hair.
Optionally, the hair-tying prompt is delivered to the user via sound, light, or similar signals, reminding the user to tie up their hair so that it neither falls onto the clothes the user is wearing nor covers face areas such as the ears and cheeks.
When hair-tying feedback in response to the hair-tying prompt is received, the first image containing the user's head is acquired.
Optionally, after tying up their hair as prompted, the user can send the hair-tying feedback through the touch display screen; when the feedback is received, the first image containing the user's head, as shown in fig. 2a, is captured by the camera.
In an embodiment of the present application, to reduce the influence of differently colored pixel points in the background region 120 on the accuracy of identifying the hair region and to increase the identification speed, before the first image containing the user's head is acquired in step S101, the method further includes:
displaying a background-change prompt asking for a monochromatic background board to be placed.
Optionally, the background-change prompt is delivered to the user via sound, light, or similar signals.
When background-change feedback in response to the background-change prompt is received, an environment image containing the monochromatic background board is acquired.
Optionally, after placing the monochromatic background board, the user can send the background-change feedback through the touch display screen; when the feedback is received, the environment image containing the monochromatic background board is captured by the camera.
Moreover, after the user's head image after the virtual hair style replacement is determined in step S105 according to the virtual hair style model image and the first image with the completed scalp region, the method further includes:
determining the background area missing from the first image with the completed scalp region, and filling it in according to the environment image. Here, the environment image shows the monochromatic background board, so the missing background area in the first image with the completed scalp region is filled with pixel points of the same color as the board.
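With a monochromatic board, that filling step reduces to painting the missing pixels with the board color sampled from the environment image, as in this sketch (the median is an assumption used to suppress sensor noise):

    # Minimal sketch: fill the missing background with the backdrop colour
    # sampled from the environment image of the monochromatic board.
    import numpy as np

    def fill_with_backdrop(image_bgr, missing_mask, environment_bgr):
        colour = np.median(environment_bgr.reshape(-1, 3), axis=0)
        image_bgr[missing_mask > 0] = colour.astype(np.uint8)
        return image_bgr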
Optionally, the method in step S105 for determining the user's head image after the virtual hair style replacement according to the virtual hair style model image and the first image with the completed scalp region includes:
in the embodiment of the present application, selecting edge points 1, 7, and 13 as locating points, and adjusting the distance between the center point of the virtual hair style model image and the locating points to determine the position at which the virtual hair style model image is to be superimposed in the first image with the completed scalp region; superimposing the virtual hair style model image at that position in the first image with the completed scalp region shown in fig. 2e then yields the user's head image after the virtual hair style replacement shown in fig. 2f.
Based on the same inventive concept, an embodiment of the present application provides a virtual hair style replacement device 60, whose structural frame diagram is shown in fig. 6. The device includes:
an obtaining module 601, configured to obtain a first image containing the user's head;
an identifying module 602, configured to identify the hair region of the user from the first image;
a cutting module 603, configured to cut the hair region out of the first image to obtain a first image after the hair cut;
a completion module 604, configured to fill in the scalp region missing from the first image after the hair cut;
and a superimposing module 605, configured to determine the user's head image after the virtual hair style replacement according to the virtual hair style model image and the first image with the completed scalp region.
The virtual hair style replacement device 60 of this embodiment can execute any of the virtual hair style replacement methods provided by the embodiments of the present application; the implementation principles are similar and are not repeated here.
Based on the same inventive concept, the present application provides an electronic device 70, as shown in fig. 7, where the electronic device 70 includes: a processor 701; and
the memory 702, communicatively coupled to the processor 701, is configured to store machine readable instructions, which when executed by the processor 701, implement various alternative implementations of the virtual hair style replacement method provided by the embodiments of the present application.
Those skilled in the art will appreciate that the electronic device provided by the embodiments of the present application may be specially designed and manufactured for the required purposes, or may comprise known devices in a general-purpose computer. These devices store computer programs that are selectively activated or reconfigured. Such a computer program may be stored in a device-readable (e.g., computer-readable) medium, or in any type of medium suitable for storing electronic instructions and coupled to a bus.
Alternatively, the processor 701 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 701 may also be a combination that implements computing functions, for example a combination of one or more microprocessors, or of a DSP and a microprocessor.
The processor 701 and the memory 702 are connected by a bus 703 for communication. The bus 703 may include a path for transferring information between the above components, and may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 703 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 7, but this does not mean that there is only one bus or one type of bus.
The memory 702 may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
Optionally, the electronic device 70 may also include a transceiver 704, which may be used for receiving and transmitting signals and may allow the electronic device 70 to communicate wirelessly or by wire with other devices to exchange data. Note that in practice the number of transceivers is not limited to one. For example, in an embodiment of the present application, the transceiver 704 may comprise a Bluetooth transceiver, through which the user may transmit a first image containing the user's head to the electronic device 70 for processing.
Optionally, the electronic device 70 may further include an input unit 705, which may be used to receive input numeric, character, image and/or sound information, or to generate key-signal inputs related to user settings and function control of the electronic device. The input unit 705 may include, but is not limited to, one or more of a touch screen, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, a camera, a microphone, and the like. In the embodiment of the present application, the input unit 705 is a camera, through which the first image containing the user's head may be captured.
Optionally, the electronic device 70 may further include an output unit 706, which may be used to output or display the information processed by the processor 701. The output unit may include, but is not limited to, one or more of a display device, a speaker, a vibration device, and the like. In the embodiment of the present application, the output unit 706 includes a display device, on which the user's head image after the virtual hair style replacement may be displayed.
It should be noted that fig. 7 is merely a schematic diagram of a computing system that may be used to implement various embodiments of the present application. Those skilled in the art will appreciate that the electronic device 70 may be implemented with the introduction of additional computing devices.
Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium, where the computer-readable storage medium is used to store computer instructions, and when the computer instructions are executed on an electronic device, the method for replacing a virtual hair style provided in the embodiments of the present application is implemented.
The computer-readable storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical discs, CD-ROMs, and magneto-optical disks), ROMs, RAMs, EPROMs (Erasable Programmable Read-Only Memory), EEPROMs, flash memory, magnetic cards, or optical cards. That is, a readable storage medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
The computer-readable storage medium provided in the embodiments of the present application shares the inventive concept of the embodiments described above; for content not detailed here, refer to those embodiments, which are not repeated.
By applying the embodiment of the application, at least the following beneficial effects can be realized:
in the embodiments of the present application, the hair region of the user is cut out of the first image containing the user's head, the scalp region left missing after the cut is filled in, and the user's head image after the virtual hair style replacement is then determined from the virtual hair style model image and the first image with the completed scalp region. This prevents the existing hair of the user from interfering with the virtual hair style model image during the virtual try-on and improves the try-on experience.
Filling in the missing scalp region also keeps the gap from disturbing the user visually, which improves the display effect of the virtual hair style model image and further improves the try-on experience.
Those of skill in the art will appreciate that the various operations, methods, and steps in the processes, acts, or solutions discussed in this application can be interchanged, modified, combined, or deleted. Further, other steps, measures, or schemes in the various operations, methods, or flows discussed in this application can also be alternated, altered, rearranged, decomposed, combined, or deleted, as can steps, measures, and schemes in the prior art that share the various operations, methods, and procedures disclosed in the present application.
The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless otherwise specified.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed strictly in that order; unless explicitly stated herein, their execution order is not strictly limited and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and whose execution order is not necessarily sequential: they may be performed in turn, or alternately with other steps or with at least part of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make several improvements and refinements without departing from the principles of the present application, and such improvements and refinements shall also fall within the protection scope of the present application.

Claims (11)

1. A method of replacing a virtual hairstyle, comprising:
acquiring a first image containing a user's head;
identifying a hair region of the user from the first image;
cutting the hair region out of the first image to obtain a first image after the hair cut;
filling in the scalp region missing from the first image after the hair cut;
and determining the user's head image after the virtual hairstyle replacement according to the virtual hairstyle model image and the first image with the completed scalp region.
2. The method for replacing a virtual hair style according to claim 1, wherein the identifying a hair region of the user from the first image comprises:
determining a face detection region of the user from the first image;
determining a first hair partition in the face detection region based on face pixel points inside the face detection region and background pixel points outside the face detection region;
and determining the hair region based on hair pixel points in the first hair partition.
3. The method for replacing a virtual hair style according to claim 2, wherein the determining a first hair partition in the face detection region based on face pixel points inside the face detection region and background pixel points outside the face detection region comprises:
selecting pixel points in the face detection region that match the face skin color as seed points, and, starting from the seed points, expanding outward step by step to find similar pixel points, thereby obtaining a face region;
determining the contour of the face region as a first hairstyle contour line of the first hair partition;
selecting pixel points outside the face detection region that match the background color as seed points, and, starting from the seed points, expanding outward step by step to find similar pixel points, thereby obtaining a background region;
and determining a second hairstyle contour line of the first hair partition according to the background region and the face detection region.
4. The method for replacing a virtual hair style according to claim 3, wherein the expanding outward step by step from the seed points to find similar pixel points and obtain the face region comprises:
determining, among the pixel points adjacent to each seed point, pixel points meeting a first similarity condition and classifying them into the face region, the first similarity condition being that the similarity between an adjacent pixel point and the corresponding seed point exceeds a design threshold;
and taking the pixel points newly classified into the face region as the latest seed points and continuing to expand outward in search of pixel points meeting the first similarity condition, until no pixel point meeting the first similarity condition remains.
5. The method for replacing a virtual hair style according to claim 3, wherein the determining a second hairstyle contour line of the first hair partition according to the background region and the face detection region comprises:
determining whether each pixel on an inner contour of the background region is located within a border of the face detection region;
when a pixel is located within the border of the face detection region, attributing the pixel to the second hairstyle contour line of the first hair partition;
and when a pixel is not located within the border of the face detection region, attributing the border pixel corresponding to that pixel to the second hairstyle contour line of the first hair partition.
6. The method for replacing a virtual hair style according to claim 2, wherein the determining the hair region based on hair pixel points in the first hair partition comprises:
selecting pixel points in the face detection region that match hair characteristics as seed points, and, starting from the seed points, expanding outward step by step to find similar pixel points, thereby obtaining the hair region;
wherein the hair characteristics include at least one of: hair color, light-reflecting characteristics, and appearance characteristics.
7. The method for replacing a virtual hairstyle according to claim 1, wherein the filling in the scalp region missing from the first image after the hair cut comprises:
determining at least two edge points according to a plurality of face key points of the first image;
determining a curve representing the scalp of the face region according to the line connecting the face key points between the at least two edge points, and determining the missing scalp region in the first image according to the curve;
and coloring the missing scalp region according to the skin color of the remaining part of the face region in the first image after the hair cut.
8. The method for replacing a virtual hair style according to claim 1, wherein the acquiring a first image containing a user's head comprises:
displaying a hair-tying prompt containing content reminding the user to tie up their hair;
and acquiring the first image containing the user's head when hair-tying feedback in response to the hair-tying prompt is received.
9. The method for replacing a virtual hair style according to claim 8, further comprising, before the acquiring of the first image containing the user's head:
displaying a background-change prompt asking for a monochromatic background board to be placed;
and acquiring an environment image containing the monochromatic background board when background-change feedback in response to the background-change prompt is received;
and, after the determining of the user's head image after the virtual hair style replacement according to the virtual hair style model image and the first image with the completed scalp region, further comprising:
determining the background region missing from the first image with the completed scalp region;
and filling in the missing background region according to the environment image.
10. An electronic device, comprising:
a processor; and
a memory, communicatively connected to the processor, configured to store machine-readable instructions which, when executed by the processor, implement the method of replacing a virtual hair style according to any one of claims 1 to 9.
11. A computer-readable storage medium storing computer instructions which, when run on an electronic device, implement the method of replacing a virtual hair style according to any one of claims 1 to 9.
CN202110018056.XA 2021-01-07 2021-01-07 Virtual hair style replacing method, electronic equipment and storage medium Pending CN112734633A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110018056.XA CN112734633A (en) 2021-01-07 2021-01-07 Virtual hair style replacing method, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110018056.XA CN112734633A (en) 2021-01-07 2021-01-07 Virtual hair style replacing method, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112734633A true CN112734633A (en) 2021-04-30

Family

ID=75591020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110018056.XA Pending CN112734633A (en) 2021-01-07 2021-01-07 Virtual hair style replacing method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112734633A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938603A (en) * 2021-09-09 2022-01-14 联想(北京)有限公司 Image processing method and device and electronic equipment
CN114187633A (en) * 2021-12-07 2022-03-15 北京百度网讯科技有限公司 Image processing method and device, and training method and device of image generation model
CN114565521A (en) * 2022-01-17 2022-05-31 北京新氧科技有限公司 Image restoration method, device, equipment and storage medium based on virtual reloading
CN116503924A (en) * 2023-03-31 2023-07-28 广州翼拍联盟网络技术有限公司 Portrait hair edge processing method and device, computer equipment and storage medium
WO2023239299A1 (en) * 2022-06-10 2023-12-14 脸萌有限公司 Image processing method and apparatus, electronic device, and storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800129A (en) * 2012-06-20 2012-11-28 浙江大学 Hair modeling and portrait editing method based on single image
US20140176565A1 (en) * 2011-02-17 2014-06-26 Metail Limited Computer implemented methods and systems for generating virtual body models for garment fit visualisation
US20160154993A1 (en) * 2014-12-01 2016-06-02 Modiface Inc. Automatic segmentation of hair in images
CN107742273A (en) * 2017-10-13 2018-02-27 广州帕克西软件开发有限公司 A kind of virtual try-in method of 2D hair styles and device
CN108053366A (en) * 2018-01-02 2018-05-18 联想(北京)有限公司 A kind of image processing method and electronic equipment
CN108564526A (en) * 2018-03-30 2018-09-21 北京金山安全软件有限公司 Image processing method and device, electronic equipment and medium
CN109408653A (en) * 2018-09-30 2019-03-01 叠境数字科技(上海)有限公司 Human body hair style generation method based on multiple features retrieval and deformation
CN109493160A (en) * 2018-09-29 2019-03-19 口碑(上海)信息技术有限公司 A kind of virtual examination forwarding method, apparatus and system
CN109903257A (en) * 2019-03-08 2019-06-18 上海大学 A kind of virtual hair-dyeing method based on image, semantic segmentation
CN109949207A (en) * 2019-01-31 2019-06-28 深圳市云之梦科技有限公司 Virtual objects synthetic method, device, computer equipment and storage medium
CN110111246A (en) * 2019-05-15 2019-08-09 北京市商汤科技开发有限公司 A kind of avatars generation method and device, storage medium
CN110689546A (en) * 2019-09-25 2020-01-14 北京字节跳动网络技术有限公司 Method, device and equipment for generating personalized head portrait and storage medium
CN110782515A (en) * 2019-10-31 2020-02-11 北京字节跳动网络技术有限公司 Virtual image generation method and device, electronic equipment and storage medium
US20200175729A1 (en) * 2018-12-04 2020-06-04 Nhn Corporation Deep learning based virtual hair dyeing method and method for providing virual hair dyeing service using the same
CN111738910A (en) * 2020-06-12 2020-10-02 北京百度网讯科技有限公司 Image processing method and device, electronic equipment and storage medium
WO2020207997A1 (en) * 2019-04-09 2020-10-15 Koninklijke Philips N.V. Modifying an appearance of hair

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140176565A1 (en) * 2011-02-17 2014-06-26 Metail Limited Computer implemented methods and systems for generating virtual body models for garment fit visualisation
CN102800129A (en) * 2012-06-20 2012-11-28 浙江大学 Hair modeling and portrait editing method based on single image
US20140233849A1 (en) * 2012-06-20 2014-08-21 Zhejiang University Method for single-view hair modeling and portrait editing
US20160154993A1 (en) * 2014-12-01 2016-06-02 Modiface Inc. Automatic segmentation of hair in images
CN107742273A (en) * 2017-10-13 2018-02-27 广州帕克西软件开发有限公司 A kind of virtual try-in method of 2D hair styles and device
CN108053366A (en) * 2018-01-02 2018-05-18 联想(北京)有限公司 A kind of image processing method and electronic equipment
CN108564526A (en) * 2018-03-30 2018-09-21 北京金山安全软件有限公司 Image processing method and device, electronic equipment and medium
CN109493160A (en) * 2018-09-29 2019-03-19 口碑(上海)信息技术有限公司 A kind of virtual examination forwarding method, apparatus and system
CN109408653A (en) * 2018-09-30 2019-03-01 叠境数字科技(上海)有限公司 Human body hair style generation method based on multiple features retrieval and deformation
US20200401842A1 (en) * 2018-09-30 2020-12-24 Plex-Vr Digital Technology (Shanghai) Co., Ltd. Human Hairstyle Generation Method Based on Multi-Feature Retrieval and Deformation
US20200175729A1 (en) * 2018-12-04 2020-06-04 Nhn Corporation Deep learning based virtual hair dyeing method and method for providing virual hair dyeing service using the same
CN109949207A (en) * 2019-01-31 2019-06-28 深圳市云之梦科技有限公司 Virtual objects synthetic method, device, computer equipment and storage medium
CN109903257A (en) * 2019-03-08 2019-06-18 上海大学 A kind of virtual hair-dyeing method based on image, semantic segmentation
WO2020207997A1 (en) * 2019-04-09 2020-10-15 Koninklijke Philips N.V. Modifying an appearance of hair
CN110111246A (en) * 2019-05-15 2019-08-09 北京市商汤科技开发有限公司 A kind of avatars generation method and device, storage medium
CN110689546A (en) * 2019-09-25 2020-01-14 北京字节跳动网络技术有限公司 Method, device and equipment for generating personalized head portrait and storage medium
CN110782515A (en) * 2019-10-31 2020-02-11 北京字节跳动网络技术有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN111738910A (en) * 2020-06-12 2020-10-02 北京百度网讯科技有限公司 Image processing method and device, electronic equipment and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938603A (en) * 2021-09-09 2022-01-14 联想(北京)有限公司 Image processing method and device and electronic equipment
CN113938603B (en) * 2021-09-09 2023-02-03 联想(北京)有限公司 Image processing method and device and electronic equipment
CN114187633A (en) * 2021-12-07 2022-03-15 北京百度网讯科技有限公司 Image processing method and device, and training method and device of image generation model
CN114565521A (en) * 2022-01-17 2022-05-31 北京新氧科技有限公司 Image restoration method, device, equipment and storage medium based on virtual reloading
WO2023239299A1 (en) * 2022-06-10 2023-12-14 脸萌有限公司 Image processing method and apparatus, electronic device, and storage medium
CN116503924A (en) * 2023-03-31 2023-07-28 广州翼拍联盟网络技术有限公司 Portrait hair edge processing method and device, computer equipment and storage medium
CN116503924B (en) * 2023-03-31 2024-01-26 广州翼拍联盟网络技术有限公司 Portrait hair edge processing method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112734633A (en) Virtual hair style replacing method, electronic equipment and storage medium
CN108010112B (en) Animation processing method, device and storage medium
US10599914B2 (en) Method and apparatus for human face image processing
CN112288665B (en) Image fusion method and device, storage medium and electronic equipment
CN110419061B (en) Mixed reality system and method for generating virtual content using the same
CN110766777A (en) Virtual image generation method and device, electronic equipment and storage medium
EP3839879B1 (en) Facial image processing method and apparatus, image device, and storage medium
CN111435433B (en) Information processing device, information processing method, and storage medium
US20080165187A1 (en) Face Image Synthesis Method and Face Image Synthesis Apparatus
EP2958035A1 (en) Simulation system, simulation device, and product description assistance method
CN112419170A (en) Method for training occlusion detection model and method for beautifying face image
CN111950056B (en) BIM display method and related equipment for building informatization model
CN112419144B (en) Face image processing method and device, electronic equipment and storage medium
KR20200107957A (en) Image processing method and device, electronic device and storage medium
CN110796721A (en) Color rendering method and device of virtual image, terminal and storage medium
CN114175113A (en) Electronic device for providing head portrait and operation method thereof
CN114187633A (en) Image processing method and device, and training method and device of image generation model
JP2000322588A (en) Device and method for image processing
US20220292690A1 (en) Data generation method, data generation apparatus, model generation method, model generation apparatus, and program
CN110070481A (en) Image generating method, device, terminal and the storage medium of virtual objects for face
CN110414345A (en) Cartoon image generation method, device, equipment and storage medium
CN110189252A (en) The method and apparatus for generating average face image
CN116977539A (en) Image processing method, apparatus, computer device, storage medium, and program product
CN114743252A (en) Feature point screening method, device and storage medium for head model
CN114299270A (en) Special effect prop generation and application method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination