CN117389676B - Intelligent hairstyle adaptive display method based on display interface - Google Patents

Intelligent hairstyle adaptive display method based on display interface

Info

Publication number
CN117389676B
CN117389676B (application number CN202311705889.9A)
Authority
CN
China
Prior art keywords
head
user
hairstyle
model
terminal equipment
Prior art date
Legal status
Active
Application number
CN202311705889.9A
Other languages
Chinese (zh)
Other versions
CN117389676A (en)
Inventor
杨佳霖
陶权义
Current Assignee
Chengdu Baize Zhihui Technology Co ltd
Original Assignee
Chengdu Baize Zhihui Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Baize Zhihui Technology Co ltd filed Critical Chengdu Baize Zhihui Technology Co ltd
Priority to CN202311705889.9A priority Critical patent/CN117389676B/en
Publication of CN117389676A publication Critical patent/CN117389676A/en
Application granted granted Critical
Publication of CN117389676B publication Critical patent/CN117389676B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Abstract

The invention belongs to the technical field of artificial intelligence algorithms and discloses an intelligent hairstyle adaptation display method based on a display interface. According to the invention, a head model of the user is acquired through terminal equipment and refined in several steps, so that the best matching effect can be determined. Compared with the existing direct matching approach, a plurality of feature points are located and identified and a three-dimensional point-set model is used for data processing; the result is displayed on the display interface as a three-dimensional image, so that the user can conveniently view, from different angles, a virtual head image that is close to the actual head.

Description

Intelligent hairstyle adaptive display method based on display interface
Technical Field
The invention belongs to the technical field of artificial intelligence algorithm assistance, and particularly relates to an intelligent hairstyle adaptation display method based on a display interface.
Background
As quality of life in modern society gradually improves, people pay more attention to their personal hairstyle. Like make-up, a hairstyle is a major feature by which appearance is judged, and it is also an expression of personal aesthetics and pursuit. Most people choose a hairstyle by looking at existing pictures or videos of others and then settling on a design after discussion with a barber shop or hairstylist. At the same time, most people have no concrete hairstyle in mind before a haircut but do have a strong need for hairstyle adaptation: they would like to know, before the haircut, how well various hairstyles fit their own face, that is, which hairstyle suits them best, so as to avoid an unsatisfactory result afterwards.
To meet this common pre-haircut need, some hairstyle matching methods have been provided in the prior art. At first, whether a hairstyle matched was simply checked by overlaying drawings of different hairstyles on a front view of the user's face; later, some users judged whether a hairstyle matched their face by trying on wigs. With the continuous development of AI and camera technology, some cosmetic auxiliary equipment in the prior art, either dedicated devices or software running on terminal equipment with display and camera functions, can collect and identify the user's face and then directly superimpose hairstyles from a model library onto the user's head portrait for display on a screen. Such software replaces the earlier approaches, achieves a better matching effect at low cost, and lets the user roughly understand how various hairstyles fit.
However, the existing software-based matching methods use only two-dimensional display: a front photograph of the hairstyle is matched with a front head portrait. Hair differs from make-up in that a hairstyle includes design features at many angles around the circumference of the head, so with only a front view, which is moreover subject to distortion, the user cannot obtain a reasonably realistic effect picture. Once the user wants to see the features of the hairstyle at the back of the head or at the side, no matching image over the full circumference is available, and the matching effect is poor.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an intelligent hairstyle adaptive display method based on a display interface, which guides a user to obtain an optimally matched hairstyle head portrait on existing terminal equipment, synthesizes a three-dimensional image in the terminal equipment for circumferential display, and makes it convenient for the user to check more intuitively how different hairstyles look on his or her own head.
The technical scheme adopted by the invention is as follows:
in a first aspect, the invention discloses an intelligent hair style adaptation display method based on a display interface, which guides a user to perform hair style adaptation based on terminal equipment with a camera assembly and a display interface, and comprises the following steps:
s100, firstly, aligning a camera component to a user head entering a recognition range of terminal equipment to confirm, and guiding the user to acquire a head image through the terminal equipment after confirming the head position;
s200, the terminal equipment acquires a plurality of head images with different angles, then performs data processing, establishes a head model of a user by using the acquired head images through a preset modeling program, and displays the head model on a display interface in a three-dimensional animation surface after generation;
s300, after the displayed head model is confirmed by a user, the terminal equipment acquires the range of the hairstyle area of the head model according to the characteristic points, peels off the hairstyle area in the head model, forms a epidermis layer image in the hairstyle area in the head model according to the characteristic points, defines the head model with the epidermis layer image as an optical head model, and displays the head model on a display interface after being independently stored;
s400, after confirming the displayed optical head model by a user, performing head pre-matching in a hairstyle database preset in terminal equipment according to a plurality of characteristic points in a hairstyle area on the optical head model, after selecting a corresponding one or a plurality of hairstyles in the hairstyle database on the terminal equipment, matching the selected hairstyles in the optical head model by the terminal equipment to form a hairstyle matching model, and displaying the hairstyle matching model on a display interface;
s500, when the hairstyle matching model is displayed, the terminal equipment obtains the relative spatial position of the head of the current user compared with the plane of the display interface through the head image, the displayed hairstyle matching model is displayed on the display interface in a mirror image mode, and the client finally determines the hairstyle matching model corresponding to the hairstyle and then stores and transmits the data of the hairstyle matching model to the user mobile terminal equipment.
With reference to the first aspect, the present invention provides a first implementation manner of the first aspect, and the specific steps of S100 are as follows:
s101, firstly, after a terminal device is started, keeping an acquisition state, confirming whether a complete head exists in an identification range or not by using a head pre-positioning feature point of a user, and when the head of the user does not completely exist in the identification range, guiding the user to move the head into the identification area by the terminal device;
s102, after the head of the user is completely in the identification area, the terminal equipment identifies and determines the coordinates of the relative display interface based on the head pre-positioning feature points, then the head gesture of the current user is displayed on the display interface by using a general head model, and the position correction and guide are carried out on the head of the user through the terminal equipment, so that the head is maintained after being adjusted to the standard gesture in the identification area by the user;
s103, after the head gesture of the user is adjusted to the standard gesture, the terminal equipment guides the user to move the head according to the set standard, the terminal equipment analyzes and confirms the head image frame by frame after the head image of 360 degrees is acquired once, and if the head image is missing in a certain area range, the user is guided to move the head to the corresponding gesture to perform image compensation acquisition until the user is prompted to acquire the complete head image.
With reference to the first aspect, in a second implementation manner of the first aspect, in S200, after the terminal equipment obtains the head images by 360-degree circumferential scanning, all acquired head images are divided into a plurality of image groups by the head pre-positioning feature points; on a pre-stored head model template, with the pre-positioning feature points as the ordering reference, the external head images of the area determined by each pre-positioning feature point are generated in turn from the images of the corresponding image group, and the information of all points is then processed to form the head model.
With reference to the second implementation manner of the first aspect, the present invention provides a third implementation manner of the first aspect, and the specific steps of S200 are as follows:
s201, grouping the acquired head images by the terminal equipment through the acquired head preset feature points of the user, dividing the head areas of the user through the preset feature points, and matching the divided head areas with the head images corresponding to the grouping through the head template preset feature points pre-stored in the terminal equipment;
s202, the terminal equipment sequentially processes each group of head images by using pre-positioning feature points, obtains space coordinates and color values of all pixel points in the head area, and matches the pixel points in the head area on a head template in a point set mode;
s203, attaching the color value and smoothing the pixel points by means of a smoothing algorithm after all the pixel points on the whole head template are matched, and displaying the head model of the user on a display interface after the processing.
With reference to the first aspect, the present invention provides a fourth implementation manner of the first aspect, in which, in S300, the hairstyle area range is obtained as follows:
the terminal equipment displays the user's head model as a three-dimensional image on the display interface, then guides the user to gather and lift the hair upward so as to expose the bottommost circumferential hairline, and then guides the user to rotate the head through 360 degrees while images of the circumferential hairline are acquired; after acquisition, the circumferential hairline is optimized by image processing onto the coordinates of the head model, and the hairstyle area range is determined with the circumferential hairline as its bottommost boundary.
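With the circumferential hairline mapped onto model coordinates, selecting the hairstyle area reduces to keeping the points above that boundary. A deliberately simplified sketch that collapses the closed 3D hairline curve to a single height value:

```python
def hairstyle_region(model_points, hairline_height):
    """Select the head-model points at or above the circumferential hairline,
    using the hairline as the bottommost boundary of the hairstyle area.
    Representing the hairline as one height value is a simplification; on
    the device it is a closed curve fitted to the model coordinates.
    model_points: list of (x, y, z) tuples, z pointing up."""
    return [p for p in model_points if p[2] >= hairline_height]
```

A faithful implementation would compare each point against the hairline height at that point's azimuth rather than a global constant.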
With reference to the first aspect, the present invention provides a fifth implementation manner of the first aspect, in which, in S300, the feature points are obtained as follows:
after the hairstyle area has been stripped, the terminal equipment displays an image simulating the scalp over the stripped area using the original head template on the head model; lines or series of points are then drawn on the head model as guide marks, forming a plurality of guide marks symmetrical about the centerline of the head; the user is then guided to press down or part the hair at the guide marks, proceeding in one direction from one side of the face to the other in sequence; after the hair has been held pressed or parted for 1 to 3 seconds, the camera assembly acquires a head image, a plurality of feature points spaced equidistantly along the guide mark are obtained from the pressed or parted position in that head image, and once all feature points have been acquired the resulting feature point set of the scalp is used to simulate and form the optical head model.
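The feature points "equidistant along the guide marks" can be produced by arc-length sampling of the mark. A 2D sketch (on the device the marks are curves on the scalp surface; the function name is an assumption):

```python
import math

def equidistant_points(polyline, n):
    """Sample n points at equal arc-length spacing along a guide-mark
    polyline, matching the equidistant feature points described in S300.
    polyline: list of (x, y) vertices, n >= 2."""
    seg = [math.dist(a, b) for a, b in zip(polyline, polyline[1:])]
    total = sum(seg)
    points, acc, i = [], 0.0, 0
    for k in range(n):
        target = total * k / (n - 1)          # arc length of the k-th sample
        while i < len(seg) - 1 and acc + seg[i] < target:
            acc += seg[i]
            i += 1
        t = (target - acc) / seg[i] if seg[i] else 0.0
        ax, ay = polyline[i]
        bx, by = polyline[i + 1]
        points.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return points
```

Extending this to the scalp surface would use geodesic distance along the mesh instead of planar distance.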
With reference to the first aspect, the present invention provides a sixth implementation manner of the first aspect, in which, in S300, the feature points are obtained as follows:
after the hairstyle area has been stripped, the terminal equipment displays an image simulating the scalp over the stripped area using the original head template on the head model; lines or series of points are then drawn on the head model as guide marks, forming a plurality of guide marks symmetrical about the centerline of the head;
the terminal equipment guides the user to press the hair down at the current guide mark with a flexible ruler wider than 2 cm and to hold that state for 1 to 3 seconds; the camera assembly acquires a head image at that moment, a plurality of feature points spaced equidistantly along the guide mark are obtained from the pressed position in that head image, and once all feature points have been acquired the resulting feature point set of the scalp is used to simulate and form the optical head model.
With reference to the first aspect, the present invention provides a seventh implementation manner of the first aspect, in which, in S300, the feature points are obtained as follows:
after the hairstyle area has been stripped, the terminal equipment displays an image simulating the scalp over the stripped area using the original head template on the head model; the user is then guided to use a hard roller bearing positioning marks and, via the terminal equipment, to roll it at a fixed speed in one direction from one side of the face to the other, perpendicular to the normal direction of the face; during this process the camera assembly shoots continuously to obtain the successive positions of the positioning marks on the hard roller, a plurality of feature points of the scalp are determined by processing the coordinates of the positioning marks, and once all feature points have been obtained the resulting feature point set of the scalp is used to simulate and form the optical head model.
With reference to the seventh implementation manner of the first aspect, the present invention provides an eighth implementation manner of the first aspect, wherein the positioning marks on the hard roller are optical identification marks whose ends in contact with the scalp have a fixed length; the user is guided to keep the roller pressed firmly against the scalp while rolling along the hairstyle area, and the optical identification marks remain within the frame of the camera assembly throughout the rolling.
With reference to any of the foregoing implementation manners of the first aspect, the present invention provides a ninth implementation manner of the first aspect, wherein the camera assembly is one of a binocular camera and a structured-light camera assembly.
The beneficial effects of the invention are as follows:
(1) According to the invention, the head model can be obtained through the terminal equipment and refined in several steps so that the best matching effect is determined; compared with the existing direct matching approach, a plurality of feature points are located and identified and a three-dimensional point-set model is used for data processing, so that the result is displayed on the display interface as a three-dimensional image and the user can conveniently view, from different angles, a virtual head image close to the actual head;
(2) According to the invention, the feature points of the scalp layer are obtained by several means, so that a more realistic head model is obtained losslessly, without changing the state of the user's hair; the original hairstyle is stripped completely and the user's scalp data are restored, so that hairstyles in the database can be fitted according to the scalp positions, achieving a better adaptation effect;
(3) According to the invention, the virtual or real head model is displayed and the user's real head position is mirrored on the display interface, so that a better display effect is achieved through relatively realistic position feedback and the freedom of automatically recognizing and tracking the pose without manual operation; the user can simply turn or tilt the head to check the hairstyle from the side, the matched image need not be reprocessed and adjusted while the head pose changes as in the prior art, and the terminal equipment only needs to detect the user's head pose and position, avoiding recalculation and re-optimization of the displayed image during pose adjustment.
Detailed Description
The invention will be further illustrated with reference to specific examples.
For the purposes of making the objectives, technical solutions, and advantages of the embodiments of the present application clear, the described embodiments are some, but not all, of the embodiments of the present application. The components of the embodiments of the present application, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations.
Example 1:
The embodiment discloses an intelligent hairstyle adaptation display method based on a display interface, which guides a user through hairstyle adaptation on terminal equipment having a camera assembly and a display interface. The terminal equipment in this embodiment does not refer to one specific device and covers various application types, for example a high-performance mobile phone or tablet computer with the software built in, equipped with front dual cameras or a structured-light acquisition module, whose system exposes ports through which the data of several sensors are obtained for processing.
In this embodiment an integrated desk-type device is adopted. It has a display screen flanked by reflective surfaces; specifically it is a three-sided cabinet structure with three panels of equal area at equal included angles arranged on its upper portion. The middle panel faces the user and carries the display screen on which the display interface is shown. The panels on the two sides are reflective surfaces that reflect natural light. The device also has a camera assembly, and a processing module, typically a desktop computer, is arranged inside, with the capability of processing stored data and performing graphics processing.
Based on the desktop device, the provided adaptation display method comprises the following steps:
First, the equipment is always in a standby state and is provided with an activation sensor; when a user triggers the activation sensor, the whole equipment starts, the camera assembly confirms the user's head entering the recognition range of the terminal equipment, and after the head position is confirmed, the user is guided by the terminal equipment through the acquisition of head images.
Then, the device acquires a plurality of head images of the user at different angles and performs data processing; a head model of the user is built from the acquired head images by a preset modeling program and, once generated, is displayed on the display interface in the form of a three-dimensional animation.
After the displayed head model is confirmed by the user, the device determines the feature points on the head model, obtains the range of the hairstyle area of the head model according to those feature points, strips the hairstyle area from the head model, forms an epidermis-layer image over the hairstyle area of the head model according to the feature points, defines the head model bearing this epidermis-layer image as the optical head model, stores it separately, and displays it on the display interface.
After the displayed optical head model is confirmed by the user, a plurality of bald-head feature points in the hairstyle area of the optical head model are determined according to the facial features, and head-shape pre-matching is performed in a hairstyle database preset in the terminal equipment according to the facial feature points and the bald-head feature points; after the user selects one or more corresponding hairstyles from the hairstyle database on the terminal equipment, the terminal equipment fits each selected hairstyle onto the optical head model to form a hairstyle matching model and displays it on the display interface.
While the hairstyle matching model is displayed, the terminal equipment obtains, from the head images, the spatial position of the current user's head relative to the plane of the display interface, and the hairstyle matching model is displayed on the display interface in mirror-image fashion; after the user finally settles on the hairstyle matching model of the chosen hairstyle, its data are stored and transmitted to the user's mobile terminal equipment.
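The mirror-image display reflects the user's head position about the plane of the display interface. Assuming, purely for illustration, that this plane is z = screen_z in the device coordinate frame:

```python
def mirror_point(head_pos, screen_z=0.0):
    """Reflect the user's head position about the display-interface plane
    (assumed to be the plane z = screen_z) so the hairstyle matching model
    appears at the mirrored location, like a reflection in a real mirror.
    head_pos: (x, y, z) in the device coordinate frame (assumed layout)."""
    x, y, z = head_pos
    return (x, y, 2 * screen_z - z)
```

The same reflection applied to the model's orientation would make a left turn of the real head appear as a left turn of the on-screen head, which is what makes mirror-mode display feel natural.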
The modeling program is prior art. Its principle is that the device guides the user to adjust posture while head images are obtained, feature analysis is performed on the images, the depth data of the binocular camera are used to determine the contour data in each image, one datum point is taken as the reference point, and a point set scaled to actual size is built in a virtual three-dimensional coordinate system. Each unit point in the point set has corresponding coordinates, which makes it convenient for the equipment to adjust and color each unit point in the subsequent data processing.
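The binocular depth recovery that such a modeling program relies on follows the textbook stereo relation Z = f * B / d. This is standard stereo geometry rather than a formula stated in the patent:

```python
def depth_from_disparity(focal_px, baseline, disparity_px):
    """Pinhole stereo relation Z = f * B / d for a rectified binocular pair:
    focal length in pixels, camera baseline in any length unit, and the
    pixel disparity of a matched point.  The returned depth is in the same
    unit as the baseline."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline / disparity_px
```

Applied to every matched pixel pair, this yields the depth values from which the contour point set of each view is assembled.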
The several head images are acquired under guidance according to the minimum number and corresponding poses required by the model. The head model used by the device in this embodiment comes from a program continuously optimized by machine learning on many samples using an existing computational model; because it has undergone extensive optimization training on the head features of people in the user's region, all feature point data of the whole three-dimensional head model can be obtained smoothly from images of a few specific poses. The accuracy is not limited in this embodiment, and mature offerings from different existing technology providers can be procured as required.
The feature points of the head model are defined mainly by relying on the facial features, bones, and other physical characteristics of a normal person (for example, points with concave-convex features such as the ears, the brow ridges, and the mandible) so as to determine the facial area; the separation line between hair and face is then determined from the bright-dark boundary data in the gray-level image, the hair is stripped accordingly, and a bald head model is formed automatically by smooth fitting from the facial features and the original hair area. The head feature points are defined by arranging a certain number of points uniformly, at equal spacing, over the head area of the automatically formed model. They are mainly used for hairstyle matching: they avoid the situation in which the sizes and proportions of hairstyle features cannot be matched directly onto the head model, and they serve as references for positioning calculations such as proportional scaling and stretching, so that the hairstyle matching model on the display interface looks more realistic.
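The bright-dark boundary used to separate hair from face can, in the simplest case, be found by scanning a grey-level column for a dark-to-bright transition. The threshold value below is an illustrative assumption; the patent gives no number:

```python
def hairline_row(gray_column, threshold=80):
    """Scan one image column from the top and return the first row index at
    which the grey value crosses from dark (hair) to bright (skin), a
    minimal stand-in for the bright-dark boundary detection used to find
    the hair/face separation line.  Returns None if no crossing is found.
    gray_column: grey values 0-255 from top of image downward."""
    for row, (above, below) in enumerate(zip(gray_column, gray_column[1:])):
        if above < threshold <= below:
            return row + 1      # first bright (skin) row
    return None
```

Running this over every column and fitting a smooth curve through the returned rows would give the separation line the text describes.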
Further, the steps are optimized in this embodiment. When head images are to be acquired, the terminal equipment is switched on and then remains in the acquisition state, and the user's head pre-positioning feature points are used to confirm whether a complete head appears within the recognition range; when the user's head does not appear completely within the recognition range, the terminal equipment guides the user to move the head into the recognition area.
When the user's head is fully within the recognition area, the terminal equipment identifies and determines its coordinates relative to the display interface based on the head pre-positioning feature points, then displays the user's current head pose on the display interface using a generic head model, and corrects and guides the position of the user's head through the terminal equipment, so that the user adjusts the head to a standard pose within the recognition range and holds it there.
After the user's head pose has been adjusted to the standard pose, the terminal equipment guides the user to move the head according to a set standard; once head images over 360 degrees have been acquired, the terminal equipment analyzes and confirms them frame by frame, and if head images are missing over some angular range, the user is guided to move the head to the corresponding pose for compensation until the user is notified that complete head images have been acquired.
If a head modeling model with a higher degree of training is adopted, fewer samples are needed during sampling and a more accurate head model can be obtained. Alternatively, after the same user has used the device several times, the device can acquire a real-time image and compare it against the data recorded last time; if the face has not changed much, the device can rely on the previous head model and perform only a small amount of data acquisition for optimization. In this embodiment, although the head is the main target when substituting head shapes, the user needs to confirm whether a hairstyle suits his or her own face, so the feature acquisition of the face is also highly accurate.
It should be noted that, because users' hairstyles vary considerably and long hair may mask many head features and prevent successful modeling, the device generally requires the user to expose the face during guidance. Especially for long-haired users, the device usually scans first and confirms the proportion of hair; if the proportion exceeds a set threshold, the user is guided to tie the hair up, or gather it in some other way, so as to expose the facial features.
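The hair-proportion check can be sketched as a ratio over a binary hair mask. The 0.35 threshold is an illustrative guess, since the text says only "a set threshold":

```python
def needs_hair_tied(hair_mask, threshold=0.35):
    """Decide whether the user should be guided to tie their hair back:
    compute the fraction of head pixels classified as hair and compare it
    with a set threshold.  hair_mask: iterable of truthy (hair) / falsy
    (not hair) per-pixel labels; the threshold value is an assumption."""
    pixels = list(hair_mask)
    hair = sum(1 for px in pixels if px)
    return hair / len(pixels) > threshold
```

On the device the mask would come from the same hair/face segmentation used for stripping the hairstyle area.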
Further, after the terminal equipment acquires the head images by 360-degree circumferential scanning, all acquired head images are divided into a plurality of image groups by the head pre-positioning feature points; on a pre-stored head model template, with the pre-positioning feature points as the ordering reference, the external head images of the area determined by each pre-positioning feature point are generated in turn from the images of the corresponding image group, and the information of all points is then processed to form the head model.
The head is generally cut, in vertical planes, into a plurality of regions, each cutting plane coinciding with at least one pre-positioning feature point of the head. For example, the ear portion carries a plurality of pre-positioning feature points, so after several images of the ear have been captured at different angles, the images corresponding to the pre-positioning feature points on the same side of the ear are grouped into one image group; a local point set is then formed from the points with depth information in these feature groups, and the three-dimensional coordinates of the points in the local point set are obtained by smoothing with a topological algorithm.
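The "smoothing with a topological algorithm" is not specified; one plausible concrete form is a Laplacian smoothing step over the local point set, moving each point toward the centroid of its neighbours:

```python
def laplacian_smooth(points, neighbors, lam=0.5):
    """One Laplacian smoothing step over a local 3D point set: move each
    point part of the way (factor lam) toward the centroid of its
    neighbours.  Offered as one plausible reading of the 'topological
    algorithm' the text mentions; the patent names no specific method.
    points: list of (x, y, z); neighbors: {index: [neighbour indices]}."""
    out = []
    for i, (x, y, z) in enumerate(points):
        nb = neighbors.get(i, [])
        if not nb:
            out.append((x, y, z))          # isolated points stay put
            continue
        cx = sum(points[j][0] for j in nb) / len(nb)
        cy = sum(points[j][1] for j in nb) / len(nb)
        cz = sum(points[j][2] for j in nb) / len(nb)
        out.append((x + lam * (cx - x), y + lam * (cy - y), z + lam * (cz - z)))
    return out
```

Repeating the step a few times progressively irons out depth noise in the local point set while preserving the overall shape.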
In addition, in some embodiments, the terminal equipment groups the acquired head images by the acquired head pre-positioning feature points of the user, divides the user's head into areas by the pre-positioning feature points, and matches each divided head area with the head images of the corresponding group via the head-template pre-positioning feature points pre-stored in the terminal equipment. The terminal equipment then processes each group of head images in turn using the pre-positioning feature points, obtains the spatial coordinates and color values of all pixel points in the head area, and matches the pixel points onto the head template in point-set form. After all pixel points on the whole head template have been matched, the color values are applied, the pixel points are smoothed by a smoothing algorithm, and the user's head model obtained after this processing is displayed on the display interface.
Further, in order to obtain the hairstyle area more accurately, the above steps are optimized; the specific hairstyle area is obtained as follows:
the terminal equipment displays the user's head model as a three-dimensional image on the display interface, then guides the user to gather and lift the hair upwards to expose the lowest circumferential hairline, and then guides the user to rotate the head through 360 degrees so that images of the circumferential hairline can be acquired. After acquisition, the circumferential hairline is refined by image processing against the coordinates of the head model, and the hairstyle area range is determined with the circumferential hairline as its lowest boundary. This approach makes it possible to obtain an accurate hairstyle area range even when hair covers a large part of the head, and the simple guided steps let the device identify the area quickly with a small amount of computation.
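Classifying head-model points against the circumferential hairline can be sketched as a simple boundary test; the z-up convention and nearest-hairline-sample comparison below are assumptions, not the patent's specified image processing.

```python
def in_hairstyle_region(point, hairline_points):
    """Return True if a head-model point (x, y, z) lies in the hairstyle
    region, i.e. at or above the circumferential hairline height at the
    horizontally nearest hairline sample (z-up convention assumed)."""
    nearest = min(hairline_points,
                  key=lambda h: (h[0] - point[0]) ** 2 + (h[1] - point[1]) ** 2)
    return point[2] >= nearest[2]
```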
After the hairstyle area is peeled off, the terminal device displays on the head model, using the original head template, an image simulating the scalp in the peeled area. Lines or several points on the head model serve as guide marks, forming a plurality of guide marks symmetric about the head's centre line. The user is then guided, in a single direction from one side of the face to the other, to press down or part the hair at each guide mark and hold that state for 1-3 seconds; the camera assembly captures the head image at that moment, a plurality of feature points spaced equally along the guide mark are acquired at the pressed or parted position, and once all feature points are collected, the feature point set simulating the scalp forms the optical head model.
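Acquiring feature points spaced equally along a guide mark amounts to equal-arc-length sampling of a polyline; the following sketch is hypothetical and works in 2D image coordinates for simplicity.

```python
import math

def equidistant_points(polyline, n):
    """Sample n points at equal arc-length spacing along a guide-mark
    polyline given as a list of 2D points (n >= 2 assumed)."""
    seg = [math.dist(polyline[i], polyline[i + 1])
           for i in range(len(polyline) - 1)]
    total = sum(seg)
    targets = [total * i / (n - 1) for i in range(n)]
    result, acc, j = [], 0.0, 0
    for t in targets:
        # advance to the segment containing arc length t
        while j < len(seg) - 1 and acc + seg[j] < t:
            acc += seg[j]
            j += 1
        r = 0.0 if seg[j] == 0 else (t - acc) / seg[j]
        p0, p1 = polyline[j], polyline[j + 1]
        result.append((p0[0] + r * (p1[0] - p0[0]),
                       p0[1] + r * (p1[1] - p0[1])))
    return result
```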
As one embodiment, the feature points are obtained as follows:
the terminal equipment displays, on the head model built from the original head template after the hairstyle area is peeled off, an image simulating the scalp in the peeled area, then takes lines or several points on the head model as guide marks, forming a plurality of guide marks symmetric about the head centre line;
the terminal equipment guides the user to press the hair flat at the current guide mark with a flexible ruler more than 2 cm wide and hold that state for 1-3 seconds; the camera assembly captures the head image at that moment, a plurality of feature points spaced equally along the guide mark are acquired at the pressed position, and once all feature points are collected, the feature point set simulating the scalp forms the optical head model.
In another embodiment, the feature points are obtained as follows:
the terminal equipment displays, on the head model built from the original head template after the hairstyle area is peeled off, an image simulating the scalp in the peeled area; it then guides the user, via the terminal equipment, to roll a rigid roller bearing positioning marks at a fixed speed in a single direction from one side of the face to the other, perpendicular to the normal direction of the face. During this process the camera assembly continuously captures the positions of the positioning marks on the roller; a plurality of scalp feature points are determined by processing the mark coordinates, and once all feature points are obtained, the feature point set simulating the scalp forms the optical head model.
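Deriving a scalp contact point from an observed roller mark could, as a simplification, subtract the roller's fixed mark-to-scalp length along the direction from the head centre out to the mark; this geometry, and the function name, are assumptions for illustration.

```python
import math

def scalp_point_from_marker(marker_pos, head_center, marker_offset):
    """Estimate the scalp contact point from an optical positioning mark:
    move the mark's observed 3D position toward the head centre by the
    roller's fixed mark-to-scalp length (simplifying radial assumption)."""
    v = [marker_pos[i] - head_center[i] for i in range(3)]
    norm = math.sqrt(sum(c * c for c in v))
    return tuple(marker_pos[i] - marker_offset * v[i] / norm
                 for i in range(3))
```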
The positioning mark on the rigid roller is an optical identification mark set at a fixed length from the end of the roller that contacts the scalp. The user is guided to keep the roller pressed against the scalp while rolling along the hairstyle area, and the optical identification mark stays within the camera assembly's frame throughout the rolling.
The invention is not limited to the alternative embodiments described above; anyone may derive products in other various forms in light of the present invention. The above detailed description should not be construed as limiting the scope of the invention, which is defined by the claims; the description may be used to interpret the claims.

Claims (6)

1. An intelligent hairstyle adaptation display method based on a display interface, which guides a user to perform hairstyle adaptation based on a terminal device having a camera assembly and a display interface, characterized in that the method comprises the following steps:
S100, first aiming the camera assembly at a user's head entering the recognition range of the terminal device for confirmation, and after the head position is confirmed, guiding the user through the terminal device to acquire head images;
S200, the terminal device acquires a plurality of head images at different angles and performs data processing, establishes a head model of the user from the acquired head images through a preset modeling program, and after generation displays the head model on the display interface in three-dimensional animated form;
S300, after the user confirms the displayed head model, the terminal device obtains the range of the hairstyle area of the head model according to the feature points, peels the hairstyle area off the head model, forms an epidermis-layer image in the hairstyle area of the head model according to the feature points, defines the head model bearing the epidermis-layer image as the optical head model, and displays it on the display interface after storing it separately;
S400, after the user confirms the displayed optical head model, head pre-matching is performed in a hairstyle database preset in the terminal device according to a plurality of feature points in the hairstyle area of the optical head model; after one or more corresponding hairstyles are selected from the hairstyle database on the terminal device, the terminal device fits the selected hairstyle onto the optical head model to form a hairstyle matching model and displays it on the display interface;
S500, while the hairstyle matching model is displayed, the terminal device obtains from the head images the spatial position of the current user's head relative to the plane of the display interface and displays the hairstyle matching model on the display interface as a mirror image; after the customer finally settles on the hairstyle matching model for a hairstyle, the data of that model is stored and sent to the user's mobile terminal device;
the specific steps of S100 are as follows:
S101, first, after the terminal device is started, it remains in an acquisition state and uses the user's head pre-positioning feature points to confirm whether a complete head is within the recognition range; when the user's head is not completely within the recognition range, the terminal device guides the user to move the head into the recognition area;
S102, after the user's head is completely within the recognition area, the terminal device identifies and determines its coordinates relative to the display interface based on the head pre-positioning feature points, then displays the current user's head posture on the display interface with a generic head model, and corrects and guides the position of the user's head through the terminal device, so that the user adjusts the head to the standard posture within the recognition area and holds it;
S103, after the user's head posture is adjusted to the standard posture, the terminal device guides the user to move the head according to a set standard; after one 360-degree pass of head images has been acquired, the terminal device analyzes and confirms the head images frame by frame, and if images are missing within a certain region, guides the user to move the head to the corresponding posture for supplementary image acquisition, until the user is notified that the complete set of head images has been acquired;
in S200, after the terminal device acquires head images in a 360-degree circumferential scan, it divides all acquired head images into a plurality of image groups according to the head pre-positioning feature points; then, taking the pre-positioning feature points as an ordering reference, it sequentially generates on a pre-stored head model template the external head image of the region determined by each pre-positioning feature point from the images of the corresponding image group, and finally processes the information of all point sets to form the head model;
the specific steps of S200 are as follows:
S201, the terminal device groups the acquired head images by the acquired head pre-positioning feature points of the user, divides the user's head into areas by those pre-positioning feature points, and matches each divided head area with the head images of the corresponding group via the head-template pre-positioning feature points pre-stored in the terminal device;
S202, the terminal device processes each group of head images in turn using the pre-positioning feature points, obtains the spatial coordinates and color values of all pixel points in the head area, and matches the pixel points of the head area onto the head template as a point set;
S203, after all pixel points over the whole head template have been matched, the color values are attached and the pixel points are smoothed by a smoothing algorithm; the resulting head model of the user is displayed on the display interface;
in S300, the steps of obtaining the hairstyle area range are as follows:
the terminal device displays the user's head model as a three-dimensional image on the display interface, then guides the user to gather and lift the hair upwards to expose the lowest circumferential hairline, and then guides the user to rotate the head through 360 degrees so that images of the circumferential hairline can be acquired; after acquisition, the circumferential hairline is refined by image processing against the coordinates of the head model, and the hairstyle area range is determined with the circumferential hairline as its lowest boundary.
2. The intelligent hairstyle adaptation display method based on the display interface according to claim 1, characterized in that in S300, the steps of obtaining the feature points are as follows:
the terminal device displays, on the head model built from the original head template after the hairstyle area is peeled off, an image simulating the scalp in the peeled area, then takes lines or several points on the head model as guide marks, forming a plurality of guide marks symmetric about the head centre line; the user is then guided, in a single direction from one side of the face to the other, to press down or part the hair at each guide mark and hold that state for 1 second to 3 seconds; the camera assembly captures the head image at that moment, a plurality of feature points spaced equally along the guide mark are acquired at the pressed or parted position, and once all feature points are collected, the feature point set simulating the scalp forms the optical head model.
3. The intelligent hairstyle adaptation display method based on the display interface according to claim 1, characterized in that:
in S300, the steps of obtaining the feature points are as follows:
the terminal device displays, on the head model built from the original head template after the hairstyle area is peeled off, an image simulating the scalp in the peeled area, then takes lines or several points on the head model as guide marks, forming a plurality of guide marks symmetric about the head centre line;
the terminal device guides the user to press the hair flat at the current guide mark with a flexible ruler more than 2 cm wide and hold that state for 1-3 seconds; the camera assembly captures the head image at that moment, a plurality of feature points spaced equally along the guide mark are acquired at the pressed position, and once all feature points are collected, the feature point set simulating the scalp forms the optical head model.
4. The intelligent hairstyle adaptation display method based on the display interface according to claim 1, characterized in that in S300, the steps of obtaining the feature points are as follows:
the terminal device displays, on the head model built from the original head template after the hairstyle area is peeled off, an image simulating the scalp in the peeled area; it then guides the user, via the terminal device, to roll a rigid roller bearing positioning marks at a fixed speed in a single direction from one side of the face to the other, perpendicular to the normal direction of the face; during this process the camera assembly continuously captures the positions of the positioning marks on the roller, a plurality of scalp feature points are determined by processing the mark coordinates, and once all feature points are obtained, the feature point set simulating the scalp forms the optical head model.
5. The intelligent hairstyle adaptation display method based on the display interface according to claim 4, characterized in that: the positioning mark on the rigid roller is an optical identification mark set at a fixed length from the end of the roller that contacts the scalp; the user is guided to keep the roller pressed against the scalp while rolling along the hairstyle area, and the optical identification mark stays within the camera assembly's frame throughout the rolling.
6. The intelligent hairstyle adaptation display method based on the display interface according to claim 1 or 5, characterized in that: the camera assembly is one of a binocular camera and a structured-light camera assembly.
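The mirror-image display of step S500 corresponds, in the simplest coordinate convention, to flipping the model's horizontal coordinate about the plane of the display interface; the sketch below is illustrative only, and the axis convention is an assumption.

```python
def mirror_display_points(model_points):
    """Mirror-image display (step S500 sketch): flip the x coordinate of
    each model point about the display-interface plane so the on-screen
    model moves like the user's mirror reflection."""
    return [(-x, y, z) for (x, y, z) in model_points]
```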
CN202311705889.9A 2023-12-13 2023-12-13 Intelligent hairstyle adaptive display method based on display interface Active CN117389676B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311705889.9A CN117389676B (en) 2023-12-13 2023-12-13 Intelligent hairstyle adaptive display method based on display interface

Publications (2)

Publication Number Publication Date
CN117389676A CN117389676A (en) 2024-01-12
CN117389676B true CN117389676B (en) 2024-02-13

Family

ID=89468859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311705889.9A Active CN117389676B (en) 2023-12-13 2023-12-13 Intelligent hairstyle adaptive display method based on display interface

Country Status (1)

Country Link
CN (1) CN117389676B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001297338A (en) * 2000-04-13 2001-10-26 Sony Corp Device and method for processing image and recording medium
CN102419868A (en) * 2010-09-28 2012-04-18 三星电子株式会社 Device and method for modeling 3D (three-dimensional) hair based on 3D hair template
CN106650654A (en) * 2016-12-15 2017-05-10 天津大学 Three-dimensional hairline extraction method based on colorful point cloud model of human head
CN107194981A (en) * 2017-04-18 2017-09-22 武汉市爱米诺网络科技有限公司 Hair style virtual display system and its method
CN107274493A (en) * 2017-06-28 2017-10-20 河海大学常州校区 Three-dimensional hairstyle try-on face reconstruction method based on a mobile platform
CN107783686A (en) * 2016-08-24 2018-03-09 南京乐朋电子科技有限公司 Vanity mirror based on virtual technology
CN108305146A (en) * 2018-01-30 2018-07-20 杨太立 A kind of hair style recommendation method and system based on image recognition
CN109408653A (en) * 2018-09-30 2019-03-01 叠境数字科技(上海)有限公司 Human body hair style generation method based on multiple features retrieval and deformation
CN109493160A (en) * 2018-09-29 2019-03-19 口碑(上海)信息技术有限公司 Virtual hair try-on method, apparatus and system
CN109885704A (en) * 2019-02-21 2019-06-14 杭州数为科技有限公司 Intelligent hairstyle care method and system based on hairstyle recognition
CN110110118A (en) * 2017-12-27 2019-08-09 广东欧珀移动通信有限公司 Dressing recommended method, device, storage medium and mobile terminal
CN112906585A (en) * 2021-02-25 2021-06-04 商楚苘 Intelligent hairdressing auxiliary system, method and readable medium based on machine learning
CN113379889A (en) * 2021-04-06 2021-09-10 闫月光 Hairstyle adapting device and adapting system based on 3D recognition
CN115761124A (en) * 2022-11-14 2023-03-07 柳州职业技术学院 Head type three-dimensional modeling method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Single-view hair modeling using a hairstyle database";Liwen Hu 等;《ACM Transactions on Graphics (TOG)》;20150727;第34卷(第4期);第1-9页 *
"基于单幅图像的三维发型建模技术及其应用";柴蒙磊;《中国博士学位论文全文数据库 (信息科技辑)》;20180115;第I138-78页 *
"数据驱动的三维发型建模技术研究";张萌;《中国博士学位论文全文数据库 (信息科技辑)》;20190815;第I138-81页 *
基于移动平台的三维虚拟试发型系统实现及应用;邹晓;陈正鸣;朱红强;童晶;;图学学报;20180415(第02期);第133-140页 *

Similar Documents

Publication Publication Date Title
JP3984191B2 (en) Virtual makeup apparatus and method
CN109690617B (en) System and method for digital cosmetic mirror
KR101190686B1 (en) Image processing apparatus, image processing method, and computer readable recording medium
JP4435809B2 (en) Virtual makeup apparatus and method
JP3779570B2 (en) Makeup simulation apparatus, makeup simulation control method, and computer-readable recording medium recording makeup simulation program
US10945514B2 (en) Information processing apparatus, information processing method, and computer-readable storage medium
JP2024028390A (en) An electronic device that generates an image including a 3D avatar that reflects facial movements using a 3D avatar that corresponds to the face.
US10799010B2 (en) Makeup application assist device and makeup application assist method
US9058765B1 (en) System and method for creating and sharing personalized virtual makeovers
US20100189357A1 (en) Method and device for the virtual simulation of a sequence of video images
JP5261586B2 (en) Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program
CN108537126B (en) Face image processing method
JP2009064423A (en) Makeup simulation system, makeup simulation device, makeup simulation method, and makeup simulation program
WO2022095721A1 (en) Parameter estimation model training method and apparatus, and device and storage medium
US10512321B2 (en) Methods, systems and instruments for creating partial model of a head for use in hair transplantation
JP2001109913A (en) Picture processor, picture processing method, and recording medium recording picture processing program
WO2021197186A1 (en) Auxiliary makeup method, terminal device, storage medium and program product
WO2023273247A1 (en) Face image processing method and device, computer readable storage medium, terminal
JP2000151985A (en) Picture processing method and recording medium
CN112508777A (en) Beautifying method, electronic equipment and storage medium
CN111179411B (en) Visual facial cosmetology plastic simulation method, system and equipment based on social platform
CN117389676B (en) Intelligent hairstyle adaptive display method based on display interface
CN115761124A (en) Head type three-dimensional modeling method
CN104715505A (en) Three-dimensional head portrait generating system and generating device and generating method thereof
JP2003030684A (en) Face three-dimensional computer graphic generation method and device, face three-dimensional computer graphic generation program and storage medium storing face three-dimensional computer graphic generation program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Tao Quanyi

Inventor after: Yang Jialin

Inventor before: Yang Jialin

Inventor before: Tao Quanyi