CN116597121A - Method, device, apparatus and storage medium for styling hair - Google Patents

Method, device, apparatus and storage medium for styling hair

Info

Publication number
CN116597121A
Authority
CN
China
Prior art keywords: hair, virtual, entity, information, virtual model
Prior art date
Legal status: Pending
Application number
CN202310506926.7A
Other languages
Chinese (zh)
Inventor
刘宇
Current Assignee: Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202310506926.7A priority Critical patent/CN116597121A/en
Publication of CN116597121A publication Critical patent/CN116597121A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a hair styling method, device, apparatus, and storage medium. The method comprises the following steps: performing light sensing detection on an entity object covered with entity hair in a physical environment, generating a first virtual model in the virtual environment, and determining basic information of the entity hair, wherein the first virtual model comprises a virtual object corresponding to the entity object and first virtual hair corresponding to the entity hair, and the basic information at least comprises hair quality and hair style; modifying the first virtual hair in the first virtual model based on the basic information of the entity hair to generate a second virtual model, wherein the second virtual model comprises the virtual object and second virtual hair obtained by modifying the first virtual hair; and performing styling processing on the entity hair based on difference information between the first virtual hair and the second virtual hair. This solves the technical problem that the poor haircut effect of self-service haircut machines results in a poor haircut experience for users.

Description

Method, device, apparatus and storage medium for styling hair
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to a hair styling process method, apparatus, device, and storage medium.
Background
Currently, a user may receive a haircut from a self-service haircut machine. The self-service haircut machine matches a hairstyle based on a facial image of the user and automatically controls a mechanical arm to complete the haircut. However, the collection of facial images is affected by many factors, which reduces the accuracy of the matched hairstyle and thus degrades the haircut effect of the self-service haircut machine.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
At least some embodiments of the present disclosure provide a hair styling processing method, apparatus, device, and storage medium, so as to at least solve the technical problem that a user has poor hair cutting experience due to poor hair cutting effect of a self-service hair cutting machine.
According to one embodiment of the present disclosure, there is provided a hair styling treatment method including: performing light sensing detection on an entity object covered with entity hair in a physical environment, generating a first virtual model in the virtual environment, and determining basic information of the entity hair, wherein the first virtual model comprises a virtual object corresponding to the entity object and first virtual hair corresponding to the entity hair, and the basic information at least comprises hair quality and hair style; modifying the first virtual hair in the first virtual model based on the basic information of the entity hair to generate a second virtual model, wherein the second virtual model comprises a virtual object and second virtual hair obtained by modifying the first virtual hair; the styling process is performed on the solid hair based on the difference information between the first virtual hair and the second virtual hair.
There is also provided, in accordance with an embodiment of the present disclosure, a hair styling device, including: a detection module, configured to perform light sensing detection on the entity object covered with entity hair in the physical environment, generate a first virtual model in the virtual environment, and determine basic information of the entity hair, wherein the first virtual model comprises a virtual object corresponding to the entity object and first virtual hair corresponding to the entity hair, and the basic information at least comprises hair quality and hair style; a generation module, configured to modify the first virtual hair in the first virtual model based on the basic information of the entity hair to generate a second virtual model, wherein the second virtual model comprises the virtual object and second virtual hair obtained after modifying the first virtual hair; and a styling module, configured to style the entity hair based on the difference information between the first virtual hair and the second virtual hair.
There is also provided, in accordance with an embodiment of the present disclosure, a hair styling apparatus, including: a scanner, configured to perform light sensing detection on an entity object covered with entity hair in a physical environment, generate a first virtual model in the virtual environment, and determine basic information of the entity hair, wherein the first virtual model comprises a virtual object corresponding to the entity object and first virtual hair corresponding to the entity hair, and the basic information at least comprises hair quality and hair style; a modeling device, connected to the scanner and configured to modify the first virtual hair in the first virtual model based on the basic information of the entity hair to generate a second virtual model, wherein the second virtual model comprises the virtual object and second virtual hair obtained after modifying the first virtual hair; and a styling device, connected to the scanner and configured to style the entity hair based on the difference information between the first virtual hair and the second virtual hair.
According to one embodiment of the present disclosure, there is also provided a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to execute the hair styling method of any of the above mentioned methods when run.
There is further provided in accordance with an embodiment of the present disclosure an electronic device comprising a memory having a computer program stored therein and a processor configured to run the computer program to perform the hair styling method of any of the above.
In at least some embodiments of the present disclosure, light sensing detection is performed on an entity object covered with entity hair, so that a first virtual model of high fineness can be generated and the basic information of the entity hair determined. On this basis, the first virtual hair in the first virtual model is modified based on the basic information of the entity hair to obtain a second virtual model, so that a more suitable second virtual model can be recommended for the entity object. The entity hair is then styled based on the difference information between the first virtual hair and the second virtual hair, achieving automatic haircutting and reducing the user's waiting time. Because the first virtual model is modified based on the basic information of the entity hair, the entity object can be styled appropriately and the accuracy of the generated second virtual model is improved. This improves the haircut effect of the self-service haircut machine and the user experience, thereby solving the technical problem that the poor haircut effect of self-service haircut machines results in a poor haircut experience for users.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the present disclosure, and together with the description serve to explain the present disclosure. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal of a hair styling process according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a hair styling treatment method according to one embodiment of the present disclosure;
FIG. 3 is a process flow diagram of a scanner in a hair styling process in accordance with an embodiment of the present disclosure;
FIG. 4 is a schematic illustration of a simulator process flow in a hair styling process in accordance with an embodiment of the present disclosure;
FIG. 5 is a schematic view of the process flow of a styling module in a hair styling process in accordance with an embodiment of the present disclosure;
FIG. 6 is a block diagram of a hair styling device in accordance with one embodiment of the present disclosure;
fig. 7 is a block diagram of a hair styling treatment apparatus according to one embodiment of the present disclosure;
fig. 8 is a schematic structural view of a hair styling treatment apparatus according to an embodiment of the present disclosure;
Fig. 9 is a schematic diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order that those skilled in the art will better understand the present disclosure, the technical solution in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present disclosure, not all of them. All other embodiments obtained by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure, shall fall within the scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The above-described method embodiments of the present disclosure may be performed in a mobile terminal, a computer terminal, or a similar computing device. Taking the mobile terminal as an example, the mobile terminal may be a smart phone, a tablet computer, a palmtop computer, a mobile internet device, a game machine, or another terminal device. Fig. 1 is a block diagram of the hardware structure of a mobile terminal for a hair styling method according to an embodiment of the present disclosure. As shown in fig. 1, the mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processing (DSP) chip, a Microcontroller Unit (MCU), a programmable logic device (FPGA), a Neural Processing Unit (NPU), a Tensor Processing Unit (TPU), an Artificial Intelligence (AI) processor, etc.) and a memory 104 for storing data, and in one embodiment of the present disclosure may further include: an input/output device 108 and a display device 110.
In some optional embodiments based on game scenes, the device may further provide a human-machine interaction interface with a touch-sensitive surface that senses finger contacts and/or gestures to interact with a Graphical User Interface (GUI). The human-machine interaction functions may include interactions such as creating web pages, drawing, word processing, producing electronic documents, games, video conferencing, instant messaging, sending and receiving electronic mail, call interfaces, playing digital video, playing digital music, and/or web browsing. Executable instructions for performing the above human-machine interaction functions are configured/stored in a computer program product or a readable storage medium executable by one or more processors.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
According to one embodiment of the present disclosure, there is provided an embodiment of a hair styling process, it being noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
Fig. 2 is a flowchart of a hair styling method according to one embodiment of the present disclosure. As shown in fig. 2, the method comprises the following steps:
step S202, performing light sensing detection on an entity object covered with entity hair in a physical environment, generating a first virtual model in the virtual environment, and determining basic information of the entity hair, wherein the first virtual model comprises a virtual object corresponding to the entity object and the first virtual hair corresponding to the entity hair, and the basic information at least comprises a hair quality and a hair style.
Specifically, the entity object may be a person with hair, an animal, or a dummy with hair. Where the entity object is a person or a dummy, the entity hair may be head hair or beard; where the entity object is an animal, the entity hair may be body hair from any part of the animal. The light sensing detection described above refers to scanning techniques that employ multiple types of light, such as laser and infrared light; information such as the mechanical, color, and endocrine properties of the entity hair on the entity object is determined by these techniques. The virtual object may be a three-dimensional model of any part of the entity object covered with hair; for a person, for example, it may be a three-dimensional model of the top of the head, or of another hair-growing part such as the chin or armpit. The first virtual hair may be a three-dimensional hair model of the entity hair. The basic information of the entity hair may include, but is not limited to, the quality, color, and style of the hair; in the present disclosure, since the hair styling apparatus does not have a hair dyeing function, the basic information may be the hair quality and the hair style.
As an alternative implementation manner, a scanner may be used to perform light sensing scanning on a solid object covered with solid hair, fig. 3 is a process flow chart of the scanner in a hair styling processing method according to an embodiment of the present disclosure, as shown in fig. 3, light sensing detection may be performed on the solid object covered with solid hair first to obtain light sensing detection information, and then a first virtual model is generated based on the light sensing detection information, and the first virtual model is output. Further, basic information such as the hair quality, hair style, etc. of the solid hair can be determined based on the result of the light sensation detection.
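The scanner flow just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: the field names (`fat_content`, `mean_length_cm`, `mean_curvature`) and the threshold values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BasicInfo:
    hair_quality: str  # e.g. "oily" / "dry", inferred from endocrine attributes
    hair_style: str    # e.g. "long straight", inferred from mechanical attributes

def infer_quality(detection: dict) -> str:
    # Endocrine attributes (humidity, fat content) drive the hair quality.
    return "oily" if detection["fat_content"] > 0.5 else "dry"

def infer_style(detection: dict) -> str:
    # Mechanical attributes (length, curvature) drive the hair style.
    length = "long" if detection["mean_length_cm"] > 15 else "short"
    shape = "curly" if detection["mean_curvature"] > 0.3 else "straight"
    return f"{length} {shape}"

def scan(detection: dict) -> BasicInfo:
    # In the patent, `detection` would come from laser/infrared/multispectral
    # scans; here it is a plain dict of aggregate readings.
    return BasicInfo(infer_quality(detection), infer_style(detection))
```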
Step S204, modifying the first virtual hair in the first virtual model based on the basic information of the entity hair, and generating a second virtual model, wherein the second virtual model comprises a virtual object and a second virtual hair obtained by modifying the first virtual hair.
As an alternative embodiment, a recommendation algorithm or model may be used to select a suitable second virtual model for the physical object based on the first virtual model. Alternatively, the first virtual model may be displayed to the user, and the user may directly modify the first virtual hair in the first virtual model to obtain the adjusted second virtual hair.
The recommendation algorithm may be a factorization-based machine learning algorithm (Factorization Machine, abbreviated as FM) that can be used for regression or classification on high-dimensional sparse data. The advantage of FM is that the weight of each feature and the relationships between features are considered simultaneously, so high-dimensional sparse data can be handled well. Specifically, the prediction function of the FM model can be expressed as: y(x) = w_0 + Σ_{i=1}^{n} w_i x_i + Σ_{i=1}^{n} Σ_{j=i+1}^{n} ⟨v_i, v_j⟩ x_i x_j, where w_0 is the bias term, w_i is the weight of the i-th feature, and v_i is the factor vector of the i-th feature. The second term (Σ_{i=1}^{n} w_i x_i) is the linear part of the model, and the third term (Σ_{i=1}^{n} Σ_{j=i+1}^{n} ⟨v_i, v_j⟩ x_i x_j) is the factorized part, which models the interaction of each pair of features as the inner product of their factor vectors. This allows the relationship between two features and the weight of each feature to be considered at the same time. When training an FM model, optimization is typically performed with algorithms such as Stochastic Gradient Descent (SGD) or alternating least squares, with the goal of minimizing a loss function such as Mean Squared Error (MSE) or logarithmic loss (Log-Loss). FM can process high-dimensional sparse data and can easily be combined with other models, for example with Deep Neural Networks (DNN) to build hybrid models. It is therefore widely used in recommendation systems, advertisement recommendation, and similar applications.
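The FM prediction function described above can be sketched as follows, using the standard O(k·n) reformulation of the pairwise term rather than the naive double sum. The arrays and dimensions here are illustrative; the patent provides no implementation.

```python
import numpy as np

def fm_predict(x: np.ndarray, w0: float, w: np.ndarray, V: np.ndarray) -> float:
    """Factorization Machine prediction
        y(x) = w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j,
    where row V[i] is the k-dimensional factor vector v_i of feature i.
    The pairwise term uses the identity
        sum_{i<j} <v_i, v_j> x_i x_j
            = 0.5 * sum_f [(sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2].
    """
    linear = w0 + w @ x
    s = V.T @ x  # shape (k,): per-factor weighted sums
    pairwise = 0.5 * (np.sum(s ** 2) - np.sum((V ** 2).T @ (x ** 2)))
    return float(linear + pairwise)
```

For training, this function would sit inside an SGD or alternating-least-squares loop minimizing MSE or log-loss, as the text notes.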
As an alternative embodiment, to ensure the haircut effect after a self-service haircut, an appropriate hairstyle may be recommended for the user based on basic information such as the user's current hair quality and hairstyle, generating a second virtual model that represents the intended post-haircut effect.
As an alternative implementation, fig. 4 is a schematic diagram of the simulator processing flow in a hair styling method according to an embodiment of the present disclosure. A recommendation model may analyze the first virtual model to obtain a plurality of recommendation results, that is, a plurality of candidate virtual models. The recommendation model pushes the candidate models to the user, the user determines an initial second virtual model from among them, and the user may then modify the initial second virtual model according to their own needs to obtain the second virtual model.
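In outline, the candidate-selection flow might look like the following sketch, where the `User` class and the `recommender` callable are hypothetical stand-ins for the interactive user and the recommendation model:

```python
class User:
    """Toy stand-in for the interactive user in the flow above."""
    def pick(self, candidates: list) -> dict:
        return candidates[0]              # e.g. accept the top recommendation
    def modify(self, model: dict) -> dict:
        return {**model, "bangs": True}   # e.g. ask for a fringe to be added

def choose_second_model(first_model: dict, recommender, user: User) -> dict:
    candidates = recommender(first_model)  # several candidate virtual models
    initial = user.pick(candidates)        # user chooses the initial model
    return user.modify(initial)            # user adjusts it to their needs
```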
Step S206, styling the solid hair based on the difference information between the first virtual hair and the second virtual hair.
Specifically, the difference information includes information such as length differences, position differences, bending-degree differences, and bending-position differences. For example, if the first virtual hair is longer curly hair without bangs and the second virtual hair is shorter straight hair with bangs, the length difference is the difference between long and short hair; the position difference reflects that, going from no bangs to bangs, strands growing at the same position must be placed at different positions; the bending-degree difference is the difference between straight and curly hair; and the bending-position difference is the difference in where different types of curls bend, since, for example, light waves and tight wool curls clearly bend at different positions. After the difference information is confirmed, the entity hair can be cut simultaneously by a plurality of miniature hair cutters based on the length and position differences. After the cut hair is washed, it can be blow-dried based on the bending-degree and bending-position differences, with the heat of the airflow adjusting the bending degree and bending position of each hair, thus shaping an image suitable for the user. Notably, because the present disclosure cuts hair according to a three-dimensional model (the second virtual model), the modification can be accurate down to the length of each individual hair, allowing precise styling.
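The per-strand difference information described above could be represented as in this sketch, where `Strand` is a hypothetical simplification of the patent's three-dimensional hair model:

```python
from dataclasses import dataclass

@dataclass
class Strand:
    length_mm: float
    position: tuple    # (x, y) anchor point on the scalp model
    curvature: float   # 0.0 = straight; larger = tighter curl

@dataclass
class StrandDiff:
    cut_mm: float      # length the miniature cutter must remove
    move: tuple        # positional offset (e.g. for forming bangs)
    curl_delta: float  # change applied later by hot-air styling

def strand_diff(first: Strand, second: Strand) -> StrandDiff:
    return StrandDiff(
        cut_mm=max(0.0, first.length_mm - second.length_mm),
        move=(second.position[0] - first.position[0],
              second.position[1] - first.position[1]),
        curl_delta=second.curvature - first.curvature,
    )
```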
As an alternative implementation, a styling module may be used to style the entity hair. Fig. 5 is a schematic flow diagram of the processing of the styling module in a hair styling method according to an embodiment of the disclosure. As shown in fig. 5, the styling module may first cut the entity hair, for example trimming its length so that it shows different layers; after cutting is completed, a blower may be used to style the cut hair so that it exhibits the bending degree and bending positions corresponding to the second virtual hair.
Through the above steps, light sensing detection is performed on the entity object covered with entity hair. Because light sensing detection combines multiple scanning modes, a first virtual model of high fineness can be generated and the basic information of the entity hair determined. On this basis, the first virtual hair in the first virtual model is modified based on the basic information of the entity hair to obtain a second virtual model, so that a more suitable second virtual model can be recommended for the entity object. The entity hair is then styled based on the difference information between the first virtual hair and the second virtual hair, achieving automatic haircutting and reducing the user's waiting time. Because the first virtual model is modified based on the basic information of the entity hair, the entity object can be styled appropriately and the accuracy of the generated second virtual model is improved. This improves the haircut effect of the self-service haircut machine and the user experience, thereby solving the technical problem that the poor haircut effect of self-service haircut machines results in a poor haircut experience for users.
Optionally, performing light sensing detection on the physical object covered with the physical hair in the physical environment, generating a first virtual model in the virtual environment, and determining basic information of the physical hair, including: performing light sensing detection on the entity object and the entity hair to obtain object detection information of the entity object and hair detection information of the entity hair; a first virtual model is generated based on the object detection information and the hair detection information, wherein the first virtual model contains the object detection information and the hair detection information.
Optionally, performing light sensing detection on the physical object covered with the physical hair in the physical environment, and determining basic information of the physical hair includes: performing light sensing detection on the solid hair to obtain hair detection information of the solid hair; based on the hair detection information, basic information of the entity hair is determined.
In particular, the object detection information may be used to characterize the appearance of the entity object; for example, where the entity object is a person, the appearance characteristics may include skin color, skin texture, and head shape. The hair detection information may include properties of the entity hair, such as the number, stiffness, length, shape, color, humidity, fat content, and degree of curvature of the hairs.
As an alternative embodiment, the light sensing detection includes 3D (three-dimensional) laser scanning and infrared scanning of the entity hair, as well as scanning the hair-covered area of the entity object using multispectral analysis techniques. 3D laser scanning can accurately capture the head shape in the object detection information and information such as the number, stiffness, and curvature of the hairs in the hair detection information; infrared scanning can detect the humidity and fat content of the hair, so that the hair quality is confirmed on that basis; and multispectral analysis can accurately measure the color of the skin in the hair-covered area as well as the surface and deep color of the hair.
As an alternative embodiment, after the object detection information is obtained, a virtual object may be generated from it; after the hair detection information is obtained, the first virtual hair may be generated from it. Finally, the virtual object and the first virtual hair are integrated, with the first virtual hair implanted into the virtual object at a 1:1 ratio, so that each hair in the finally generated first virtual model is consistent with the user's hair.
As an alternative embodiment, after obtaining the hair detection information, the basic information of the solid hair may be determined from the hair detection information, for example, the hair quality of the hair may be determined based on endocrine properties of the detected hair, and the hair style of the hair may be determined based on mechanical properties, color properties, etc. of the detected hair.
Optionally, the object detection information includes shape information and skin information of the physical object, and the hair detection information includes mechanical properties, color properties, and endocrine properties of the physical hair, wherein the light sensing detection is performed on the physical object and the physical hair to obtain object detection information of the physical object and hair detection information of the physical hair, including: performing laser scanning on the entity object and the entity hair to obtain shape information of the entity object and mechanical properties of the entity hair; carrying out multispectral technical scanning on the entity object and the entity hair to obtain skin information of the entity object and color attributes of the entity hair; and carrying out infrared scanning on the solid hair to obtain the endocrine attribute of the solid hair.
Specifically, the mechanical properties of the hair may be information such as length, shape, bending and hardness, the color properties may be color of the hair, and the endocrine properties may be humidity and fat content of the hair.
As an alternative implementation, information such as the head shape and the number, stiffness, and curvature of the hairs can be accurately captured through 3D laser scanning, wherein the head shape belongs to the shape information, and the number, stiffness, and curvature of the hairs belong to the mechanical properties.
As an alternative embodiment, the surface and deep color of the skin on which the hair grows, and the surface and deep color of the hair may be measured using a multispectral analysis technique, wherein the surface and deep color of the skin belongs to skin information and the surface and deep color of the hair belongs to color attributes.
As an alternative embodiment, the humidity and fat content of the hair is detected by infrared scanning, wherein the humidity and fat content of the hair belongs to the endocrine property described above.
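The three scans enumerated above each contribute different attributes; a sketch of fusing them into the object detection information and hair detection information follows (all key and field names are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class ObjectInfo:
    shape: dict  # head shape, from the laser scan
    skin: dict   # surface/deep skin color, from the multispectral scan

@dataclass
class HairInfo:
    mechanical: dict  # count, stiffness, curvature (laser scan)
    color: dict       # surface/deep hair color (multispectral scan)
    endocrine: dict   # humidity, fat content (infrared scan)

def fuse(laser: dict, multispectral: dict, infrared: dict):
    # Each sensor's reading dict carries the attributes the text assigns to it.
    obj = ObjectInfo(shape=laser["shape"], skin=multispectral["skin"])
    hair = HairInfo(mechanical=laser["hair"],
                    color=multispectral["hair"],
                    endocrine=infrared["hair"])
    return obj, hair
```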
In the above alternative embodiment, information such as the color, position, length, and thickness of each hair can be accurately ascertained by the light sensing detection technology, and the quality, oiliness, and stiffness of the hair are also obtained. The shape of the hair-covered area and the appearance and state of the entity object's hair can therefore be accurately determined in multiple respects, and a first virtual model of high resolution and high precision can be generated, which facilitates recommending suitable second virtual hair for the entity object in the subsequent steps.
Optionally, generating the first virtual model based on the object detection information and the hair detection information includes: generating a virtual object based on the object detection information; generating a first virtual hair based on the hair detection information; and combining the first virtual hair with the virtual object to obtain a first virtual model.
Specifically, after the object detection information is obtained, the object detection information may be analyzed to determine information such as a head shape, a skin texture, a skin color, and the like, and the virtual object may be generated based on the information.
As an alternative embodiment, after obtaining the humidity and the fat content in the hair detection information, the humidity and the fat content are analyzed to determine the hair quality of each hair, and the hair color can be determined by analyzing the color of the hair surface and the deep level. After determining the hair quality and color, the first virtual hair is generated in combination with the number, length, shape, stiffness, and curvature of the hair in the hair detection information.
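The derivation of hair quality from humidity and fat content, and the assembly of the first virtual model, can be sketched as below. The classification thresholds and all names are illustrative assumptions; the patent does not specify how hair quality is computed from the endocrine attributes.

```python
def hair_quality(moisture: float, fat: float) -> str:
    """Classify hair quality from infrared-derived moisture and fat content.

    Thresholds are illustrative placeholders, not values from the patent.
    """
    if moisture < 0.10 and fat < 0.05:
        return "dry"
    if fat > 0.15:
        return "oily"
    return "normal"


def build_first_virtual_model(obj_info: dict, hair_info: dict) -> dict:
    """Combine a virtual object and a first virtual hair into one model."""
    virtual_object = {"head_shape": obj_info["shape"], "skin": obj_info["skin"]}
    first_virtual_hair = {
        "quality": hair_quality(hair_info["moisture"], hair_info["fat"]),
        "color": hair_info["color"],
        **hair_info["mechanical"],  # count, length, hardness, bending
    }
    return {"object": virtual_object, "hair": first_virtual_hair}
```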
Optionally, modifying the first virtual hair in the first virtual model based on the basic information of the entity hair, generating a second virtual model includes: determining the type of the entity object based on object detection information contained in the first virtual model; acquiring attribute information and hair style preference information of an entity object; the first virtual hair is modified based on the attribute information of the entity object, the hairstyle preference information, the type of the entity object and the basic information of the entity hair, and a second virtual model is generated.
Specifically, in the case where the entity object is a human, the type is an appearance type of the entity object, such as a European appearance or an Asian appearance, and the attribute information may be information such as the height, weight, name, gender, native place, and occupation of the entity object. In the case where the entity object is an animal, the type may be a breed, and the attribute information may be information such as gender, weight, age, habitat, and habitat climate. After the endocrine properties of the hair are determined, the hair quality of the hair is determined based on the endocrine properties; after the color attributes in the hair detection information are obtained using the multispectral analysis technique, the color of the entity hair can be determined. Then, a target hair style suitable for the entity object is determined by combining the attribute information, the hair style preference information, the type, the hair quality, and the hair style; the first virtual hair is modified based on the target hair style to obtain the second virtual hair, and the second virtual hair and the virtual object are integrated to generate the second virtual model.
Optionally, modifying the first virtual hair based on the attribute information of the entity object, the hairstyle preference information, the type of the entity object and the basic information of the entity hair, generating the second virtual model includes: extracting the characteristics of attribute information, hair style preference information, types of the entity objects and basic information of the entity hair of the entity objects to obtain a plurality of characteristic vectors; capturing interaction relations among a plurality of feature vectors by utilizing a factorization algorithm; a second virtual model is generated based on the interaction relationship between the plurality of feature vectors.
Specifically, before the attribute information of the entity object, the hair style preference information, the type of the entity object, and the basic information of the entity hair are input into a recommendation model, feature extraction is required to obtain a plurality of feature vectors capable of representing the features of the object to be cut. The interaction relationship between the feature vectors may be, for example, a mapping relationship between hair quality information and hair style information, or a preference relationship between attribute information and hair style information; finally, the second virtual model can be generated according to these interaction relationships.
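The factorization step above can be sketched with the second-order scoring function of a factorization machine, a standard way to capture pairwise interactions between feature vectors. This is one plausible instantiation of the "factorization algorithm" the patent names, not its confirmed implementation; all symbols here (`w0`, `w`, `V`) are conventional.

```python
import numpy as np


def fm_score(x: np.ndarray, w0: float, w: np.ndarray, V: np.ndarray) -> float:
    """Score a candidate hair style for one feature vector x.

    x : (n,) features (attributes, preferences, type, hair info)
    w0: global bias; w: (n,) linear weights
    V : (n, k) latent factors; pairwise interaction weight is <v_i, v_j>

    Uses the standard O(n*k) identity:
        sum_{i<j} <v_i, v_j> x_i x_j
          = 0.5 * sum_f [ (sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2 ]
    """
    linear = w0 + w @ x
    xv = x @ V  # (k,)
    interact = 0.5 * np.sum(xv ** 2 - (x ** 2) @ (V ** 2))
    return float(linear + interact)
```

The highest-scoring target hair style could then be selected by evaluating `fm_score` over each candidate's feature encoding.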
Optionally, after modifying the first virtual hair in the first virtual model based on the basic information of the entity hair, the method further includes: outputting the second virtual model on an operation interface; and, in response to an adjustment instruction acting on the operation interface, adjusting the second virtual hair based on the adjustment instruction to obtain the modified second virtual hair. Performing styling processing on the entity hair based on the difference information between the first virtual hair and the second virtual hair then includes: performing the styling processing on the entity hair based on the difference information between the first virtual hair and the modified second virtual hair.
Specifically, the operation interface may be an interface in a terminal connected to the hair styling apparatus via a network or Bluetooth, or may be an operation interface directly provided on a display screen of the hair styling apparatus. The adjustment instruction may be a touch instruction on the operation interface, or may be an operation instruction for adjusting a parameter of the second virtual hair through an input device such as a keyboard or a mouse. After the second virtual model is determined, it can be displayed to the user through the operation interface; the user then adjusts the second virtual hair in the second virtual model according to the user's own needs, for example, the length of the second virtual hair, or the position and degree of bending of the hair. Then, the styling processing is performed on the entity hair according to the difference information between the first virtual hair and the modified second virtual hair.
Optionally, performing styling processing on the entity hair based on the difference information between the first virtual model and the second virtual model includes: dividing the entity hair to obtain a plurality of styling areas; determining area differences corresponding to the styling areas based on the difference information; and performing the styling processing on the styling areas based on the area differences corresponding to the styling areas.
As an alternative implementation, in order to complete the styling processing, the entity hair may be divided into a plurality of styling areas, all of which are worked on simultaneously. The second virtual model and the first virtual model are analyzed and compared to determine the difference information between them, from which the length and position of the entity hair to be cut in each styling area are obtained; that is, the area difference corresponding to each styling area is determined, and trimming and styling are performed once the determination is completed.
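The per-area difference computation described above can be sketched as follows. This is a simplified sketch that considers only strand length; the strand and region representations are illustrative assumptions.

```python
def region_differences(first_hair: dict, second_hair: dict, regions: dict) -> dict:
    """For each styling area, find how much each strand must be cut.

    first_hair / second_hair map strand id -> {"length": ...};
    regions maps area name -> list of strand ids in that area.
    Only strands longer than their target contribute a cut amount.
    """
    diffs = {}
    for name, strand_ids in regions.items():
        diffs[name] = {
            sid: first_hair[sid]["length"] - second_hair[sid]["length"]
            for sid in strand_ids
            if first_hair[sid]["length"] > second_hair[sid]["length"]
        }
    return diffs
```

Because the areas are independent, the resulting per-area dictionaries can be dispatched to styling hardware working on all areas in parallel.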
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by means of hardware; in many cases, the former is the preferred embodiment. Based on such understanding, the technical solution of the present disclosure, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the method described in the embodiments of the present disclosure.
In this embodiment, a device is further provided. The device is used to implement the foregoing embodiments and preferred implementations; details already described are not repeated. As used below, the terms "unit" and "module" may refer to a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 6 is a block diagram of a hair styling treatment device according to one embodiment of the present disclosure, as shown in fig. 6, the device comprising:
the detection module 62 is configured to perform light sensing detection on an entity object covered with entity hair in a physical environment, generate a first virtual model in a virtual environment, and determine basic information of the entity hair, where the first virtual model includes a virtual object corresponding to the entity object, and a first virtual hair corresponding to the entity hair, and the basic information includes at least a hair quality and a hair style;
a generating module 64, configured to modify a first virtual hair in the first virtual model based on the basic information of the entity hair, and generate a second virtual model, where the second virtual model includes a virtual object, and a second virtual hair obtained by modifying the first virtual hair;
The styling module 66 is configured to perform styling processing on the entity hair based on the difference information between the first virtual hair and the second virtual hair.
Optionally, the detection module includes: the hair detection unit is used for performing light sensing detection on the entity object and the entity hair to obtain object detection information of the entity object and hair detection information of the entity hair; a model generation unit configured to generate a first virtual model based on the object detection information and the hair detection information, wherein the first virtual model contains the object detection information and the hair detection information; an information determination unit for determining basic information of the physical hair based on the hair detection information.
Optionally, the object detection information includes shape information and skin information of the entity object, the hair detection information includes mechanical properties, color properties, and endocrine properties of the entity hair, and the model generating unit includes: a laser scanning subunit, used for performing laser scanning on the entity object and the entity hair to obtain the shape information of the entity object and the mechanical properties of the entity hair; a multispectral scanning subunit, used for performing multispectral scanning on the entity object and the entity hair to obtain the skin information of the entity object and the color attributes of the entity hair; and an infrared scanning subunit, used for performing infrared scanning on the entity hair to obtain the endocrine attribute of the entity hair.
Optionally, the model generating unit further includes: a first generation subunit configured to generate a virtual object based on the object detection information; a second generation subunit for generating a first virtual hair based on the hair detection information; and the combining subunit is used for combining the first virtual hair with the virtual object to obtain a first virtual model.
Optionally, the generating module includes: a type determining unit, used for determining the type of the entity object based on the object detection information contained in the first virtual model; a preference acquisition unit, used for acquiring the attribute information and hair style preference information of the entity object; and a modification unit, used for modifying the first virtual hair based on the attribute information of the entity object, the hair style preference information, the type of the entity object, and the basic information of the entity hair, and generating a second virtual model.
Optionally, the modification unit includes: a feature extraction subunit, used for extracting features from the attribute information of the entity object, the hair style preference information, the type of the entity object, and the basic information of the entity hair to obtain a plurality of feature vectors; an interaction relation capturing subunit, used for capturing interaction relations among the plurality of feature vectors by using a factorization algorithm; and a second virtual model generation subunit, used for generating a second virtual model based on the interaction relations among the plurality of feature vectors.
Optionally, the apparatus further comprises: an output module, used for outputting the second virtual model on the operation interface; and an adjusting module, used for responding to the adjustment instruction acting on the operation interface and adjusting the second virtual hair based on the adjustment instruction to obtain the modified second virtual hair. The styling module is also used for performing the styling processing on the entity hair based on the difference information between the first virtual hair and the modified second virtual hair.
Optionally, the difference information includes: a length difference, a position difference, a bending difference, and a bending position difference, wherein the styling module includes: a cutting unit, used for cutting the entity hair based on the length difference and the position difference to obtain the cut entity hair; and a blowing processing unit, used for blow-drying the cut entity hair based on the bending difference and the bending position difference.
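The two-stage order implied above (cut on length/position differences first, then blow-dry on bending differences) can be sketched as a simple action planner. The field names (`length_diff`, `bend_position`, etc.) are illustrative assumptions, not terminology from the patent.

```python
def plan_styling(diff: dict) -> list:
    """Map per-strand difference info to an ordered list of device actions.

    All cutting actions are emitted before any blow-drying actions, since
    bending is applied to hair that has already been cut to length.
    """
    actions = []
    for strand, d in diff.items():
        if d.get("length_diff", 0) > 0:
            actions.append(("cut", strand, d["position"], d["length_diff"]))
    for strand, d in diff.items():
        if d.get("bend_diff", 0) != 0:
            actions.append(("blow", strand, d["bend_position"], d["bend_diff"]))
    return actions
```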
Optionally, the styling module includes: an area dividing unit, used for dividing the entity hair to obtain a plurality of styling areas; an area difference determining unit, used for determining area differences corresponding to the plurality of styling areas based on the difference information; and a styling processing unit, used for simultaneously performing the styling processing on the styling areas based on the area differences corresponding to the styling areas.
Fig. 7 is a block diagram of a hair styling treatment apparatus according to one embodiment of the present disclosure, as shown in fig. 7, the apparatus comprising:
the scanner 72 is configured to perform light sensing detection on an entity object covered with entity hair in a physical environment, generate a first virtual model in a virtual environment, and determine basic information of the entity hair, where the first virtual model includes a virtual object corresponding to the entity object, and a first virtual hair corresponding to the entity hair, and the basic information includes at least a hair quality and a hair style;
a modeler 74, coupled to the scanner, for modifying a first virtual hair in the first virtual model based on the basic information of the physical hair, to generate a second virtual model, wherein the second virtual model comprises a virtual object, and a second virtual hair obtained by modifying the first virtual hair;
and a styling device 76, connected to the scanner, for performing styling processing on the entity hair based on the difference information between the first virtual hair and the second virtual hair.
Optionally, the scanner comprises: a laser scanner, used for performing laser scanning on the entity object and the entity hair to obtain the shape information of the entity object and the mechanical properties of the entity hair; a multispectral scanner, used for performing multispectral scanning on the entity object and the entity hair to obtain the skin information of the entity object and the color attributes of the entity hair; and an infrared scanner, used for performing infrared scanning on the entity hair to obtain the endocrine attribute of the entity hair.
Optionally, the apparatus further comprises a mask. The scanner, the modeler, and the styling device are arranged inside the mask; the scanner and the styling device move to the outside of the mask through a plurality of holes in the mask; and the mask is used for covering the entity object and the entity hair.
As an alternative implementation, fig. 8 is a schematic structural diagram of a hair styling apparatus according to an embodiment of the present disclosure. Fig. 8 shows a scanner 82, a styling device 86, and a mask 88, and the mask 88 is provided with a plurality of holes 881. It should be noted that the number of holes shown in fig. 8 is illustrative only; the holes may be opened when the styling device 86 or the scanner 82 needs to work outside the mask 88 and closed at other times. The styling device 86 may be a micro hair clipper as shown in fig. 8, or a micro hair dryer. It should also be noted that the modeler 84 is likewise disposed inside the mask 88, although it is not shown in fig. 8.
It should be noted that each of the above units and modules may be implemented by software or hardware. For the latter, this may be achieved by, but is not limited to, the following: the units and modules are all located in the same processor; alternatively, the units and modules are located in different processors in any combination.
Embodiments of the present disclosure also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing a computer program.
Alternatively, in this embodiment, the above-mentioned computer-readable storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for performing the steps of:
s1, performing light sensing detection on an entity object covered with entity hair in a physical environment, generating a first virtual model in a virtual environment, and determining basic information of the entity hair, wherein the first virtual model comprises a virtual object corresponding to the entity object and the first virtual hair corresponding to the entity hair, and the basic information at least comprises a hair quality and a hair style;
S2, modifying first virtual hair in the first virtual model based on basic information of the entity hair to generate a second virtual model, wherein the second virtual model comprises a virtual object and second virtual hair obtained by modifying the first virtual hair;
and S3, performing styling processing on the entity hair based on the difference information between the first virtual hair and the second virtual hair.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: performing light sensing detection on the entity object and the entity hair to obtain object detection information of the entity object and hair detection information of the entity hair; generating a first virtual model based on the object detection information and the hair detection information, wherein the first virtual model contains the object detection information and the hair detection information; based on the hair detection information, basic information of the entity hair is determined.
Optionally, the object detection information comprises shape information and skin information of the entity object, and the hair detection information comprises mechanical properties, color properties, and endocrine properties of the entity hair, the computer-readable storage medium being further arranged to store program code for performing the steps of: performing laser scanning on the entity object and the entity hair to obtain the shape information of the entity object and the mechanical properties of the entity hair; performing multispectral scanning on the entity object and the entity hair to obtain the skin information of the entity object and the color attributes of the entity hair; and performing infrared scanning on the entity hair to obtain the endocrine attribute of the entity hair.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: generating a virtual object based on the object detection information; generating a first virtual hair based on the hair detection information; and combining the first virtual hair with the virtual object to obtain a first virtual model.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: determining the type of the entity object based on object detection information contained in the first virtual model; acquiring attribute information and hair style preference information of an entity object; the first virtual hair is modified based on the attribute information of the entity object, the hairstyle preference information, the type of the entity object and the basic information of the entity hair, and a second virtual model is generated.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: extracting the characteristics of attribute information, hair style preference information, types of the entity objects and basic information of the entity hair of the entity objects to obtain a plurality of characteristic vectors; capturing interaction relations among a plurality of feature vectors by utilizing a factorization algorithm; a second virtual model is generated based on the interaction relationship between the plurality of feature vectors.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: after modifying the first virtual hair in the first virtual model based on the basic information of the entity hair and generating the second virtual model, outputting the second virtual model on the operation interface; in response to an adjustment instruction acting on the operation interface, adjusting the second virtual hair based on the adjustment instruction to obtain the modified second virtual hair; and performing the styling processing on the entity hair based on the difference information between the first virtual hair and the modified second virtual hair.
Optionally, the difference information includes: a length difference, a position difference, a bending difference, and a bending position difference, the computer-readable storage medium being further configured to store program code for performing the steps of: cutting the entity hair based on the length difference and the position difference to obtain the cut entity hair; and blow-drying the cut entity hair based on the bending difference and the bending position difference.
Optionally, the above computer-readable storage medium is further configured to store program code for performing the steps of: dividing the entity hair to obtain a plurality of styling areas; determining area differences corresponding to the styling areas based on the difference information; and performing the styling processing on the styling areas based on the area differences corresponding to the styling areas.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a computer readable storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present application, a computer-readable storage medium stores thereon a program product capable of implementing the method described above in this embodiment. In some possible implementations, aspects of the disclosed embodiments may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of the disclosure, when the program product is run on the terminal device.
A program product for implementing the above-described method according to an embodiment of the present disclosure may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the embodiments of the present disclosure is not limited thereto, and in the embodiments of the present disclosure, the computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Any combination of one or more computer readable media may be employed by the program product described above. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Embodiments of the present disclosure also provide an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, performing light sensing detection on an entity object covered with entity hair in a physical environment, generating a first virtual model in a virtual environment, and determining basic information of the entity hair, wherein the first virtual model comprises a virtual object corresponding to the entity object and the first virtual hair corresponding to the entity hair, and the basic information at least comprises a hair quality and a hair style;
S2, modifying first virtual hair in the first virtual model based on basic information of the entity hair to generate a second virtual model, wherein the second virtual model comprises a virtual object and second virtual hair obtained by modifying the first virtual hair;
and S3, performing styling processing on the entity hair based on the difference information between the first virtual hair and the second virtual hair.
Optionally, the above processor may be further configured to perform the following steps by a computer program: performing light sensing detection on the entity object and the entity hair to obtain object detection information of the entity object and hair detection information of the entity hair; generating a first virtual model based on the object detection information and the hair detection information, wherein the first virtual model contains the object detection information and the hair detection information; based on the hair detection information, basic information of the entity hair is determined.
Optionally, the object detection information comprises shape information and skin information of the entity object, and the hair detection information comprises mechanical properties, color properties, and endocrine properties of the entity hair, and the processor may be further arranged to perform the following steps by means of a computer program: performing laser scanning on the entity object and the entity hair to obtain the shape information of the entity object and the mechanical properties of the entity hair; performing multispectral scanning on the entity object and the entity hair to obtain the skin information of the entity object and the color attributes of the entity hair; and performing infrared scanning on the entity hair to obtain the endocrine attribute of the entity hair.
Optionally, the above processor may be further configured to perform the following steps by a computer program: generating a virtual object based on the object detection information; generating a first virtual hair based on the hair detection information; and combining the first virtual hair with the virtual object to obtain a first virtual model.
Optionally, the above processor may be further configured to perform the following steps by a computer program: determining the type of the entity object based on object detection information contained in the first virtual model; acquiring attribute information and hair style preference information of an entity object; the first virtual hair is modified based on the attribute information of the entity object, the hairstyle preference information, the type of the entity object and the basic information of the entity hair, and a second virtual model is generated.
Optionally, the above processor may be further configured to perform the following steps by a computer program: extracting the characteristics of attribute information, hair style preference information, types of the entity objects and basic information of the entity hair of the entity objects to obtain a plurality of characteristic vectors; capturing interaction relations among a plurality of feature vectors by utilizing a factorization algorithm; a second virtual model is generated based on the interaction relationship between the plurality of feature vectors.
Optionally, the above processor may be further configured to perform the following steps by a computer program: after modifying the first virtual hair in the first virtual model based on the basic information of the entity hair and generating the second virtual model, outputting the second virtual model on the operation interface; in response to an adjustment instruction acting on the operation interface, adjusting the second virtual hair based on the adjustment instruction to obtain the modified second virtual hair; and performing the styling processing on the entity hair based on the difference information between the first virtual hair and the modified second virtual hair.
Optionally, the difference information includes: a length difference, a position difference, a bending difference, and a bending position difference, and the processor may be further arranged to perform the following steps by means of a computer program: cutting the entity hair based on the length difference and the position difference to obtain the cut entity hair; and blow-drying the cut entity hair based on the bending difference and the bending position difference.
Optionally, the above processor may be further configured to perform the following steps by a computer program: dividing the entity hair to obtain a plurality of styling regions; determining region differences corresponding to the styling regions based on the difference information; and performing styling processing on the styling regions based on the region differences corresponding to the styling regions.
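As a sketch of this region-wise step, assuming a deliberately simple partition of the scalp into equal angular sectors (the disclosure does not specify how the regions are divided):

```python
from collections import defaultdict

def region_differences(roots, diffs, n_regions=4):
    """Group strand-wise differences into styling regions.

    roots : per-strand root angles around the head in degrees [0, 360);
            the head is split into `n_regions` equal angular sectors
            (a hypothetical partition scheme chosen for illustration).
    diffs : per-strand length differences, in the same order as roots.
    Returns {region_index: mean difference for that region}.
    """
    buckets = defaultdict(list)
    for angle, d in zip(roots, diffs):
        region = int(angle % 360 // (360 / n_regions))
        buckets[region].append(d)
    # Aggregate each region to a single difference for its styling pass.
    return {r: sum(v) / len(v) for r, v in buckets.items()}
```

Each region's aggregated difference would then be handed to the cutting or blow-drying step for that region only.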
Fig. 9 is a schematic diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 900 shown in fig. 9 is merely an example and should not impose any limitation on the functionality or the scope of use of the embodiments of the present disclosure.
As shown in fig. 9, the electronic device 900 is embodied in the form of a general-purpose computing device. Components of the electronic device 900 may include, but are not limited to: at least one processor 910, at least one memory 920, a bus 930 connecting the different system components (including the memory 920 and the processor 910), and a display 940.
The memory 920 stores program code executable by the processor 910, such that the processor 910 performs the steps of the various exemplary implementations of the present disclosure described in the method section of the embodiments above.
The memory 920 may include readable media in the form of volatile memory units, such as Random Access Memory (RAM) 9201 and/or cache memory 9202, and may further include Read Only Memory (ROM) 9203, and may also include nonvolatile memory such as one or more magnetic storage devices, flash memory, or other nonvolatile solid state memory.
In some examples, memory 920 may also include a program/utility 9204 having a set (at least one) of program modules 9205, such program modules 9205 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Memory 920 may further include memory located remotely from processor 910, which may be connected to electronic device 900 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The bus 930 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, or a processor or local bus using any of a variety of bus architectures.
Display 940 may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of electronic device 900.
Optionally, the electronic device 900 may also communicate with one or more external devices 1400 (e.g., keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 900, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 900 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 950. Also, the electronic device 900 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet, through the network adapter 960. As shown in fig. 9, the network adapter 960 communicates with other modules of the electronic device 900 over the bus 930. It should be appreciated that although not shown in fig. 9, other hardware and/or software modules may be used in connection with the electronic device 900, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The electronic device 900 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power supply, and/or a camera.
It will be appreciated by those skilled in the art that the configuration shown in fig. 9 is merely illustrative and is not intended to limit the configuration of the electronic device. For example, the electronic device 900 may also include more or fewer components than shown in fig. 9, or have a different configuration from that shown in fig. 9. The memory 920 may be used to store a computer program and corresponding data, such as the computer program and corresponding data for the hair styling method in the embodiments of the present disclosure. The processor 910 executes the computer program stored in the memory 920 to perform various functional applications and data processing, i.e., to implement the hair styling method described above.
The foregoing embodiment numbers of the present disclosure are merely for description and do not represent advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present disclosure, the description of each embodiment has its own emphasis; for any part of an embodiment that is not described in detail, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of the units may be merely a logical function division, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed may be through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present disclosure. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the present disclosure, and such modifications and adaptations shall also fall within the protection scope of the present disclosure.

Claims (15)

1. A hair styling treatment method comprising:
performing light sensing detection on an entity object covered with entity hair in a physical environment, generating a first virtual model in a virtual environment, and determining basic information of the entity hair, wherein the first virtual model comprises a virtual object corresponding to the entity object and a first virtual hair corresponding to the entity hair, and the basic information at least comprises hair quality and hairstyle;
modifying the first virtual hair in the first virtual model based on the basic information of the entity hair to generate a second virtual model, wherein the second virtual model comprises the virtual object and second virtual hair obtained by modifying the first virtual hair;
and styling the entity hair based on difference information between the first virtual hair and the second virtual hair.
2. The method of claim 1, wherein performing light sensing detection on the entity object covered with the entity hair in the physical environment, generating the first virtual model in the virtual environment, and determining the basic information of the entity hair comprises:
performing light sensing detection on the entity object and the entity hair to obtain object detection information of the entity object and hair detection information of the entity hair;
generating the first virtual model based on the object detection information and the hair detection information, wherein the first virtual model contains the object detection information and the hair detection information;
and determining the basic information of the entity hair based on the hair detection information.
3. The method according to claim 2, wherein the object detection information comprises shape information and skin information of the entity object, and the hair detection information comprises mechanical properties, color properties and endocrine properties of the entity hair, and wherein performing light sensing detection on the entity object and the entity hair to obtain the object detection information of the entity object and the hair detection information of the entity hair comprises:
performing laser scanning on the entity object and the entity hair to obtain the shape information of the entity object and the mechanical properties of the entity hair;
performing multispectral scanning on the entity object and the entity hair to obtain the skin information of the entity object and the color attributes of the entity hair;
and performing infrared scanning on the entity hair to obtain the endocrine attribute of the entity hair.
4. The method of claim 2, wherein generating the first virtual model based on the object detection information and the hair detection information comprises:
generating the virtual object based on the object detection information;
generating the first virtual hair based on the hair detection information;
and combining the first virtual hair with the virtual object to obtain the first virtual model.
5. The method of claim 1, wherein modifying the first virtual hair in the first virtual model based on the basic information of the entity hair and generating the second virtual model comprises:
determining the type of the entity object based on object detection information contained in the first virtual model;
acquiring attribute information and hairstyle preference information of the entity object;
and modifying the first virtual hair based on the attribute information of the entity object, the hairstyle preference information, the type of the entity object and the basic information of the entity hair, to generate the second virtual model.
6. The method of claim 5, wherein modifying the first virtual hair based on the attribute information of the entity object, the hairstyle preference information, the type of the entity object and the basic information of the entity hair, and generating the second virtual model comprises:
performing feature extraction on the attribute information of the entity object, the hairstyle preference information, the type of the entity object and the basic information of the entity hair to obtain a plurality of feature vectors;
capturing interaction relations among the plurality of feature vectors by utilizing a factorization algorithm;
and generating the second virtual model based on the interaction relations among the plurality of feature vectors.
7. The method of claim 1, wherein after modifying the first virtual hair in the first virtual model based on the basic information of the entity hair and generating the second virtual model, the method further comprises:
outputting the second virtual model on an operation interface;
in response to an adjustment instruction acting on the operation interface, adjusting the second virtual hair based on the adjustment instruction to obtain a modified second virtual hair;
wherein styling the entity hair based on the difference information between the first virtual hair and the second virtual hair comprises:
styling the entity hair based on difference information between the first virtual hair and the modified second virtual hair.
8. The method of claim 1, wherein the difference information comprises: a length difference, a position difference, a bending difference and a bending position difference, and wherein styling the entity hair based on the difference information between the first virtual hair and the second virtual hair comprises:
cutting the entity hair based on the length difference and the position difference to obtain the cut entity hair;
and performing blow-drying treatment on the cut entity hair based on the bending difference and the bending position difference.
9. The method of claim 1, wherein styling the entity hair based on the difference information between the first virtual hair and the second virtual hair comprises:
dividing the entity hair to obtain a plurality of styling regions;
determining region differences corresponding to the styling regions based on the difference information;
and performing styling processing on the styling regions based on the region differences corresponding to the styling regions.
10. A hair styling treatment device comprising:
the detection module is used for performing light sensing detection on an entity object covered with entity hair in a physical environment, generating a first virtual model in a virtual environment, and determining basic information of the entity hair, wherein the first virtual model comprises a virtual object corresponding to the entity object and a first virtual hair corresponding to the entity hair, and the basic information at least comprises hair quality and hairstyle;
the generation module is used for modifying the first virtual hair in the first virtual model based on the basic information of the entity hair to generate a second virtual model, wherein the second virtual model comprises the virtual object and second virtual hair obtained after the modification of the first virtual hair;
and the styling module is used for styling the entity hair based on the difference information between the first virtual hair and the second virtual hair.
11. A hair styling treatment apparatus, comprising:
a scanner, used for performing light sensing detection on an entity object covered with entity hair in a physical environment, generating a first virtual model in a virtual environment, and determining basic information of the entity hair, wherein the first virtual model comprises a virtual object corresponding to the entity object and a first virtual hair corresponding to the entity hair, and the basic information at least comprises hair quality and hairstyle;
the modeler is connected with the scanner and used for modifying the first virtual hair in the first virtual model to generate a second virtual model, wherein the second virtual model comprises the virtual object and second virtual hair obtained by modifying the first virtual hair;
and the styling device is connected with the scanner and used for styling the entity hair based on the difference information between the first virtual hair and the second virtual hair.
12. The apparatus of claim 11, wherein the scanner comprises:
the laser scanner is used for performing laser scanning on the entity object and the entity hair to obtain the shape information of the entity object and the mechanical properties of the entity hair;
the multispectral scanner is used for performing multispectral scanning on the entity object and the entity hair to obtain the skin information of the entity object and the color attributes of the entity hair;
and the infrared scanner is used for performing infrared scanning on the entity hair to obtain the endocrine attribute of the entity hair.
13. The apparatus of claim 11, wherein the apparatus further comprises:
a mask, wherein the scanner, the modeler and the styling device are arranged inside the mask, the scanner and the modeler extend to the outside of the mask through a plurality of holes in the mask, and the mask is used for covering the entity object and the entity hair.
14. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, wherein the computer program is arranged to perform the method of any one of claims 1 to 9 when run by a processor.
15. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of any of claims 1 to 9.
CN202310506926.7A 2023-05-05 2023-05-05 Method, device, apparatus and storage medium for styling hair Pending CN116597121A (en)

Publications (1)

Publication Number Publication Date
CN116597121A 2023-08-15



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination