CN113628350A - Intelligent hair dyeing and testing method and device - Google Patents
- Publication number
- CN113628350A CN113628350A CN202111058619.4A CN202111058619A CN113628350A CN 113628350 A CN113628350 A CN 113628350A CN 202111058619 A CN202111058619 A CN 202111058619A CN 113628350 A CN113628350 A CN 113628350A
- Authority
- CN
- China
- Prior art keywords
- hair
- region
- hairstyle
- intelligent
- coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 210000004209 hair Anatomy 0.000 title claims abstract description 52
- 238000004043 dyeing Methods 0.000 title claims abstract description 28
- 238000012360 testing method Methods 0.000 title claims description 11
- 230000000694 effects Effects 0.000 claims abstract description 17
- 238000009877 rendering Methods 0.000 claims abstract description 14
- 238000013135 deep learning Methods 0.000 claims abstract description 13
- 238000001514 detection method Methods 0.000 claims abstract description 9
- 230000006978 adaptation Effects 0.000 claims abstract description 6
- 230000001815 facial effect Effects 0.000 claims abstract description 6
- 238000010998 test method Methods 0.000 claims abstract description 4
- 239000000523 sample Substances 0.000 claims description 16
- 238000000034 method Methods 0.000 claims description 14
- 239000011159 matrix material Substances 0.000 claims description 9
- 238000013527 convolutional neural network Methods 0.000 claims description 6
- 238000004040 coloring Methods 0.000 claims description 5
- 230000006870 function Effects 0.000 claims description 5
- 210000004709 eyebrow Anatomy 0.000 claims description 4
- 230000006835 compression Effects 0.000 claims description 3
- 238000007906 compression Methods 0.000 claims description 3
- 230000008030 elimination Effects 0.000 claims description 3
- 238000003379 elimination reaction Methods 0.000 claims description 3
- 238000007499 fusion processing Methods 0.000 claims description 3
- 230000011218 segmentation Effects 0.000 claims description 3
- 238000012706 support-vector machine Methods 0.000 claims description 3
- 230000037308 hair color Effects 0.000 claims 1
- 238000004088 simulation Methods 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008094 contradictory effect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- A—HUMAN NECESSITIES
- A45—HAND OR TRAVELLING ARTICLES
- A45D—HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
- A45D44/00—Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
- A45D44/005—Other cosmetic or toiletry articles, e.g. for hairdressers' rooms for selecting or displaying personal cosmetic colours or hairstyle
-
- A—HUMAN NECESSITIES
- A47—FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
- A47G—HOUSEHOLD OR TABLE EQUIPMENT
- A47G1/00—Mirrors; Picture frames or the like, e.g. provided with heating, lighting or ventilating means
- A47G1/02—Mirrors used as equipment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G06T3/04—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The invention discloses an intelligent hair dyeing and hair-test method comprising the following steps: a camera collects image information in real time; real-time facial feature detection and tracking are performed through deep learning, and face feature key points and binary hair-region image data are extracted; key points are taken at the upper left of the cheek (point A, coordinates [Xleft, Yleft, Zleft]), the upper right of the cheek (point B, coordinates [Xright, Yright, Zright]) and the middle of the brow (point C, coordinates [Xcenter, Ycenter, Zcenter]); the distance between A and B along the X axis (computed from Xleft and Xright) is evaluated as the size of the hairstyle model; the hairstyle model is loaded at the positioned spatial location with the adapted size and rendered for display in real time, and its length data and thickness data can be intelligently customized and adjusted to display different effects. The invention innovatively simulates virtual hair-try-on and hair-dyeing effects, lets a user preview his or her own hairstyle in advance, and provides an intelligent haircut scheme for barber shops.
Description
Technical Field
The invention relates to the technical field of three-dimensional simulated image processing, and in particular to an intelligent hair dyeing and hair-test method and device.
Background
In an existing barber shop, a client can only describe a style or point to a model's picture to tell the barber how to shape the hair; the client cannot preview the result, and once the haircut starts it cannot be undone. Existing barber-shop mirrors are plain mirrors that only let a client see his or her current appearance: they provide no intelligent hair-try-on or hair-dyeing experience, cannot show the client's condition after cutting and dyeing, and cannot meet more creative demands. Existing simulated hair-try-on and dyeing technology is embodied only in phone apps or computer desktop applications, whose effect cannot conveniently be viewed in real time during the hair-care process. A method and apparatus are therefore desirable that implement a hair-try-on and dyeing function and let the client view the effect of cutting and dyeing in real time.
Disclosure of Invention
The invention aims to provide an intelligent hair dyeing and hair-test method and device that are convenient to operate and innovative, simulate virtual hair-try-on and hair-dyeing effects, let a user preview his or her own hairstyle in advance, and provide an intelligent haircut scheme for barber shops.
The invention is realized by the following technical scheme:
an intelligent hair dyeing and testing method comprises the following steps:
step S1, the camera collects image information in real time;
step S2, performing real-time facial feature detection and tracking through deep learning, and extracting face feature key points and hair part binary image data;
step S3, selecting key point data of the left-cheek, right-cheek and mid-brow regions from the face key point data obtained in step S2: a point A at the upper left of the cheek with coordinates [Xleft, Yleft, Zleft], a point B at the upper right of the cheek with coordinates [Xright, Yright, Zright], and a point C in the middle of the brow with coordinates [Xcenter, Ycenter, Zcenter];
step S4, using the C coordinate obtained in step S3 to position the hairstyle model in space, and evaluating the distance between A and B along the X axis (computed from Xleft and Xright) as the size value of the hairstyle model;
s5, loading a hairstyle model to the spatial position and the adaptation size obtained in the S4, and performing real-time rendering display;
step S6, during real-time rendering of the hairstyle, intelligently customizing and adjusting the length data and thickness data of the hairstyle model to display different effects;
and step S7, coloring the white selected area of the hair-region binary image obtained in step S2, and superposing and fusing the colored image with the camera frame image for rendering and display.
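Steps S3 to S5 above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; in particular, using the absolute X-axis difference between A and B as the size value is an assumption, since the translated formula is ambiguous.

```python
import numpy as np

def fit_hairstyle(A, B, C):
    """Position and scale a hairstyle model from three facial landmarks.

    A: upper-left cheek point  [Xleft, Yleft, Zleft]
    B: upper-right cheek point [Xright, Yright, Zright]
    C: mid-brow point          [Xcenter, Ycenter, Zcenter]

    Returns (anchor, size): the anchor is the spatial position at which
    the model is placed (the mid-brow point C, per step S4); the size is
    assumed here to be the absolute X-axis distance between A and B.
    """
    A, B, C = (np.asarray(p, dtype=float) for p in (A, B, C))
    size = abs(B[0] - A[0])   # X-axis distance between the cheek points
    return C, size

# Dummy landmark coordinates for demonstration only
anchor, size = fit_hairstyle([-7.0, 1.0, 0.2], [7.5, 1.1, 0.2], [0.1, 5.0, 0.3])
# anchor -> [0.1, 5.0, 0.3]; size -> 14.5
```

A renderer would then translate the hairstyle mesh to `anchor` and scale it by `size` relative to a reference head width before each frame is drawn.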
Further, in step S2, the face feature key points are obtained by Caffe-based face key point detection with model function Y = F(X, W), where X is the input face image, W denotes the model parameters to be learned, and Y = [(x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5)] is the set of detected face point coordinates.
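As a hedged illustration of the model-function contract Y = F(X, W), the toy stand-in below maps an image through a parameter matrix to five (x, y) landmark pairs. A real system would run a trained Caffe network instead; the linear form and the matrix shapes here are invented purely to show the input/output contract.

```python
import numpy as np

def landmark_model(X, W):
    """Toy stand-in for Y = F(X, W): maps a flattened face image X
    through hypothetical learned parameters W to 5 (x, y) landmarks.
    W has shape (10, D) so the output reshapes to 5 coordinate pairs."""
    y = W @ X.ravel()          # 10 raw values
    return y.reshape(5, 2)     # [(x1, y1), ..., (x5, y5)]

rng = np.random.default_rng(0)
X = rng.random((8, 8))         # dummy "face image"
W = rng.random((10, 64))       # dummy "learned" parameters
Y = landmark_model(X, W)       # Y has shape (5, 2)
```

In the actual pipeline, Y would supply the cheek and brow points A, B, and C used in steps S3 and S4.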
Further, in step S2, the hair-region binary image is obtained by a deep-learning region-based convolutional neural network: about 2000 candidate regions are selected from the image by selective search, and features are extracted from each region to identify the hair segmentation; the specific steps are as follows:
s21, inputting an image, and obtaining M region propofol by using a selective search;
s22, converting all region probes to a fixed size and using the converted region probes as the input of the trained CNN network to obtain 4096-dimensional characteristics of an f7 layer, so that the output of the f7 layer is M x 4096;
s23, for each category, scoring the extracted features by using the trained SVM classifier corresponding to the category, so that the weight matrix of the SVM is 4096 × N, and N is the number of categories;
s24, removing the region probes in each column in the scoring matrix by non-maximum compression, namely removing a plurality of region probes with higher repetition rate to obtain a plurality of region probes with highest scores in the column; after the elimination, finding the highest score from the remaining region propofol, then calculating whether the sum of the other region propofol and the image processing sum with the highest score exceeds the IOU threshold value or not, and continuously eliminating the excess until no region propofol is left; the same operation is adopted for each column, and finally, each column, namely each category can obtain the corresponding region dispose;
s25, carrying out regression on the multiple types of region propofol obtained in the step S24 by using K regressors, and adopting the characteristics of the pool5 layer; the weight W of the pool5 feature is used directly at the time of the training phase; and finally obtaining the corrected bounding box of each category.
Further, in step S21, M = 2000, that is, the number of region proposals is 2000.
Further, in step S23, the number of SVMs is 20 and N = 20; the score matrix is 2000 × 20, each entry being the score of a region proposal for the corresponding category.
Further, in step S25, K =20, that is, the number of regressors is 20.
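The pruning described in step S24 is standard greedy non-maximum suppression. A minimal sketch follows; the 0.5 IoU threshold is an assumed value for illustration, not taken from the patent.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression as in step S24: repeatedly keep
    the highest-scoring proposal and discard remaining proposals whose
    IoU with it exceeds the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores, thresh=0.5)   # -> [0, 2]
```

In the method described above, this routine would run once per column of the 2000 × 20 score matrix, i.e. once per category.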
Further, an intelligent hair dyeing and hair-test device comprises an intelligent mirror; the intelligent mirror performs intelligent hair dyeing and hair-try-on, and displays different effects by intelligently customizing and adjusting the length data and thickness data of the hairstyle model.
The invention has the beneficial effects that:
the invention collects image information in real time through a camera; performing real-time facial feature detection and tracking through deep learning, and extracting face feature key points and hair part binary image data; selecting key point data of regions such as left and right cheeks, eyebrow centers and the like, and assuming that the key point data are respectively a point A at the top left of each cheek, coordinates [ Xleft, Yleft, Zleft ], a point B at the top right of each cheek, coordinates [ Xright, Yright, Zright ], a point C in the middle of each eyebrow center and coordinates [ Xcenter, center, ZCenter ] to obtain a C coordinate for positioning the spatial position of the hairstyle model; evaluating the distance between the A and the B on the X axis, wherein a formula Xleft + Xright is the value of the size of the hairstyle model; loading the hairstyle model to the space position and the adaptation size of the positioning hairstyle model, and performing real-time rendering display; in the real-time rendering process of the hairstyle, length data and thickness data of the hairstyle model can be intelligently adjusted in a user-defined manner to display different effects; coloring the white selected area on the binary image of the hair part, and superposing, fusing and displaying the colored image and the camera picture image; the invention is more innovative and simulates the virtual hair trial and dyeing effect, can meet the requirement that a user watches the own hair style in advance, and provides an intelligent hair cutting scheme for a barber shop.
Drawings
FIG. 1 is a block diagram of a process flow of an embodiment of the present invention.
Detailed Description
The invention will be described in detail with reference to the drawings and specific embodiments, which are illustrative of the invention and are not to be construed as limiting the invention.
It should be noted that the descriptions referring to "first" and "second" in the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with one another, provided the combination is realizable by a person of ordinary skill in the art; when technical solutions are contradictory or cannot be realized, the combination should be considered not to exist, and it falls outside the protection scope of the present invention.
In the present invention, unless expressly stated or limited otherwise, the term "coupled" is to be interpreted broadly: "coupled" may mean fixedly coupled, detachably coupled, or integrally formed; mechanically or electrically connected; directly connected or indirectly connected through an intervening medium; or denoting internal communication between two elements or any other suitable relationship between them. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
An intelligent hair dyeing and testing method comprises the following steps:
step S1, the camera collects image information in real time;
step S2, performing real-time facial feature detection and tracking through deep learning, and extracting face feature key points and hair part binary image data;
step S3, selecting key point data of regions such as left and right cheeks, eyebrow center and the like from the key point data of the face obtained in step S2, and assuming that the key point data are respectively a point A at the top left of each cheek, coordinates [ Xleft, YLeft, ZLeft ], a point B at the top right of each cheek, coordinates [ Xright, YRight, Zright ], a point C in the middle of the eyebrow center and coordinates [ Xcenter, Ycenter, ZCenter ];
step S4, using the C coordinate obtained in the step S3 to position the space position of the hairstyle model; evaluating the distance between the A and the B on the X axis, wherein a formula Xleft + Xright is the value of the size of the hairstyle model;
s5, loading a hairstyle model to the spatial position and the adaptation size obtained in the S4, and performing real-time rendering display;
step S6, in the real-time rendering process of the hairstyle, length data and thickness data of the hairstyle model can be intelligently adjusted in a user-defined mode to carry out different effect display;
and step S7, coloring the white selected area by the hair part binary image obtained in the step S2, and performing superposition fusion processing and rendering display on the colored image and the camera picture image.
It should be noted that the emergence of deep-learning frameworks lowers the barrier to entry: a user does not need to code a complex neural network from scratch, but can select an existing model as needed and obtain the model parameters through training, add layers on top of an existing model, or choose the required classifier and optimization algorithm. Different frameworks suit different fields, and their applicable domains are not entirely consistent. In general, a deep-learning framework provides a set of deep-learning components; when a new algorithm is needed, the user defines it and invokes it through the framework's function interfaces.
Specifically, in this embodiment, in step S2, the face feature key points are obtained by Caffe-based face key point detection with model function Y = F(X, W), where X is the input face image, W denotes the model parameters to be learned, and Y = [(x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5)] is the set of detected face point coordinates.
Specifically, in this embodiment, in step S2, the hair-region binary image is obtained by a deep-learning region-based convolutional neural network: about 2000 candidate regions are selected from the image by selective search, and features are extracted from each region to identify the hair segmentation; the specific steps are as follows:
s21, inputting an image, and obtaining M region propofol by using a selective search;
s22, converting all region probes to a fixed size and using the converted region probes as the input of the trained CNN network to obtain 4096-dimensional characteristics of an f7 layer, so that the output of the f7 layer is M x 4096;
s23, for each category, scoring the extracted features by using the trained SVM classifier corresponding to the category, so that the weight matrix of the SVM is 4096 × N, and N is the number of categories;
s24, removing the region probes in each column in the scoring matrix by non-maximum compression, namely removing a plurality of region probes with higher repetition rate to obtain a plurality of region probes with highest scores in the column; after the elimination, finding the highest score from the remaining region propofol, then calculating whether the sum of the other region propofol and the image processing sum with the highest score exceeds the IOU threshold value or not, and continuously eliminating the excess until no region propofol is left; the same operation is adopted for each column, and finally, each column, namely each category can obtain the corresponding region dispose;
s25, carrying out regression on the multiple types of region propofol obtained in the step S24 by using K regressors, and adopting the characteristics of the pool5 layer; the weight W of the pool5 feature is used directly at the time of the training phase; and finally obtaining the corrected bounding box of each category.
Specifically, in this embodiment, in step S21, M = 2000, that is, the number of region proposals is 2000.
Specifically, in this embodiment, in step S23, the number of SVMs is 20 and N = 20; the score matrix is 2000 × 20, each entry being the score of a region proposal for the corresponding category.
Specifically, in the embodiment of the present invention, in step S25, K =20, that is, the number of regressors is 20.
Specifically, in this embodiment, the intelligent hair dyeing and hair-test device comprises an intelligent mirror; the intelligent mirror performs intelligent hair dyeing and hair-try-on, and displays different effects by intelligently customizing and adjusting the length data and thickness data of the hairstyle model.
Specifically, referring to fig. 1, the invention first collects image information in real time through a camera; performs real-time facial feature detection and tracking through deep learning and extracts face feature key points and binary hair-region image data; selects key points of the left-cheek, right-cheek and mid-brow regions, namely a point A at the upper left of the cheek with coordinates [Xleft, Yleft, Zleft], a point B at the upper right of the cheek with coordinates [Xright, Yright, Zright], and a point C in the middle of the brow with coordinates [Xcenter, Ycenter, Zcenter], the C coordinate being used to position the hairstyle model in space; evaluates the distance between A and B along the X axis (computed from Xleft and Xright) as the size of the hairstyle model; loads the hairstyle model at the positioned spatial location with the adapted size and renders it for display in real time; during real-time rendering, the length data and thickness data of the hairstyle model can be intelligently customized and adjusted to display different effects; and colors the white selected area of the hair-region binary image, superposing and fusing the colored image with the camera frame image for display. The invention innovatively simulates virtual hair-try-on and hair-dyeing effects, lets a user preview his or her own hairstyle in advance, and provides an intelligent haircut scheme for barber shops.
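The coloring and fusion of step S7 can be sketched as a mask-guided alpha blend of a solid dye color over the camera frame. The blend weight and the pure-NumPy formulation are assumptions for illustration; a production system would likely blend in a perceptual color space and preserve hair luminance.

```python
import numpy as np

def dye_hair(frame, hair_mask, color, alpha=0.6):
    """Step S7 sketch: colour the white (hair) region of the binary mask
    and alpha-blend the coloured layer over the camera frame.
    frame: H x W x 3 uint8 image; hair_mask: H x W binary (hair == 1);
    color: target RGB triple; alpha is an assumed blending weight."""
    out = frame.astype(float)
    layer = np.empty_like(out)
    layer[...] = color                       # solid dye colour layer
    m = hair_mask.astype(bool)[..., None]    # broadcast mask over channels
    out = np.where(m, (1 - alpha) * out + alpha * layer, out)
    return out.astype(np.uint8)

frame = np.full((4, 4, 3), 100, dtype=np.uint8)   # dummy grey camera frame
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2] = 1                                      # pretend the top half is hair
dyed = dye_hair(frame, mask, color=(200, 40, 40))
# dyed[0, 0] -> [160, 64, 64]; dyed[3, 3] stays [100, 100, 100]
```

Only pixels where the binary hair mask is white are recolored; the rest of the frame passes through unchanged, which matches the superposition-and-fusion behavior described above.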
The technical solutions provided by the embodiments of the present invention are described in detail above, and the principles and embodiments of the present invention are explained herein by using specific examples, and the descriptions of the embodiments are only used to help understanding the principles of the embodiments of the present invention; meanwhile, for a person skilled in the art, according to the embodiments of the present invention, there may be variations in the specific implementation manners and application ranges, and in summary, the content of the present description should not be construed as a limitation to the present invention.
Claims (7)
1. An intelligent hair dyeing and testing method is characterized by comprising the following steps:
step S1, the camera collects image information in real time;
step S2, performing real-time facial feature detection and tracking through deep learning, and extracting face feature key points and hair part binary image data;
step S3, selecting key point data of the left-cheek, right-cheek and mid-brow regions from the face key point data obtained in step S2: a point A at the upper left of the cheek with coordinates [Xleft, Yleft, Zleft], a point B at the upper right of the cheek with coordinates [Xright, Yright, Zright], and a point C in the middle of the brow with coordinates [Xcenter, Ycenter, Zcenter];
step S4, using the C coordinate obtained in step S3 to position the hairstyle model in space, and evaluating the distance between A and B along the X axis (computed from Xleft and Xright) as the size value of the hairstyle model;
s5, loading a hairstyle model to the spatial position and the adaptation size obtained in the S4, and performing real-time rendering display;
step S6, during real-time rendering of the hairstyle, intelligently customizing and adjusting the length data and thickness data of the hairstyle model to display different effects;
and step S7, coloring the white selected area of the hair-region binary image obtained in step S2, and superposing and fusing the colored image with the camera frame image for rendering and display.
2. The intelligent hair dyeing and testing method according to claim 1, characterized in that: in step S2, the face feature key points are obtained by Caffe-based face key point detection with model function Y = F(X, W), where X is the input face image, W denotes the model parameters to be learned, and Y = [(x1, y1), (x2, y2), (x3, y3), (x4, y4), (x5, y5)] is the set of detected face point coordinates.
3. The intelligent hair dyeing and testing method according to claim 1, characterized in that: in step S2, the hair-region binary image is obtained by a deep-learning region-based convolutional neural network, in which about 2000 candidate regions are selected from the image and features are extracted from each region to identify the hair segmentation; the specific steps are as follows:
s21, inputting an image, and obtaining M region propofol by using a selective search;
s22, converting all region probes to a fixed size and using the converted region probes as the input of the trained CNN network;
s23, for each category, scoring the extracted features by using the trained SVM classifier corresponding to the category, so that the weight matrix of the SVM is 4096 × N, and N is the number of categories;
s24, removing the region probes in each column in the scoring matrix by non-maximum compression, namely removing a plurality of region probes with higher repetition rate to obtain a plurality of region probes with highest scores in the column; after the elimination, finding the highest score from the remaining region propofol, then calculating whether the sum of the other region propofol and the image processing sum with the highest score exceeds the IOU threshold value or not, and continuously eliminating the excess until no region propofol is left; the same operation is adopted for each column, and finally, each column, namely each category, obtains a corresponding region propofol;
s25, performing regression on the region primers of the multiple categories obtained in the step S24 by using K regressors, and finally obtaining a corrected bounding box of each category.
4. The intelligent hair dyeing and testing method according to claim 3, characterized in that: in step S21, M = 2000, that is, the number of region proposals is 2000.
5. The intelligent hair dyeing and testing method according to claim 3, characterized in that: in step S23, the number of SVMs is 20 and N = 20; the score matrix is 2000 × 20, each entry being the score of a region proposal for the corresponding category.
6. The intelligent hair dyeing and testing method according to claim 3, characterized in that: in step S25, K =20, that is, the number of regressors is 20.
7. An apparatus for using the intelligent hair dyeing and testing method of any one of claims 1-6, characterized in that: the apparatus comprises an intelligent mirror; the intelligent mirror performs intelligent hair dyeing and hair-try-on, and displays different effects by intelligently customizing and adjusting the length data and thickness data of the hairstyle model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111058619.4A CN113628350A (en) | 2021-09-10 | 2021-09-10 | Intelligent hair dyeing and testing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111058619.4A CN113628350A (en) | 2021-09-10 | 2021-09-10 | Intelligent hair dyeing and testing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113628350A true CN113628350A (en) | 2021-11-09 |
Family
ID=78389577
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111058619.4A Pending CN113628350A (en) | 2021-09-10 | 2021-09-10 | Intelligent hair dyeing and testing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113628350A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102622613A (en) * | 2011-12-16 | 2012-08-01 | 彭强 | Hair style design method based on eyes location and face recognition |
CN103489219A (en) * | 2013-09-18 | 2014-01-01 | 华南理工大学 | 3D hair style effect simulation system based on depth image analysis |
KR20170011261A (en) * | 2015-07-22 | 2017-02-02 | 이서진 | Apparatus for hair style 3D simulation and method for simulating the same |
CN107742273A (en) * | 2017-10-13 | 2018-02-27 | 广州帕克西软件开发有限公司 | A kind of virtual try-in method of 2D hair styles and device |
KR20190052832A (en) * | 2017-11-09 | 2019-05-17 | (주)코아시아 | 3D simulation system for hair-styling |
CN109903257A (en) * | 2019-03-08 | 2019-06-18 | 上海大学 | A kind of virtual hair-dyeing method based on image, semantic segmentation |
CN111340921A (en) * | 2018-12-18 | 2020-06-26 | 北京京东尚科信息技术有限公司 | Dyeing method, dyeing apparatus, computer system and medium |
CN111510769A (en) * | 2020-05-21 | 2020-08-07 | 广州华多网络科技有限公司 | Video image processing method and device and electronic equipment |
CN112116699A (en) * | 2020-08-14 | 2020-12-22 | 浙江工商大学 | Real-time real-person virtual trial sending method based on 3D face tracking |
CN112906585A (en) * | 2021-02-25 | 2021-06-04 | 商楚苘 | Intelligent hairdressing auxiliary system, method and readable medium based on machine learning |
Application Events
- 2021-09-10: Application CN202111058619.4A filed in China (CN); status: Pending
Non-Patent Citations (1)
Title |
---|
陈云霁 (Chen Yunji) et al.: "智能计算系统" (AI Computing Systems), vol. 1, 30 April 2020, 机械工业出版社 (China Machine Press), pages: 72-78 * |
Similar Documents
Publication | Title |
---|---|
CN109740466B (en) | Method for acquiring advertisement putting strategy and computer readable storage medium |
CN109325437B (en) | Image processing method, device and system |
KR102241153B1 (en) | Method, apparatus, and system generating 3D avatar from 2D image |
US20210174072A1 (en) | Microexpression-based image recognition method and apparatus, and related device |
EP3493138A1 (en) | Recommendation system based on a user's physical features |
JP7407115B2 (en) | Machine-implemented facial health and beauty assistant |
CN109310196B (en) | Makeup assisting device and makeup assisting method |
WO2013005447A1 (en) | Face impression analysis method, cosmetic counseling method, and face image generation method |
Arora et al. | AutoFER: PCA and PSO based automatic facial emotion recognition |
CN107911643B (en) | Method and device for showing scene special effect in video communication |
JP2012181688A (en) | Information processing device, information processing method, information processing system, and program |
CN110909680A (en) | Facial expression recognition method and device, electronic equipment and storage medium |
CN107632706A (en) | Application data processing method and system for a multi-modal virtual human |
EP4073682B1 (en) | Generating videos, which include modified facial images |
TWI780919B (en) | Method and apparatus for processing face image, electronic device and storage medium |
CN113661520A (en) | Modifying the appearance of hair |
CN114904268A (en) | Virtual image adjusting method and device, electronic equipment and storage medium |
CN115546361A (en) | Three-dimensional cartoon image processing method and device, computer equipment and storage medium |
CN113628350A (en) | Intelligent hair dyeing and testing method and device |
Nakamae et al. | Recommendations for Attractive Hairstyles |
JP4893968B2 (en) | Method for composing face images |
JP2022078936A (en) | Skin image analysis method |
Ballagas et al. | Exploring pervasive making using generative modeling and speech input |
JP6320844B2 (en) | Apparatus, program, and method for estimating emotion based on degree of influence of parts |
CN112802031A (en) | Real-time virtual hair try-on method based on three-dimensional human head tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||