CN108154142B - Skin wrinkle evaluation method and system based on voice recognition - Google Patents

Skin wrinkle evaluation method and system based on voice recognition

Info

Publication number
CN108154142B
CN108154142B CN201810085453.7A
Authority
CN
China
Prior art keywords
face
skin
voice
pixel point
cloud server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810085453.7A
Other languages
Chinese (zh)
Other versions
CN108154142A (en)
Inventor
陈碧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Taitai Enterprise Management Co.,Ltd.
Original Assignee
Hangzhou Meijie Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Meijie Technology Co ltd filed Critical Hangzhou Meijie Technology Co ltd
Priority to CN201810085453.7A priority Critical patent/CN108154142B/en
Publication of CN108154142A publication Critical patent/CN108154142A/en
Application granted granted Critical
Publication of CN108154142B publication Critical patent/CN108154142B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention discloses a skin wrinkle evaluation method and system based on voice recognition, belonging to the technical field of skin detection. In the method, an image acquisition device shoots face images of the same face under the irradiation of different light sources, and the acquired human physiological characteristic information together with the face images is sent to a cloud server as an evaluation object. The cloud server obtains a unit normal vector of each pixel point through photometric stereo processing, obtains the surface depth information of each pixel point from the unit normal vectors, and forms a three-dimensional image of the face. The cloud server then judges the three-dimensional face image with a skin wrinkle evaluation model formed by pre-training to obtain a corresponding skin wrinkle evaluation result, makes a comprehensive evaluation result with reference to the human physiological characteristic information, and outputs it. The beneficial effects of this technical scheme are that it combines the functions of skin detection and a cosmetic mirror, provides three-dimensional reconstruction and three-dimensional evaluation of skin wrinkles, and incorporates the user's physiological characteristic information, so that the user can accurately grasp his or her own skin wrinkle condition.

Description

Skin wrinkle evaluation method and system based on voice recognition
Technical Field
The invention relates to the technical field of skin detection, in particular to a method and a system for evaluating skin wrinkles based on voice recognition.
Background
As quality of life improves, more and more people, especially women, are paying attention to their skin condition, and more and more care products targeted at specific skin conditions are appearing on the market. Women are particularly concerned about the skin condition of the face, such as whether there are wrinkles at the corners of the eyes or other facial wrinkles, and select different care products accordingly.
Although some skin detection devices, such as skin detectors, already exist on the market, they are expensive and complicated to operate and are not suitable for home use. Moreover, these devices usually only perform planar data processing on the sensing data acquired by their sensors and do not address three-dimensional reconstruction of the face, and the physiological characteristic information of the user, such as the age of the person being examined, is often ignored during detection. As a result, such devices detect skin conditions like oily skin or lack of skin moisture reasonably well, but perform poorly on facial skin wrinkles, which cannot meet users' requirements.
Disclosure of Invention
In view of the above problems in the prior art, a technical scheme of a skin wrinkle evaluation method and system based on voice recognition is provided, which aims to combine the functions of skin detection and a cosmetic mirror and to provide three-dimensional reconstruction and three-dimensional evaluation of skin wrinkles, so that the user can accurately grasp his or her own skin wrinkle condition and the user experience is improved.
The technical scheme specifically comprises the following steps:
a skin wrinkle evaluation method based on voice recognition is characterized in that a skin detection mirror is adopted to evaluate wrinkles of skin of a human face, the skin detection mirror comprises an image acquisition device, a voice prompt device, a voice acquisition device, a voice recognition device and a data processing device, the skin detection mirror is remotely connected with a cloud server,
further comprising:
step S1, under the irradiation of different light sources, the image acquisition device is adopted to respectively shoot different face images related to the same face;
step S2, the data processing device sends a prompt instruction to the voice prompt device after the image acquisition device acquires the face image;
step S3, after receiving the prompt instruction, the voice prompt device plays a preset voice prompt;
step S4, after the voice prompt is played, the data processing device sends a starting instruction to the voice acquisition device to put the voice acquisition device into a working state;
step S5, the user answers according to the voice prompt, and the voice acquisition device collects the voice signal of the answer information and outputs the voice signal to the voice recognition device;
step S6, the voice recognition device recognizes the human physiological information in the voice signal, and the data processing device sends the human physiological information and the face image together to the cloud server as an evaluation object;
step S7, the cloud server obtains a unit normal vector of each pixel point in the face image through photometric stereo processing;
step S8, the cloud server processes the unit normal vector to obtain surface depth information of each pixel point;
step S9, the cloud server forms a face three-dimensional image of a face according to the surface depth information of each pixel point and the position information of each pixel point on the face image;
step S10, the cloud server judges the face three-dimensional image by adopting a skin wrinkle evaluation model formed by pre-training so as to obtain a corresponding skin wrinkle evaluation result;
and step S11, the cloud server makes a comprehensive evaluation result according to the wrinkle evaluation result and the reference person physiological characteristic information and outputs the comprehensive evaluation result to a user terminal remotely connected with the cloud server for a user to view.
Preferably, in the method for evaluating skin wrinkles based on voice recognition, in step S1, the face image is captured under at least 3 different light sources, and the light sources do not all lie on the same straight line.
Preferably, the method for evaluating skin wrinkles based on voice recognition, wherein the number of the light sources is 12.
Preferably, the method for evaluating skin wrinkles based on speech recognition, wherein the step S7 specifically includes:
step S71, one pixel point is taken as a pixel point to be processed;
step S72, determining whether the pixel to be processed is a highlight pixel:
if yes, go to step S73;
if not, go to step S74;
s73, replacing the non-highlight pixel points at the same position in different face images, and then turning to S74;
step S74, processing to obtain a surface normal vector of the pixel point to be processed;
and step S75, processing according to the surface normal vector of the pixel point to obtain a unit normal vector of the pixel point.
Preferably, the method for evaluating skin wrinkles based on speech recognition, wherein the step S8 specifically includes:
and aiming at each pixel point, establishing a preset constraint formula according to the unit normal vector, and processing according to the constraint formula to obtain the surface depth information of the pixel point.
Preferably, the method for evaluating skin wrinkles based on speech recognition, wherein the constraint formula is specifically:
n · V1 = n · [(x+1, y, z(x+1, y)) - (x, y, z(x, y))] = 0;
n · V2 = n · [(x, y+1, z(x, y+1)) - (x, y, z(x, y))] = 0;
wherein,
V1 and V2 are both tangent vectors of the face surface of the human face;
n is used to represent the unit normal vector of the pixel point;
x and y are used for representing the position information of the plane of the pixel point on the face image;
z is used to represent the surface depth information.
Preferably, the method for evaluating skin wrinkles based on voice recognition, wherein in step S10, the skin wrinkle evaluation model includes: a first evaluation model for evaluating canthus wrinkles of a human face;
the first evaluation model is formed by adopting deep neural network learning according to a first training set prepared in advance;
the first training set comprises a plurality of first training data pairs, and each first training data pair comprises stereo image data of a human face with different shapes of eye corner wrinkles and evaluation scores corresponding to the stereo image data.
Preferably, the method for evaluating skin wrinkles based on voice recognition, wherein in step S10, the skin wrinkle evaluation model includes: a second evaluation model for evaluating the nasolabial folds of the face;
the second evaluation model is formed by adopting deep neural network learning according to a second training set prepared in advance;
the second training set comprises a plurality of second training data pairs, and each second training data pair comprises stereo image data of faces with nasolabial folds of different shapes and evaluation scores corresponding to the stereo image data.
A system for evaluating skin wrinkles based on speech recognition, comprising:
the skin detection mirror is internally provided with an image acquisition device, a voice prompt device, a voice acquisition device, a voice recognition device and a data processing device, and the image acquisition device, the voice prompt device, the voice acquisition device and the voice recognition device are respectively connected with the data processing device;
the image acquisition device is used for respectively shooting different face images related to the same face under the irradiation of different light sources;
the voice prompt device is used for playing a preset voice prompt so that the user can answer according to the voice prompt;
the voice acquisition device is used for collecting the voice signal of the answer information and outputting the voice signal to the voice recognition device;
the voice recognition device is used for recognizing the human physiological information in the voice signal;
the cloud server is remotely connected with the skin detection mirror, and the data processing device of the skin detection mirror sends the human physiological information and the face image to the cloud server as an evaluation object;
the cloud server adopts the skin wrinkle evaluation method, evaluates the skin wrinkles of the human face according to the human face image, and outputs an evaluation result to a user terminal remotely connected with the cloud server for a user to check.
The beneficial effects of the above technical scheme are: by combining the functions of skin detection and a cosmetic mirror, three-dimensional reconstruction and three-dimensional evaluation of skin wrinkles are provided, so that the user can accurately grasp his or her own skin wrinkle condition; at the same time, the wrinkle evaluation result is turned into a comprehensive evaluation by means of the user's physiological characteristic information for the user to view, which improves the user experience.
Drawings
FIG. 1 is a schematic flow chart of a method for evaluating skin wrinkles based on speech recognition according to a preferred embodiment of the present invention;
FIG. 2 is a flowchart illustrating the detailed process of step S7 based on FIG. 1 according to a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of the skin wrinkle evaluation system based on speech recognition according to the preferred embodiment of the present invention;
FIG. 4 is a schematic view of a skin detection mirror in accordance with a preferred embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
In light of the above problems in the prior art, there is provided a method for evaluating wrinkles on skin based on voice recognition, in which a skin detection mirror is used to evaluate wrinkles on skin of a human face, the skin detection mirror includes an image acquisition device, a voice prompt device, a voice acquisition device, a voice recognition device and a data processing device, the skin detection mirror is remotely connected to a cloud server,
further comprising:
step S1, under the irradiation of different light sources, the image acquisition device is adopted to respectively shoot different face images related to the same face;
step S2, the data processing device sends a prompt instruction to the voice prompt device after the image acquisition device acquires the face image;
step S3, after receiving the prompt instruction, the voice prompt device plays a preset voice prompt;
step S4, after the voice prompt is played, the data processing device sends a starting instruction to the voice acquisition device to put the voice acquisition device into a working state;
step S5, the user answers according to the voice prompt, and the voice acquisition device collects the voice signal of the answer information and outputs the voice signal to the voice recognition device;
step S6, the voice recognition device recognizes the human physiological information in the voice signal, and the data processing device sends the human physiological information and the face image together to the cloud server as an evaluation object;
step S7, the cloud server obtains a unit normal vector of each pixel point in the face image through photometric stereo processing;
step S8, the cloud server processes the unit normal vector to obtain surface depth information of each pixel point;
step S9, the cloud server forms a face three-dimensional image of a face according to the surface depth information of each pixel point and the position information of each pixel point on the face image;
step S10, the cloud server judges the face three-dimensional image by adopting a skin wrinkle evaluation model formed by pre-training so as to obtain a corresponding skin wrinkle evaluation result;
and step S11, the cloud server makes a comprehensive evaluation result according to the wrinkle evaluation result and the reference person physiological characteristic information and outputs the comprehensive evaluation result to a user terminal remotely connected with the cloud server for a user to view.
Specifically, in this embodiment, the skin detection mirror may be modified by using a common cosmetic mirror as a mirror body, that is, the skin detection mirror is made into a cosmetic mirror that can be normally used by people, so that the functions of skin wrinkle detection and evaluation can be integrated into the cosmetic mirror while providing daily use for people.
In this embodiment, an image collecting device is disposed on the skin detection mirror, and the image collecting device may be a camera disposed on the skin detection mirror, and specifically may be disposed directly above the skin detection mirror, so as to conveniently capture an entire image of a face of a user.
In this embodiment, the image capturing device on the skin detection mirror respectively captures face images of the same face under different light sources, where the different light sources are point light sources in different directions and are placed at different positions around the skin detection mirror, so as to form different face images of the same face under irradiation of different point light sources. In these face images, the positions of the same pixel point relative to the face image are the same, and the difference is only the brightness of the pixel point, which will be described in detail below.
In this embodiment, the preset voice prompt may include questions about gender and age, because the skin conditions presented at different genders and ages are different;
the voice acquisition device collects the voice signal of the user's answer, the voice recognition device recognizes the person's physiological information such as gender and age in the voice signal, and the data processing device sends this physiological information and the face image to the cloud server as an evaluation object.
In this embodiment, the skin detection mirror is remotely connected to a cloud server, and uploads the different face images associated with the same face to the cloud server. Specifically, a wireless communication module such as a WiFi module may be disposed in the skin detection mirror, and remotely connected to the cloud server through an indoor route.
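By way of illustration only, the upload from the data processing device to the cloud server might look like the following sketch; the endpoint URL, field names and the use of HTTP are assumptions, since the patent does not specify the transport protocol.

```python
import requests

# Hypothetical endpoint; the patent does not name the cloud server's API.
CLOUD_ENDPOINT = "https://example-cloud-server/api/evaluation-objects"

def upload_evaluation_object(image_paths, gender, age):
    """Send the face images plus the recognized physiological information
    (for example gender and age) to the cloud server as one evaluation object."""
    files = [("face_images", open(path, "rb")) for path in image_paths]
    data = {"gender": gender, "age": age}
    try:
        resp = requests.post(CLOUD_ENDPOINT, files=files, data=data, timeout=30)
        resp.raise_for_status()
        return resp.json()  # e.g. the comprehensive evaluation result
    finally:
        for _, handle in files:
            handle.close()
```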
In this embodiment, after receiving the face images uploaded by the skin detection mirror, the cloud server comprehensively processes the different face images shot under the different point light sources. Specifically, the processing in the cloud server is carried out per pixel point of the face image: the same pixel point differs between the different face images only in the light source direction of the point light source and in the resulting brightness, so the attributes of the different face images can be merged into the same pixel point; for example, the attributes of one pixel point include the light source directions and the brightness values of that pixel point in the different face images.
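The per-pixel merging of attributes described above can be sketched as follows; this is an assumed data layout (registered images stacked into one brightness array plus one light-direction matrix), not a structure prescribed by the patent.

```python
import numpy as np

def stack_face_images(images, light_dirs):
    """Merge q registered face images into per-pixel attributes.

    images     : list of q single-channel face images, each of shape H x W
    light_dirs : q x 3 array, one unit light-direction vector per image

    Returns an (H, W, q) brightness stack and the q x 3 light matrix, so that
    each pixel point carries its brightness under every point light source.
    """
    stack = np.stack(images, axis=-1).astype(np.float64)  # H x W x q
    L = np.asarray(light_dirs, dtype=np.float64)          # q x 3
    assert stack.shape[-1] == L.shape[0], "one light direction per face image"
    return stack, L
```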
In this embodiment, the cloud server processes each pixel point by a photometric stereo method to obtain unit normal vectors of different pixel points, and then obtains surface depth information of each pixel point according to the unit normal vector processing. After the surface depth information exists, the cloud server can establish a face three-dimensional image of the face according to the pixel points, namely, the face is subjected to three-dimensional image reconstruction.
In this embodiment, after the face stereo image of the human face is acquired, the face stereo image is evaluated by using a skin wrinkle evaluation model formed by pre-training, and a corresponding skin wrinkle evaluation result is output. And the cloud server remotely issues the evaluation result to the corresponding user terminal for the user to check. Specifically, in the skin wrinkle evaluation model, the input data is a stereo image of a face, and the output data is an evaluation result of a corresponding skin wrinkle region in the stereo image. The specific operation principle of the skin wrinkle evaluation model will be described in detail below.
After the evaluation result of the corresponding skin wrinkle area is obtained, a comprehensive evaluation is made by combining it with the human physiological characteristic information of the user;
the specific evaluation can be realized in the following way:
corresponding grades, namely evaluation scores, can be set in advance for the skin wrinkle evaluation result, and a normal age distribution for each gender can be set for each grade;
after the skin wrinkle evaluation result of the user's face image is acquired, the grade corresponding to the skin wrinkle evaluation value can be determined;
for example, for a skin wrinkle evaluation result of grade 1, the normal age distribution for one gender might be set to 18 to 23 years, while for the other gender an age distribution between 20 and 25 years would indicate normal;
the final output comprehensive evaluation result includes the skin wrinkle evaluation result and the corresponding normal age distribution, so that the user learns how well the skin has been cared for while seeing the wrinkle evaluation result; for example, if the user is a 40-year-old woman and the wrinkle evaluation result is grade 1, the skin has been cared for well.
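A minimal sketch of this comprehensive evaluation step is given below; the grade table, the age ranges and the wording of the notes are assumed placeholder values standing in for the manually defined statistics mentioned above.

```python
# (gender, wrinkle grade) -> assumed normal age range; illustrative values only
NORMAL_AGE_BY_GRADE = {
    ("female", 1): (18, 23),
    ("male",   1): (20, 25),
    # entries for further grades would be filled in from the preset statistics
}

def comprehensive_evaluation(wrinkle_grade, gender, age):
    """Combine the wrinkle grade with the user's physiological information."""
    age_range = NORMAL_AGE_BY_GRADE.get((gender, wrinkle_grade))
    if age_range is None:
        return {"grade": wrinkle_grade, "note": "no reference age distribution"}
    low, high = age_range
    if age <= high:
        note = "wrinkle condition is within the normal range for this age"
    else:
        note = "wrinkle condition is better than typical for this age"
    return {"grade": wrinkle_grade, "normal_age_range": age_range, "note": note}

# e.g. a 40-year-old woman graded level 1 is reported as better than typical
print(comprehensive_evaluation(1, "female", 40))
```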
In summary, in the technical solution of the present invention, the image acquisition device on the skin detection mirror shoots different face images of the same face under different light sources; the cloud server processes these face images by a photometric stereo method to finally form a stereo image of the face, sends the stereo image to the evaluation model for image recognition and wrinkle evaluation, and finally outputs the corresponding evaluation result. Compared with the prior art, the technical scheme of the invention establishes a three-dimensional image of the face by means of the photometric stereo method and the solution of the surface depth information, and performs wrinkle evaluation on that image, thereby truly reflecting the condition of the skin wrinkles on the user's face. Moreover, the skin detection function is integrated into a cosmetic mirror, which is convenient for the user's daily use.
It should be noted that, in consideration of the completeness of face image shooting, the skin detection mirror in the present invention should be a cosmetic mirror with a sufficient mirror surface area, such as a cosmetic mirror placed on a dressing table or a cosmetic mirror mounted directly on a bedroom wall, rather than one of the relatively small hand-held cosmetic mirrors available on the market.
In a preferred embodiment of the present invention, in step S1, the face image is obtained by shooting under at least 3 different light sources, and the light sources do not all lie on the same straight line.
Further, in a preferred embodiment of the present invention, the number of the light sources is specifically 12, that is, 12 light sources are disposed around the skin detection mirror, the directions of the light sources are different, and the light sources do not all lie on the same straight line.
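As a small illustration of why the light sources must not be degenerate (an assumed check, not part of the patent): photometric stereo can only recover a three-dimensional normal if the matrix of light directions has rank 3.

```python
import numpy as np

def lights_usable(light_dirs, tol=1e-6):
    """Return True if the q x 3 light-direction matrix has rank 3, i.e. the
    directions are not degenerate (for instance, not all along one line), so
    that the per-pixel normal vector can be recovered."""
    L = np.asarray(light_dirs, dtype=np.float64)
    return np.linalg.matrix_rank(L, tol=tol) == 3

# 12 LED beads arranged around the mirror in different directions would
# normally satisfy this condition.
```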
In a preferred embodiment of the present invention, as shown in fig. 2, the step S7 specifically includes:
step S71, one pixel point is taken as a pixel point to be processed;
step S72, determining whether the pixel to be processed is a highlight pixel:
if yes, go to step S73;
if not, go to step S74;
step S73, replacing it with the non-highlight pixel points at the same position in the other face images, and then turning to step S74;
step S74, processing to obtain surface normal vectors of the pixel points to be processed;
and step S75, processing according to the surface normal vector of the pixel point to obtain a unit normal vector of the pixel point.
Specifically, in this embodiment, because highlight pixel points prevent an accurate judgment of the skin wrinkle image, the highlight pixel points need to be removed before the facial image is reconstructed. The removal method is as follows: first, whether a pixel point is a highlight pixel point is judged by a threshold method (a pixel point whose brightness value is higher than a preset threshold is a highlight pixel point). Then, a pixel point judged to be highlight is replaced by the non-highlight pixel points at the same position in the other face images. Specifically, as described above, the attributes of the different face images already participate in the calculation as attributes of each pixel point; for example, the attribute values of one pixel point include its brightness values in the different face images. For one pixel point, rejecting the highlight observations therefore means discarding the brightness values higher than the preset threshold and keeping only the non-highlight brightness values.
In this embodiment, after the highlight pixels are eliminated, all the non-highlight pixels are sequentially processed to obtain the surface normal vector of each pixel, and further obtain the unit normal vector of each pixel.
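For one pixel point, the highlight rejection just described can be sketched as follows; the brightness stack comes from the earlier sketch, and the threshold value and function name are assumptions.

```python
import numpy as np

HIGHLIGHT_THRESHOLD = 240.0  # assumed preset threshold for 8-bit brightness values

def non_highlight_observations(pixel_brightness, light_dirs,
                               threshold=HIGHLIGHT_THRESHOLD):
    """Drop the observations (and their light directions) in which this pixel
    point is highlight, keeping only non-highlight brightness values for the
    normal-vector computation."""
    b = np.asarray(pixel_brightness, dtype=np.float64)  # length-q brightness vector
    keep = b < threshold                                # non-highlight mask
    if keep.sum() < 3:
        raise ValueError("fewer than 3 non-highlight observations for this pixel")
    return b[keep], np.asarray(light_dirs, dtype=np.float64)[keep]
```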
The process of solving the surface normal vector specifically includes:
for a color image, the luminance of each pixel is represented by the values of R, G, B three color channels. In this embodiment, the processing procedure of the surface normal vector is described by taking the R value as an example.
Assuming that the face surface in the invention conforms to an ideal Lambertian scattering model, the luminance equation of the pixel point should be:
I_R = ρ_R L · n_R;    (1)
wherein, I is used to represent the brightness of the pixel point, L is used to represent the direction vector of the light source, ρ is used to represent the texture reflectivity of the surface region corresponding to the pixel point, and n is used to represent the unit normal vector of the surface region corresponding to the pixel point.
The direction vectors of the light sources are known in advance; they can be obtained by shooting a group of images of a glossy black sphere under the different light source directions and locating the highlight point in each image, which gives the illumination direction that is then used as the light source direction of the corresponding point light source.
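A rough sketch of this sphere-based calibration is given below: the surface normal at the highlight point is recovered from the sphere geometry, and reflecting the viewing direction about that normal gives the light direction. The orthographic camera model and all names are assumptions; image-coordinate sign conventions may differ in practice.

```python
import numpy as np

def light_direction_from_sphere(highlight_xy, center_xy, radius_px):
    """Estimate one light direction from the highlight on a glossy black sphere.

    highlight_xy : (x, y) pixel position of the brightest point on the sphere
    center_xy    : (x, y) pixel position of the sphere center
    radius_px    : sphere radius in pixels (orthographic projection assumed)
    """
    dx = (highlight_xy[0] - center_xy[0]) / radius_px
    dy = (highlight_xy[1] - center_xy[1]) / radius_px
    nz = np.sqrt(max(0.0, 1.0 - dx * dx - dy * dy))  # visible hemisphere faces camera
    n = np.array([dx, dy, nz])                        # surface normal at the highlight
    v = np.array([0.0, 0.0, 1.0])                     # direction from surface to camera
    l = 2.0 * np.dot(n, v) * n - v                    # mirror reflection of the view ray
    return l / np.linalg.norm(l)
```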
In the above step, the brightness of the non-highlight pixel under the illumination of different light sources can be represented as:
I_R = (I_1R, I_2R, ..., I_qR)^T;    (2)
wherein q represents the number of light sources (the light sources under which the pixel point is highlight having been removed), and T denotes the transpose of the matrix.
Accordingly, the unit normal vector for each pixel point can be expressed as:
n_R = (n_1R, n_2R, ..., n_qR)^T;    (3)
by calculation, the direction vectors of the q light sources should be:
L = (L_1, L_2, ..., L_q)^T;    (4)
multiplying by L at both ends of the above equation (1) simultaneously-1It is possible to obtain:
L-1IR=ρR·nR; (5)
the modulus of the left vector of the equation of equation (5) above is the value of the texture reflectivity, and the direction of the left vector is the direction of the surface normal vector.
After the surface normal vector of the R channel of the pixel point is obtained, the unit normal vector of the R channel is calculated, namely:
n_R = (n_Ra, n_Rb, n_Rc)^T;    (6)
wherein, (a, b, c) is the vector direction coordinate of the pixel point in the normal vector space.
The unit normal vectors of the B channel and the G channel of the pixel point can be obtained in the same way as in formulas (1) to (6):
n_B = (n_Ba, n_Bb, n_Bc)^T;    (7)
n_G = (n_Ga, n_Gb, n_Gc)^T;    (8)
the unit normal vector of the final pixel point can be the average value of the unit normal vectors of the RGB channels, and is expressed as:
n=(na,nb,nc)T; (9)
the gradient (-n) of each pixel point can be calculated according to the formula (9)a/nc,-nb/nc) And a normal vector map of facial skin wrinkles can be established.
In a preferred embodiment of the present invention, the step S8 specifically includes:
and aiming at each pixel point, establishing a preset constraint formula according to the unit normal vector, and processing according to the constraint formula to obtain the surface depth information of the pixel point.
Further, based on the tangent plane principle, the normal direction of each point on the object surface should be perpendicular to the tangential direction, and then the following constraint formula can be established:
n · V1 = n · [(x+1, y, z(x+1, y)) - (x, y, z(x, y))] = 0;
n · V2 = n · [(x, y+1, z(x, y+1)) - (x, y, z(x, y))] = 0;
wherein V1 and V2 are both tangent vectors of the face surface at the pixel point, (x, y, z) is the three-dimensional coordinate of the pixel point on the face surface, (x, y) is the planar coordinate of the pixel point, that is, the position information of the plane of the pixel point on the face image (the position information described in step S9), and z is used to represent the depth value of the pixel point on the face image.
For a face image with m pixel points, 2m constraint equations can be obtained. The unit normal vector n of each pixel point is known, the planar position information x and y of each pixel point is also known, and the depth value z is the only unknown scalar; therefore the constraint equations of all pixel points form a matrix equation, and the surface depth information of each pixel point can be obtained by solving this matrix equation.
After the surface depth information of each pixel point is obtained, the three-dimensional coordinates (x, y, z) exist, so that a face three-dimensional image of a human face can be constructed and 3D display can be carried out.
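One way to realize the 2m constraint equations in code is sketched below: the gradients -n_a/n_c and -n_b/n_c from the normal map are turned into difference equations on the pixel grid and solved in the least-squares sense. This is an assumed solver choice, not necessarily the one used in the patent, and the recovered depth is only defined up to an additive constant.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def depth_from_normals(normals):
    """Recover surface depth z(x, y) from an H x W x 3 unit-normal map by
    solving z(x+1,y) - z(x,y) = -n_a/n_c and z(x,y+1) - z(x,y) = -n_b/n_c
    for every pixel point (the discrete form of n . V1 = 0 and n . V2 = 0)."""
    H, W, _ = normals.shape
    m = H * W
    idx = lambda y, x: y * W + x
    A = lil_matrix((2 * m, m))
    b = np.zeros(2 * m)
    row = 0
    for y in range(H):
        for x in range(W):
            na, nb, nc = normals[y, x]
            nc = nc if abs(nc) > 1e-8 else 1e-8       # guard against division by zero
            if x + 1 < W:                             # constraint along x (n . V1 = 0)
                A[row, idx(y, x + 1)] = 1.0
                A[row, idx(y, x)] = -1.0
                b[row] = -na / nc
            row += 1
            if y + 1 < H:                             # constraint along y (n . V2 = 0)
                A[row, idx(y + 1, x)] = 1.0
                A[row, idx(y, x)] = -1.0
                b[row] = -nb / nc
            row += 1
    z = lsqr(A.tocsr(), b)[0]                         # least-squares depth, up to a constant
    return z.reshape(H, W)
```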
In a preferred embodiment of the present invention, in the step S10, the skin wrinkle evaluation model includes: a first evaluation model for evaluating canthus wrinkles of a human face;
the first evaluation model is formed by adopting deep neural network learning according to a first training set prepared in advance;
the first training set includes a plurality of first training data pairs, each of which includes stereoscopic image data of a human face having different shapes of eye corner wrinkles and evaluation scores of the corresponding stereoscopic image data.
Specifically, in this embodiment, a plurality of first training data pairs are prepared in advance, each first training data pair includes a stereoscopic image of an eye corner wrinkle, and an evaluation score obtained by manually evaluating the stereoscopic image. The greater the number of first training data pairs, the more accurate the evaluation result of the first evaluation model formed by training.
In this embodiment, the first evaluation model is specifically used for evaluating the canthus wrinkles of the human face. In the practical application process, the first evaluation model firstly finds out a stereoscopic image of canthus wrinkles from the whole face stereoscopic image, then evaluates the stereoscopic image of canthus wrinkles, and finally outputs an evaluation result, wherein the evaluation result can be given in a mode of evaluation scores.
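The patent only states that the first evaluation model is obtained by deep neural network learning on pairs of stereo image data and manual evaluation scores; the PyTorch sketch below is therefore one assumed realization of such a regression network, with hypothetical layer sizes and names.

```python
import torch
import torch.nn as nn

class WrinkleScoreNet(nn.Module):
    """Assumed regression network: a depth-map crop of the eye-corner region
    goes in, a single wrinkle evaluation score comes out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 4 * 4, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):  # x: (batch, 1, H, W) eye-corner depth crops
        return self.head(self.features(x)).squeeze(-1)

def train_first_model(loader, epochs=10, lr=1e-3):
    """loader yields (depth_crop, manual_score) pairs from the first training set."""
    model = WrinkleScoreNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for crops, scores in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(crops), scores.float())
            loss.backward()
            optimizer.step()
    return model
```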
In another preferred embodiment of the present invention, in the step S10, the skin wrinkle evaluation model includes: a second evaluation model for evaluating the nasolabial folds of the face;
the second evaluation model is formed by deep neural network learning according to a second training set prepared in advance;
the second training set comprises a plurality of second training data pairs, and each second training data pair comprises stereo image data of faces with nasolabial folds of different shapes and the evaluation scores of the corresponding stereo image data.
Specifically, in this embodiment, a plurality of second training data pairs are prepared in advance, and each second training data pair includes a stereoscopic image of nasolabial folds and an evaluation score obtained after the stereoscopic image is manually evaluated. The larger the number of second training data pairs, the more accurate the evaluation result of the second evaluation model formed by training.
In this embodiment, the second evaluation model is specifically used for evaluating the nasolabial folds of the face. In practical application, the second evaluation model first finds the stereo image of the nasolabial folds from the whole face stereo image, then evaluates that stereo image, and finally outputs an evaluation result, which can also be given in the form of an evaluation score.
In another preferred embodiment of the present invention, the first evaluation model and the second evaluation model may be used in the skin wrinkle evaluation model at the same time, so that the conditions of both the canthus wrinkles and the nasolabial folds of the user can be comprehensively evaluated.
In a preferred embodiment of the present invention, based on the above skin wrinkle evaluation method based on voice recognition, there is now provided a skin wrinkle evaluation system A based on voice recognition, specifically as shown in fig. 3, comprising:
the skin detection mirror A1 is characterized in that an image acquisition device A11 is arranged in the skin detection mirror A1 and is used for respectively shooting different face images related to the same face by adopting the image acquisition device A11 under the irradiation of different light sources;
a skin wrinkle evaluation system a based on speech recognition, comprising:
the skin detection mirror A1 is characterized in that an image acquisition device A11, a voice prompt device A12, a voice acquisition device A13, a voice recognition device A14 and a data processing device A15 are arranged in the skin detection mirror 1, and the image acquisition device A11, the voice prompt device A12, the voice acquisition device A13 and the voice recognition device A14 are respectively connected with the data processing device A15;
the image acquisition device A11 is used for respectively shooting different face images related to the same face by adopting the image acquisition device A11 under the irradiation of different light sources;
the voice prompt device a12 is used to play a preset voice prompt for the user to answer the message according to the voice prompt,
the voice acquisition device A13 is used for acquiring the voice signal of the acquired answer information and outputting the voice signal to the voice recognition device A14;
the voice recognition device A14 is used for recognizing the human physiological information in the voice signal;
the cloud server A2 is remotely connected with the skin detection mirror A1, the data processing device A14 in the skin detection mirror A1 sends the human physiological information and the human face image as evaluation objects to the cloud server A2 in the cloud server A2, the human face image is evaluated according to the human face image by adopting the above skin wrinkle evaluation method, and the evaluation result is output to the user terminal B remotely connected with the cloud server A2 for the user to check.
Further, as shown in fig. 4, which is a schematic diagram of the mirror surface of the skin detection mirror a1, a plurality of LED lamp beads a13 can further be arranged around the mirror body a12 of the skin detection mirror a1 as point light sources in different directions, to assist the image acquisition device a11 in shooting the human face.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (7)

1. A skin wrinkle evaluation method based on voice recognition is characterized in that a skin detection mirror is adopted to evaluate wrinkles of skin of a human face, the skin detection mirror comprises an image acquisition device, a voice prompt device, a voice acquisition device, a voice recognition device and a data processing device, the skin detection mirror is remotely connected with a cloud server,
further comprising:
step S1, under the irradiation of different light sources, the image acquisition device is adopted to respectively shoot different face images related to the same face;
step S2, the data processing device sends a prompt instruction to the voice prompt device after the image acquisition device acquires the face image;
step S3, after receiving the prompt instruction, the voice prompt device plays a preset voice prompt;
step S4, after the voice prompt is played, the data processing device sends a starting instruction to the voice acquisition device to put the voice acquisition device into a working state;
step S5, the user answers according to the voice prompt, and the voice acquisition device collects the voice signal of the answer information and outputs the voice signal to the voice recognition device;
step S6, the voice recognition device recognizes the human physiological information in the voice signal, and the data processing device sends the human physiological information and the face image together to the cloud server as an evaluation object;
step S7, the cloud server obtains a unit normal vector of each pixel point in the face image through photometric stereo processing;
step S8, the cloud server processes the unit normal vector to obtain surface depth information of each pixel point;
step S9, the cloud server forms a face three-dimensional image of a face according to the surface depth information of each pixel point and the position information of each pixel point on the face image;
step S10, the cloud server judges the face three-dimensional image by adopting a skin wrinkle evaluation model formed by pre-training so as to obtain a corresponding skin wrinkle evaluation result;
step S11, the cloud server makes a comprehensive evaluation result according to the wrinkle evaluation result and the reference person physiological characteristic information and outputs the comprehensive evaluation result to a user terminal remotely connected with the cloud server for a user to check;
in the step S10, the skin wrinkle evaluation model includes: a first evaluation model for evaluating canthus wrinkles of a human face;
the first evaluation model is formed by adopting deep neural network learning according to a first training set prepared in advance;
the first training set comprises a plurality of first training data pairs, and each first training data pair comprises stereo image data of a human face with different shapes of eye corner wrinkles and evaluation scores corresponding to the stereo image data;
in the step S10, the skin wrinkle evaluation model includes: a second evaluation model for evaluating the nasolabial folds of the face;
the second evaluation model is formed by adopting deep neural network learning according to a second training set prepared in advance;
the second training set comprises a plurality of second training data pairs, and each second training data pair comprises stereo image data of faces with nasolabial folds of different shapes and evaluation scores corresponding to the stereo image data;
the skin wrinkle evaluation model can simultaneously adopt the first evaluation model and the second evaluation model to evaluate the canthus wrinkles and the nasolabial folds of the human face.
2. The method for evaluating skin wrinkles based on speech recognition according to claim 1, wherein in said step S1, said face image is captured under at least 3 different light sources, and said light sources do not all lie on the same straight line.
3. The voice recognition-based skin wrinkle evaluation method according to claim 2, wherein the number of light sources is 12.
4. The method for evaluating skin wrinkles based on speech recognition according to claim 1, wherein said step S7 specifically comprises:
step S71, one pixel point is taken as a pixel point to be processed;
step S72, determining whether the pixel to be processed is a highlight pixel:
if yes, go to step S73;
if not, go to step S74;
s73, replacing the non-highlight pixel points at the same position in different face images, and then turning to S74;
step S74, processing to obtain a surface normal vector of the pixel point to be processed;
and step S75, processing according to the surface normal vector of the pixel point to obtain a unit normal vector of the pixel point.
5. The method for evaluating skin wrinkles based on speech recognition according to claim 1, wherein said step S8 specifically comprises:
and aiming at each pixel point, establishing a preset constraint formula according to the unit normal vector, and processing according to the constraint formula to obtain the surface depth information of the pixel point.
6. The method for skin wrinkle assessment based on speech recognition according to claim 5, characterized in that said constraint formula is specifically:
n · V1 = n · [(x+1, y, z(x+1, y)) - (x, y, z(x, y))] = 0;
n · V2 = n · [(x, y+1, z(x, y+1)) - (x, y, z(x, y))] = 0;
wherein,
V1 and V2 are both tangent vectors of the face surface of the human face;
n is used to represent the unit normal vector of the pixel point;
x and y are used for representing the position information of the plane of the pixel point on the face image;
z is used to represent the surface depth information.
7. A system for evaluating skin wrinkles based on speech recognition, comprising:
the skin detection mirror is internally provided with an image acquisition device, a voice prompt device, a voice acquisition device, a voice recognition device and a data processing device, and the image acquisition device, the voice prompt device, the voice acquisition device and the voice recognition device are respectively connected with the data processing device;
the image acquisition device is used for respectively shooting different face images related to the same face under the irradiation of different light sources;
the voice prompt device is used for playing a preset voice prompt so that the user can answer according to the voice prompt;
the voice acquisition device is used for collecting the voice signal of the answer information and outputting the voice signal to the voice recognition device;
the voice recognition device is used for recognizing the human physiological information in the voice signal;
the cloud server is remotely connected with the skin detection mirror, and the data processing device of the skin detection mirror sends the human physiological information and the face image to the cloud server as an evaluation object;
the cloud server adopts the skin wrinkle evaluation method according to any one of claims 1 to 6, evaluates the skin wrinkles of the human face according to the human face image, and outputs the evaluation result to a user terminal remotely connected with the cloud server for the user to view.
CN201810085453.7A 2018-01-29 2018-01-29 Skin wrinkle evaluation method and system based on voice recognition Active CN108154142B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810085453.7A CN108154142B (en) 2018-01-29 2018-01-29 Skin wrinkle evaluation method and system based on voice recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810085453.7A CN108154142B (en) 2018-01-29 2018-01-29 Skin wrinkle evaluation method and system based on voice recognition

Publications (2)

Publication Number Publication Date
CN108154142A CN108154142A (en) 2018-06-12
CN108154142B true CN108154142B (en) 2021-02-26

Family

ID=62459183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810085453.7A Active CN108154142B (en) 2018-01-29 2018-01-29 Skin wrinkle evaluation method and system based on voice recognition

Country Status (1)

Country Link
CN (1) CN108154142B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109381165B (en) * 2018-09-12 2022-05-03 维沃移动通信有限公司 Skin detection method and mobile terminal
CN117033688B (en) * 2023-08-11 2024-03-12 翡梧(上海)创意设计有限公司 Character image scene generation system based on AI interaction

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106225174A (en) * 2016-08-22 2016-12-14 珠海格力电器股份有限公司 Air-conditioner control method and system and air-conditioner
CN106650215A (en) * 2016-10-11 2017-05-10 武汉嫦娥医学抗衰机器人股份有限公司 Skin type detection and individuation evaluation system and method based on cloud platform
CN107184023A (en) * 2017-07-18 2017-09-22 上海勤答信息科技有限公司 A kind of Intelligent mirror

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9020250B2 (en) * 2011-09-19 2015-04-28 Haileo, Inc. Methods and systems for building a universal dress style learner
CN107392858B (en) * 2017-06-16 2020-09-29 Oppo广东移动通信有限公司 Image highlight area processing method and device and terminal equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106225174A (en) * 2016-08-22 2016-12-14 珠海格力电器股份有限公司 Air-conditioner control method and system and air-conditioner
CN106650215A (en) * 2016-10-11 2017-05-10 武汉嫦娥医学抗衰机器人股份有限公司 Skin type detection and individuation evaluation system and method based on cloud platform
CN107184023A (en) * 2017-07-18 2017-09-22 上海勤答信息科技有限公司 A kind of Intelligent mirror

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Prototype of Photometric Stereo System using 6 and 9 LED light source to reconstruct human skin texture";E. Juliastuti等;《Applied Mechanics and Materials》;20150702;第771卷;第72-75页 *

Also Published As

Publication number Publication date
CN108154142A (en) 2018-06-12

Similar Documents

Publication Publication Date Title
CN108324247B (en) Method and system for evaluating skin wrinkles at specified positions
CN106469302B (en) A kind of face skin quality detection method based on artificial neural network
WO2020207423A1 (en) Skin type detection method, skin type grade classification method and skin type detection apparatus
CN109583285B (en) Object recognition method
CN108764071B (en) Real face detection method and device based on infrared and visible light images
CN105718869B (en) The method and apparatus of face face value in a kind of assessment picture
EP2987138B1 (en) Active stereo with adaptive support weights from a separate image
CN106372629B (en) Living body detection method and device
US11710289B2 (en) Information processing apparatus, information processing system, and material identification method
CN111166290A (en) Health state detection method, equipment and computer storage medium
CN105303151B (en) The detection method and device of human face similarity degree
CN107018323B (en) Control method, control device and electronic device
CN107085654B (en) Health analysis method and device based on face image
CN112818722B (en) Modular dynamic configurable living body face recognition system
CA2794659A1 (en) Apparatus and method for iris recognition using multiple iris templates
US9412054B1 (en) Device and method for determining a size of in-vivo objects
CN110363087B (en) Long-baseline binocular face in-vivo detection method and system
CN108154142B (en) Skin wrinkle evaluation method and system based on voice recognition
CN108363964A (en) A kind of pretreated wrinkle of skin appraisal procedure and system
CN108509857A (en) Human face in-vivo detection method, electronic equipment and computer program product
CN110874572B (en) Information detection method and device and storage medium
CN103020589A (en) Face recognition method for single training sample
US10402996B2 (en) Distance measuring device for human body features and method thereof
CN114612960A (en) Method and device for traditional Chinese medicine health management through facial image
Trivedi et al. Height estimation of children under five years using depth images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211207

Address after: 310000 room 1-304, building 2, Xinming commercial center, Gongshu District, Hangzhou City, Zhejiang Province

Patentee after: Hangzhou Taitai Enterprise Management Co.,Ltd.

Address before: Room 303, building 6, 156 Wuchang Avenue, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province 310000

Patentee before: HANGZHOU MEIJIE TECHNOLOGY Co.,Ltd.
