CN114549434B - Skin quality detection device based on cloud calculates - Google Patents


Info

Publication number
CN114549434B
CN114549434B (application CN202210119537.4A)
Authority
CN
China
Prior art keywords
image
coordinates
left eye
right eye
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210119537.4A
Other languages
Chinese (zh)
Other versions
CN114549434A (en)
Inventor
成先桂
陆伟
谭美乐
邓桂艳
吴玲艳
覃文飞
杨猛
唐平
潘延斌
李建民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Second Peoples Hospital of Nanning
Original Assignee
Second Peoples Hospital of Nanning
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Second Peoples Hospital of Nanning
Priority to CN202210119537.4A
Publication of CN114549434A
Application granted
Publication of CN114549434B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Abstract

The invention provides a skin quality detection device based on cloud computing, comprising an image shooting module, a cloud computing processing module and a doctor terminal module. The image shooting module comprises a shooting submodule and a transmission submodule; the shooting submodule comprises a shooting unit and a prompting unit. The shooting unit is used for acquiring a first face image of a client; the prompting unit is used for judging whether the head of the client meets a preset position requirement and, if not, sending prompting information to the client. The shooting unit is also used for acquiring a second face image of the client. The transmission submodule is used for transmitting the second face image of the client to the cloud computing processing module. The cloud computing processing module is used for acquiring a skin quality detection report of the client and transmitting the detection report to the doctor terminal module, and the doctor terminal module is used for displaying the detection report. The invention helps reduce the influence of image quality on the speed of skin quality detection.

Description

Skin quality detection device based on cloud calculates
Technical Field
The invention relates to the field of skin quality detection, in particular to a skin quality detection device based on cloud computing.
Background
With the development of image recognition technology, more and more medical and cosmetic institutions use image recognition to perform preliminary detection of skin quality, helping a cosmetologist quickly gain an initial understanding of a client's skin and facilitating subsequent skin management work. However, a conventional skin quality detection device does not prompt the client while acquiring the face image, so that in the finally acquired head image the angle between the horizontal direction and the line connecting the client's eyes may be relatively large, which slows down skin quality detection and is not conducive to quickly obtaining results on the quality of the client's facial skin.
Disclosure of Invention
The invention aims to disclose a skin quality detection device based on cloud computing, solving the prior-art problem that, when a client's skin quality is judged through image recognition, the client is not prompted; as a result, in the finally obtained head image the angle between the line connecting the client's two eyes and the horizontal direction is large, which slows down skin quality detection.
In order to achieve the purpose, the invention adopts the following technical scheme:
a skin quality detection device based on cloud computing comprises an image shooting module, a cloud computing processing module and a doctor terminal module;
the image shooting module comprises a shooting submodule and a transmission submodule;
the shooting submodule comprises a shooting unit and a prompting unit; the shooting unit is used for acquiring a first face image of a client at fixed time intervals; the prompting unit is used for judging whether the head of the client meets a preset position requirement according to the length between the left eye and the right eye in the first face image and the positional relationship between the left eye and the right eye, and, if not, sending prompting information to the client according to that length and positional relationship;
the shooting unit is also used for acquiring a second face image of the client when the head of the client meets the preset position requirement;
the transmission sub-module is used for transmitting the second face image of the client to the cloud computing processing module;
the cloud computing processing module is used for inputting a second face image of the client into a pre-trained neural network model for recognition, acquiring a skin quality detection report of the client and transmitting the detection report to the doctor terminal module;
the doctor terminal module is used for displaying the detection report.
Preferably, the judging that the head of the client meets the preset position requirement according to the length between the left eye and the right eye of the first face image and the position relationship between the left eye and the right eye includes:
carrying out graying processing on the first face image to obtain a grayscale image;
adjusting the gray level image to obtain an adjusted image;
performing image segmentation processing on the adjustment image to obtain a face region image;
carrying out human eye detection on the face region image to obtain the coordinates of the left eye and the coordinates of the right eye in the face region image;
calculating the length of a connecting line between the coordinates of the left eye and the coordinates of the right eye;
determining a positional relationship between the coordinates of the left eye and the coordinates of the right eye;
and judging whether the head of the client meets the preset position requirement or not according to the length and the position relation.
Preferably, the performing the graying processing on the first face image to obtain a grayscale image includes:
graying the first face image using the following formula:
G(x,y)=a×R(x,y)+b×G(x,y)+c×B(x,y)
the pixel value of the pixel point with the coordinate (x, y) in the RGB color space is represented in the red component image, the green component image and the blue component image, and the pixel value of the pixel point with the coordinate (x, y) in the gray image G is represented in the G (x, y).
Preferably, the adjusting the grayscale image to obtain an adjusted image includes:
the adjustment processing is performed on the grayscale image using:
[Adjustment formula, shown in the original only as an image: Figure BDA0003497879900000021]
wherein aG represents the adjusted image; α and β represent preset weight coefficients; aG(x,y) and G(x,y) respectively represent the pixel values of the pixel point with coordinates (x,y) in the adjusted image and the grayscale image; g_1 and g_2 respectively represent a preset first constant coefficient and a preset second constant coefficient; msthr represents a preset proximity-coefficient reference value; and ms(x,y) represents the proximity coefficient of the pixel point with coordinates (x,y) in the grayscale image:
[Proximity-coefficient formula, shown in the original only as an image: Figure BDA0003497879900000022]
wherein PT(x,y) represents the set of coordinates of the pixel points in the 8-neighborhood of the pixel point with coordinates (x,y) in the grayscale image; gi(i,j) represents the standard deviation of the pixel values in the 8-neighborhood of the pixel point with coordinates (i,j) in the grayscale image; mi(i,j) represents the maximum pixel value in that 8-neighborhood; G(i,j) represents the pixel value of the pixel point with coordinates (i,j) in the grayscale image; malv represents the number of distinct gray levels among the pixel points whose coordinates are in PT(x,y); and Γ represents an exponential coefficient:
[Exponential-coefficient formula, shown in the original only as an image: Figure BDA0003497879900000031]
preferably, the performing image segmentation processing on the adjustment image to obtain a face region image includes:
carrying out partition processing on the adjusted image to obtain NM sub-regions;
respectively carrying out image segmentation processing on each subregion by using a threshold segmentation algorithm to obtain a foreground pixel point of each subregion;
and forming a face region image according to all the foreground pixel points.
Preferably, the performing human eye detection on the face region image to obtain coordinates of a left eye and coordinates of a right eye in the face region image includes:
carrying out human eye detection on the face region image by using a human eye detection algorithm to respectively obtain a left eye pixel point set G1 and a right eye pixel point set G2;
calculating a first average coordinate of the pixel points in G1, and taking the first average coordinate as a coordinate of a left eye in the face area image;
and calculating second average coordinates of the pixel points in the G2, and taking the second average coordinates as coordinates of the right eye in the face area image.
Preferably, the calculating the length of the connection line between the coordinates of the left eye and the coordinates of the right eye includes:
the length of the link between the coordinates of the left eye and the coordinates of the right eye is calculated using the following formula:
length = √((x_lf − x_rg)² + (y_lf − y_rg)²)
where length represents the length of the line between the coordinates of the left eye and the coordinates of the right eye, (x_lf, y_lf) represents the coordinates of the left eye, and (x_rg, y_rg) represents the coordinates of the right eye.
Preferably, the determining the position relationship between the coordinates of the left eye and the coordinates of the right eye includes:
if y_lf > y_rg, the positional relationship between the coordinates of the left eye and the coordinates of the right eye is a first-type relationship;
if y_lf = y_rg, the positional relationship between the coordinates of the left eye and the coordinates of the right eye is a second-type relationship;
if y_lf < y_rg, the positional relationship between the coordinates of the left eye and the coordinates of the right eye is a third-type relationship.
The invention has the following beneficial effects:
the client is prompted according to the length of the connecting line between the left eye and the right eye of the client and the position relation between the left eye and the right eye of the client, so that the high-quality face image can be acquired, and the influence of the quality of the image on the skin quality detection speed is avoided.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, without inventive effort, further drawings may be derived from the following figures.
Fig. 1 is a diagram of an exemplary embodiment of a skin quality detection apparatus based on cloud computing according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention and are not to be construed as limiting the present invention.
In one embodiment shown in fig. 1, the invention provides a skin quality detection device based on cloud computing, which comprises an image shooting module, a cloud computing processing module and a doctor terminal module;
the image shooting module comprises a shooting submodule and a transmission submodule;
the shooting submodule comprises a shooting unit and a prompting unit; the shooting unit is used for acquiring a first face image of a client at fixed time intervals; the prompting unit is used for judging whether the head of the client meets a preset position requirement according to the length between the left eye and the right eye of the first face image and the position relation between the left eye and the right eye, and if not, sending prompting information to the client according to the length between the left eye and the right eye in the first face image and the position relation between the left eye and the right eye;
the shooting unit is also used for acquiring a second face image of the client when the head of the client meets the preset position requirement;
the transmission sub-module is used for transmitting the second face image of the customer to the cloud computing processing module;
the cloud computing processing module is used for inputting a second face image of the client into a pre-trained neural network model for recognition, acquiring a skin quality detection report of the client and transmitting the detection report to the doctor terminal module;
and the doctor terminal module is used for displaying the detection report.
The client is prompted according to the length of the connecting line between the left eye and the right eye of the client and the position relation between the left eye and the right eye of the client, so that the high-quality face image can be acquired, and the influence of the quality of the image on the skin quality detection speed is avoided.
Specifically, when the length of the line between the client's left eye and right eye in the face image is too long or too short, the quality of the face image is low. If the length is too long, the client is too far from the shooting submodule, which hinders subsequent recognition in the cloud computing processing module; if it is too short, the client is too close to the shooting submodule, which hinders acquisition of a complete face image. When the left eye is higher than the right eye, or the right eye higher than the left, the client's head is tilted, so the cloud computing processing module would need to rotate the image before producing the skin quality detection report, which obviously slows down processing.
Specifically, the doctor terminal module may be a computer, a tablet, a smartphone, or the like.
Preferably, the skin quality detection report includes the type and location of skin defects present in the second face image;
Types of skin imperfections include pockmarks, freckles, enlarged pores, and the like. The location is a position on the face, for example a pockmark below the left eye.
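As an illustration only, a skin quality detection report of this kind could be modeled as a small data structure. All names below (SkinDefect, SkinQualityReport, the defect kinds) are hypothetical and not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SkinDefect:
    kind: str                  # e.g. "pockmark", "freckle", "large pore"
    location: Tuple[int, int]  # (x, y) position of the defect on the face

@dataclass
class SkinQualityReport:
    client_id: str
    defects: List[SkinDefect] = field(default_factory=list)

    def summary(self) -> str:
        # Short text a doctor terminal module could display
        return f"{len(self.defects)} defect(s) detected"

report = SkinQualityReport("client-001")
report.defects.append(SkinDefect("pockmark", (120, 260)))
```

A doctor terminal would then only need to render `report.defects` over the face image.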
Preferably, the judging that the head of the client meets the preset position requirement according to the length between the left eye and the right eye of the first face image and the position relationship between the left eye and the right eye includes:
carrying out graying processing on the first face image to obtain a grayscale image;
adjusting the gray level image to obtain an adjusted image;
performing image segmentation processing on the adjustment image to obtain a face region image;
carrying out human eye detection on the face region image to obtain the coordinates of the left eye and the coordinates of the right eye in the face region image;
calculating the length of a connecting line between the coordinates of the left eye and the coordinates of the right eye;
determining a positional relationship between the coordinates of the left eye and the coordinates of the right eye;
and judging whether the head of the client meets the preset position requirement or not according to the length and the position relation.
In this embodiment, adjusting the image effectively reduces the influence of illumination factors on the judgment of whether the head of the client meets the preset position requirement.
Preferably, the performing the graying processing on the first face image to obtain a grayscale image includes:
graying the first face image using the following formula:
G(x,y)=a×R(x,y)+b×G(x,y)+c×B(x,y)
the pixel value of the pixel point with the coordinate (x, y) in the RGB color space is represented in the red component image, the green component image and the blue component image, and the pixel value of the pixel point with the coordinate (x, y) in the gray image G is represented in the G (x, y).
Preferably, the adjusting the grayscale image to obtain an adjusted image includes:
the adjustment processing is performed on the grayscale image using:
[Adjustment formula, shown in the original only as an image: Figure BDA0003497879900000051]
wherein aG represents the adjusted image; α and β represent preset weight coefficients; aG(x,y) and G(x,y) respectively represent the pixel values of the pixel point with coordinates (x,y) in the adjusted image and the grayscale image; g_1 and g_2 respectively represent a preset first constant coefficient and a preset second constant coefficient; msthr represents a preset proximity-coefficient reference value; and ms(x,y) represents the proximity coefficient of the pixel point with coordinates (x,y) in the grayscale image:
[Proximity-coefficient formula, shown in the original only as an image: Figure BDA0003497879900000052]
wherein PT(x,y) represents the set of coordinates of the pixel points in the 8-neighborhood of the pixel point with coordinates (x,y) in the grayscale image; gi(i,j) represents the standard deviation of the pixel values in the 8-neighborhood of the pixel point with coordinates (i,j) in the grayscale image; mi(i,j) represents the maximum pixel value in that 8-neighborhood; G(i,j) represents the pixel value of the pixel point with coordinates (i,j) in the grayscale image; malv represents the number of distinct gray levels among the pixel points whose coordinates are in PT(x,y); and Γ represents an exponential coefficient:
[Exponential-coefficient formula, shown in the original only as an image: Figure BDA0003497879900000061]
in the embodiment, the gray image is adjusted through the difference between the pixel value of the pixel point and the pixel value between the pixels around the pixel point, so that the influence on the accuracy of eye identification when the illumination is unevenly distributed is favorably reduced in a balanced manner, and the accuracy of position prompt of a client is favorably improved. When the difference of the peripheral pixel values is considered, calculation is carried out from the aspects of a proximity coefficient reference value, a standard deviation, a maximum value, the number of pixel levels and the like, and the reduction of the balanced illumination distribution is facilitated.
Preferably, the image segmentation processing on the adjustment image to obtain the face region image includes:
carrying out partition processing on the adjusted image to obtain NM sub-regions;
respectively carrying out image segmentation processing on each subregion by using a threshold segmentation algorithm to obtain a foreground pixel point of each subregion;
and forming a face region image according to all the foreground pixel points.
Preferably, the partitioning of the adjusted image to obtain NM sub-regions includes:
partitioning the adjusted image in a multi-pass manner:
in the 1st pass, dividing the adjusted image into H×H sub-regions of equal area, where 2 ≤ H ≤ 4;
storing all sub-regions in a set dtdu_1;
judging, for each sub-region in dtdu_1, whether it needs to be partitioned again; storing all sub-regions that need re-partitioning in a set du_1 and all sub-regions that do not in a set du_fi;
in the k-th pass, dividing each element of the set du_{k−1} into H×H sub-regions of equal area and storing all resulting sub-regions in a set dtdu_k;
judging, for each sub-region in dtdu_k, whether it needs to be partitioned again; storing all sub-regions that need re-partitioning in the set du_k and all sub-regions that do not in the set du_fi;
if du_k is an empty set, stopping the partitioning of the adjusted image; the number of elements then contained in du_fi is recorded as NM;
for a sub-region sbk, whether it needs to be partitioned again is judged as follows:
calculating the partition index of sbk:
[Partition-index formula, shown in the original only as an image: Figure BDA0003497879900000071]
wherein idc(sbk) denotes the partition index of sbk; δ_1 and δ_2 represent importance-degree parameters, with δ_1 + δ_2 = 1; nfah(sbk) represents the number of pixel points in sbk whose pixel values are greater than a preset pixel-value threshold; tot(sbk) represents the total number of pixel points in sbk; u(sbk) represents the set of pixel points in sbk; aT(q) represents the gradient of pixel point q; and stgd represents a preset gradient standard value;
if the partition index is greater than a preset partition-index threshold, the sub-region sbk needs to be partitioned again.
In this embodiment, the adjusted image is divided into several sub-regions through multiple partitioning passes, and threshold segmentation is then applied to each sub-region, which effectively improves segmentation accuracy. The image is not simply divided into equal-area sub-regions in a single pass; instead, it is partitioned repeatedly, with the partition index deciding whether a sub-region is partitioned further. Sub-regions containing many pixels above the preset pixel-value threshold, or with large internal variation, are partitioned further, so that applying the threshold segmentation algorithm to each resulting sub-region yields more accurate results.
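A simplified sketch of the multi-pass partitioning, under stated assumptions: the real partition index combines a bright-pixel term and a gradient term weighted by δ_1 and δ_2, but since that formula appears only as an image, the index below keeps only the bright-pixel ratio (i.e. δ_1 = 1, δ_2 = 0), and every threshold value is invented for illustration. A minimum-size guard replaces the patent's unspecified stopping detail:

```python
H = 2                   # sub-division factor, 2 <= H <= 4 per the patent
PIXEL_THRESHOLD = 128   # preset pixel-value threshold (assumed)
IDC_THRESHOLD = 0.25    # preset partition-index threshold (assumed)
MIN_SIZE = 2            # practical guard: stop splitting tiny regions

def idc(img, x0, y0, x1, y1):
    """Simplified partition index: fraction of pixels above PIXEL_THRESHOLD."""
    pixels = [img[i][j] for i in range(x0, x1) for j in range(y0, y1)]
    nfah = sum(1 for p in pixels if p > PIXEL_THRESHOLD)
    return nfah / len(pixels)

def partition(img, x0, y0, x1, y1, out):
    """Recursively split [x0,x1) x [y0,y1) into H*H parts; leaves go to `out`."""
    w, h = x1 - x0, y1 - y0
    splittable = w >= H * MIN_SIZE and h >= H * MIN_SIZE
    if splittable and idc(img, x0, y0, x1, y1) > IDC_THRESHOLD:
        for a in range(H):
            for b in range(H):
                partition(img,
                          x0 + a * w // H, y0 + b * h // H,
                          x0 + (a + 1) * w // H, y0 + (b + 1) * h // H,
                          out)
    else:
        out.append((x0, y0, x1, y1))  # final sub-region (joins the set du_fi)

sub_regions = []
bright4 = [[200] * 4 for _ in range(4)]
partition(bright4, 0, 0, 4, 4, sub_regions)
```

On the bright 4×4 example every pixel exceeds the threshold, so the image splits once into four 2×2 leaves; a uniformly dark image would stay whole.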
Preferably, the performing human eye detection on the face region image to obtain the coordinates of the left eye and the coordinates of the right eye in the face region image includes:
carrying out human eye detection on the face region image by using a human eye detection algorithm to respectively obtain a left-eye pixel point set G1 and a right-eye pixel point set G2;
calculating a first average coordinate of the pixel points in G1, and taking the first average coordinate as a coordinate of a left eye in the face area image;
and calculating a second average coordinate of the pixel points in G2, and taking the second average coordinate as the coordinate of the right eye in the face area image.
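The averaging of the detected pixel sets G1 and G2 can be sketched as follows; the pixel coordinates are illustrative, standing in for the output of whatever human eye detection algorithm is used:

```python
def average_coordinate(pixel_set):
    """Mean (x, y) over a set of pixel coordinates, e.g. detected eye pixels."""
    n = len(pixel_set)
    return (sum(x for x, _ in pixel_set) / n,
            sum(y for _, y in pixel_set) / n)

G1 = [(10, 20), (12, 20), (14, 26)]   # left-eye pixel set (illustrative)
G2 = [(40, 21), (42, 19), (44, 23)]   # right-eye pixel set (illustrative)
left_eye = average_coordinate(G1)     # first average coordinate
right_eye = average_coordinate(G2)    # second average coordinate
```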
Preferably, the calculating the length of a connection line between the coordinates of the left eye and the coordinates of the right eye includes:
the length of the link between the coordinates of the left eye and the coordinates of the right eye is calculated using the following formula:
length = √((x_lf − x_rg)² + (y_lf − y_rg)²)
wherein length represents the length of the line between the coordinates of the left eye and the coordinates of the right eye, (x_lf, y_lf) represents the coordinates of the left eye, and (x_rg, y_rg) represents the coordinates of the right eye.
Preferably, the determining the position relationship between the coordinates of the left eye and the coordinates of the right eye includes:
if y_lf > y_rg, the positional relationship between the coordinates of the left eye and the coordinates of the right eye is a first-type relationship;
if y_lf = y_rg, the positional relationship between the coordinates of the left eye and the coordinates of the right eye is a second-type relationship;
if y_lf < y_rg, the positional relationship between the coordinates of the left eye and the coordinates of the right eye is a third-type relationship.
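The length computation and the three-way classification above can be sketched together; the relation labels simply follow the patent's comparison of the two y-coordinates:

```python
import math

def eye_line_length(left, right):
    """Euclidean length of the line between the two eye coordinates."""
    (xlf, ylf), (xrg, yrg) = left, right
    return math.hypot(xlf - xrg, ylf - yrg)

def position_relation(left, right):
    """Classify the eye positions per the patent's y-coordinate comparison."""
    ylf, yrg = left[1], right[1]
    if ylf > yrg:
        return "first"    # first-type relationship
    if ylf == yrg:
        return "second"   # second-type relationship: eyes are level
    return "third"        # third-type relationship
```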
Preferably, the determining whether the head of the client meets a preset position requirement according to the length between the left eye and the right eye of the first face image and the position relationship between the left eye and the right eye includes:
if the positional relationship between the coordinates of the left eye and the coordinates of the right eye is the second-type relationship and lengthst1 ≤ length ≤ lengthst2, the head of the client meets the preset position requirement; otherwise it does not; wherein lengthst1 represents a preset first length threshold and lengthst2 represents a preset second length threshold.
preferably, the sending of the prompt information to the client according to the length between the left eye and the right eye in the first face image and the positional relationship between the left eye and the right eye includes:
if length < lengthst1, the prompting information comprises first information prompting the client to move farther from the shooting unit;
if length > lengthst2, the prompting information comprises second information prompting the client to move closer to the shooting unit;
if the positional relationship between the left-eye and right-eye coordinates is the first-type relationship, the prompting information comprises third information prompting the client to tilt the head to the right;
if the positional relationship between the left-eye and right-eye coordinates is the third-type relationship, the prompting information comprises fourth information prompting the client to tilt the head to the left.
Specifically, the first information may be combined with either the third information or the fourth information to form the prompting information; likewise, the second information may be combined with the third or fourth information.
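The prompt-composition rules can be sketched as a small function; the message wording is invented, and the threshold names mirror the lengthst1/lengthst2 symbols above:

```python
def build_prompt(length, relation, lengthst1, lengthst2):
    """Compose the prompting information per the rules above (wording assumed)."""
    parts = []
    if length < lengthst1:
        parts.append("please move farther from the camera")  # first information
    elif length > lengthst2:
        parts.append("please move closer to the camera")     # second information
    if relation == "first":
        parts.append("please tilt your head to the right")   # third information
    elif relation == "third":
        parts.append("please tilt your head to the left")    # fourth information
    return "; ".join(parts)  # empty string means the position requirement is met
```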
While embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
It should be noted that, functional units/modules in the embodiments of the present invention may be integrated into one processing unit/module, or each unit/module may exist alone physically, or two or more units/modules are integrated into one unit/module. The integrated units/modules may be implemented in the form of hardware, or may be implemented in the form of software functional units/modules.
From the above description of the embodiments, it is clear for a person skilled in the art that the embodiments described herein can be implemented in hardware, software, firmware, middleware, code or any appropriate combination thereof. For a hardware implementation, a processor may be implemented in one or more of the following units: an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, other electronic units designed to perform the functions described herein, or a combination thereof. For a software implementation, some or all of the procedures of an embodiment may be performed by a computer program instructing associated hardware.
In practice, the program may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. Computer-readable media can include, but is not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.

Claims (6)

1. A skin quality detection device based on cloud computing is characterized by comprising an image shooting module, a cloud computing processing module and a doctor terminal module;
the image shooting module comprises a shooting submodule and a transmission submodule;
the shooting submodule comprises a shooting unit and a prompting unit; the shooting unit is used for acquiring a first face image of a client at fixed time intervals; the prompting unit is used for judging whether the head of the client meets a preset position requirement according to the length between the left eye and the right eye in the first face image and the positional relationship between the left eye and the right eye, and, if not, sending prompting information to the client according to that length and positional relationship;
the shooting unit is also used for acquiring a second face image of the client when the head of the client meets the preset position requirement;
the transmission sub-module is used for transmitting the second face image of the client to the cloud computing processing module;
the cloud computing processing module is used for inputting a second face image of the client into a pre-trained neural network model for recognition, acquiring a skin quality detection report of the client and transmitting the detection report to the doctor terminal module;
the doctor terminal module is used for displaying the detection report;
the judging that the head of the client meets the preset position requirement according to the length between the left eye and the right eye of the first face image and the position relation between the left eye and the right eye comprises the following steps:
carrying out graying processing on the first face image to obtain a grayscale image;
adjusting the gray level image to obtain an adjusted image;
performing image segmentation processing on the adjustment image to obtain a face region image;
detecting human eyes of the face area image to obtain the coordinates of the left eye and the coordinates of the right eye in the face area image;
calculating the length of a connecting line between the coordinates of the left eye and the coordinates of the right eye;
determining a positional relationship between the coordinates of the left eye and the coordinates of the right eye;
judging whether the head of the client meets the preset position requirement or not according to the length and position relation;
the performing adjustment processing on the grayscale image to obtain an adjusted image includes:
performing the adjustment processing on the grayscale image using the following formula:
[formula given in the original only as image FDA0003791702200000021]
wherein aG represents the adjusted image; α and β represent preset weight coefficients; aG(x, y) and G(x, y) respectively represent the pixel values of the pixel point with coordinates (x, y) in the adjusted image and the grayscale image; g1 and g2 respectively represent a preset first constant coefficient and a preset second constant coefficient; msthr represents a preset proximity-coefficient reference value; and ms(x, y) represents the proximity coefficient of the pixel point with coordinates (x, y) in the grayscale image;
[formula given in the original only as image FDA0003791702200000022]
wherein PT(x, y) represents the set of coordinates of the pixel points in the 8-neighborhood of the pixel point with coordinates (x, y) in the grayscale image; Gi(i, j) represents the standard deviation of the pixel values of the pixel points in the 8-neighborhood of the pixel point with coordinates (i, j) in the grayscale image; Mi(i, j) represents the maximum of the pixel values of the pixel points in that 8-neighborhood; G(i, j) represents the pixel value of the pixel point with coordinates (i, j) in the grayscale image; malv represents the number of pixel levels of the pixel points whose coordinates are in PT(x, y); and Γ represents an exponential coefficient:
[formula given in the original only as image FDA0003791702200000023]
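The head-position judgment spelled out in claim 1 (eye-to-eye length plus eye positional relation) can be sketched as follows. This is a minimal illustration, not the patented method: the length bounds `min_len`/`max_len` and the choice that the preset requirement corresponds to level eyes are assumptions, since the claim only speaks of "a preset position requirement".

```python
import math

def head_meets_requirement(left_eye, right_eye, min_len=80.0, max_len=160.0):
    # Length of the connecting line between the two eye coordinates (claim 5).
    length = math.hypot(left_eye[0] - right_eye[0], left_eye[1] - right_eye[1])
    # Positional relation of the eyes (claim 6); here we assume the preset
    # requirement is the second-type relation, i.e. the eyes are level.
    level = left_eye[1] == right_eye[1]
    # min_len/max_len are hypothetical preset values standing in for the
    # patent's unspecified distance requirement.
    return min_len <= length <= max_len and level

print(head_meets_requirement((100, 200), (220, 200)))  # True
```

A face too far from the camera (short eye line) or tilted (unequal y coordinates) would fail this check and trigger the prompting unit.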
2. the skin quality detection device based on cloud computing according to claim 1, wherein said graying the first face image to obtain a grayscale image comprises:
graying the first face image using the following formula:
G(x,y)=a×R(x,y)+b×G(x,y)+c×B(x,y)
wherein R(x, y), G(x, y) and B(x, y) on the right-hand side respectively represent the pixel values of the pixel point with coordinates (x, y) in the red component image, the green component image and the blue component image of the RGB color space; G(x, y) on the left-hand side represents the pixel value of the pixel point with coordinates (x, y) in the grayscale image G; and a, b and c represent preset weight coefficients.
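A minimal sketch of claim 2's weighted graying: the parameters `wr`, `wg`, `wb` stand for the patent's preset coefficients a, b, c, whose values the patent does not specify; the common ITU-R BT.601 luma weights are used here purely as an assumption.

```python
def to_gray(r, g, b, wr=0.299, wg=0.587, wb=0.114):
    # Weighted sum of the red, green and blue components; wr/wg/wb play the
    # role of the patent's coefficients a, b, c (values assumed, not given).
    return wr * r + wg * g + wb * b

print(round(to_gray(255, 255, 255)))  # 255 (white stays white)
```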
3. The cloud-computing-based skin quality detection apparatus according to claim 1, wherein the performing image segmentation processing on the adjustment image to obtain a face region image includes:
carrying out partition processing on the adjusted image to obtain N×M sub-regions;
respectively carrying out image segmentation processing on each subregion by using a threshold segmentation algorithm to obtain a foreground pixel point of each subregion;
and forming a face region image according to all the foreground pixel points.
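The block-wise segmentation of claim 3 can be sketched as below. The claim names only "a threshold segmentation algorithm"; the per-sub-region mean used here is a simple stand-in (Otsu's method would be a common concrete choice), and the 2×2 partition is illustrative.

```python
def segment_by_blocks(img, n=2, m=2):
    # Split the image into n*m sub-regions and threshold each independently;
    # pixels above the sub-region mean are marked as foreground.
    h, w = len(img), len(img[0])
    mask = [[False] * w for _ in range(h)]
    for bi in range(n):
        for bj in range(m):
            y0, y1 = bi * h // n, (bi + 1) * h // n
            x0, x1 = bj * w // m, (bj + 1) * w // m
            block = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            thr = sum(block) / len(block)  # stand-in threshold (assumption)
            for y in range(y0, y1):
                for x in range(x0, x1):
                    mask[y][x] = img[y][x] > thr
    return mask

img = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
print(sum(v for row in segment_by_blocks(img) for v in row))  # 8
```

Local thresholding of this kind adapts to uneven illumination across the face, which a single global threshold would not.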
4. The skin quality detection device based on cloud computing according to claim 1, wherein said performing human eye detection on the face region image to obtain coordinates of a left eye and coordinates of a right eye in the face region image comprises:
carrying out human eye detection on the face region image by using a human eye detection algorithm to respectively obtain a left eye pixel point set G1 and a right eye pixel point set G2;
calculating a first average coordinate of the pixel points in G1, and taking the first average coordinate as a coordinate of a left eye in the face area image;
and calculating a second average coordinate of the pixel points in G2, and taking the second average coordinate as the coordinate of the right eye in the face area image.
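The averaging step of claim 4 reduces each detected pixel set to a single coordinate; it is a plain centroid. The detector that produces G1/G2 (e.g. a Haar cascade) is outside this sketch, and the example pixel set is hypothetical.

```python
def eye_center(pixel_set):
    # The eye coordinate is the mean x and mean y of the detected eye pixels.
    n = len(pixel_set)
    return (sum(p[0] for p in pixel_set) / n,
            sum(p[1] for p in pixel_set) / n)

# Hypothetical left-eye pixel set G1:
print(eye_center([(10, 20), (12, 22), (14, 24)]))  # (12.0, 22.0)
```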
5. The cloud-computing-based skin quality detection device according to claim 1, wherein the calculating a length of a connection line between the coordinates of the left eye and the coordinates of the right eye includes:
the length of the connection line between the coordinates of the left eye and the coordinates of the right eye is calculated using the following formula:
length = √((x_lf − x_rg)² + (y_lf − y_rg)²)
wherein length represents the length of the connecting line between the coordinates of the left eye and the coordinates of the right eye, (x_lf, y_lf) represents the coordinates of the left eye, and (x_rg, y_rg) represents the coordinates of the right eye.
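The connecting-line length of claim 5 is simply the Euclidean distance between the two eye coordinates:

```python
import math

def eye_line_length(left_eye, right_eye):
    # Euclidean distance between (x_lf, y_lf) and (x_rg, y_rg).
    return math.hypot(left_eye[0] - right_eye[0], left_eye[1] - right_eye[1])

print(eye_line_length((3, 0), (0, 4)))  # 5.0
```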
6. The cloud-computing-based skin quality detection device according to claim 5, wherein the determining the position relationship between the coordinates of the left eye and the coordinates of the right eye comprises:
if y_lf > y_rg, the positional relationship between the coordinates of the left eye and the coordinates of the right eye is a first-type relationship;
if y_lf = y_rg, the positional relationship between the coordinates of the left eye and the coordinates of the right eye is a second-type relationship;
if y_lf < y_rg, the positional relationship between the coordinates of the left eye and the coordinates of the right eye is a third-type relationship.
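The three-way classification of claim 6 compares only the y coordinates of the two eyes; a minimal sketch:

```python
def eye_relation(y_lf, y_rg):
    # First-type: left eye's y exceeds the right eye's (head tilted one way);
    # second-type: eyes level; third-type: the opposite tilt.
    if y_lf > y_rg:
        return "first"
    if y_lf == y_rg:
        return "second"
    return "third"

print(eye_relation(5, 5))  # second
```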
CN202210119537.4A 2022-02-09 2022-02-09 Skin quality detection device based on cloud calculates Active CN114549434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210119537.4A CN114549434B (en) 2022-02-09 2022-02-09 Skin quality detection device based on cloud calculates

Publications (2)

Publication Number Publication Date
CN114549434A CN114549434A (en) 2022-05-27
CN114549434B true CN114549434B (en) 2022-11-08

Family

ID=81672621





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant