CN105976308A - GPU-based mobile terminal high-quality beauty real-time processing method - Google Patents

GPU-based mobile terminal high-quality beauty real-time processing method

Publication number
CN105976308A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN201610284768.5A
Other languages
Chinese (zh)
Other versions
CN105976308B (en)
Inventor
赖守波
韩志宏
余刚
Current Assignee
Chengdu Sobey Digital Technology Co Ltd
Original Assignee
Chengdu Sobey Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Sobey Digital Technology Co Ltd filed Critical Chengdu Sobey Digital Technology Co Ltd
Priority to CN201610284768.5A
Publication of CN105976308A
Application granted
Publication of CN105976308B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map


Abstract

The invention discloses a GPU-based real-time method for high-quality face beautification on mobile terminals, comprising an image acquisition step, an image processing step, and an image fusion step. By exploiting the hardware-acceleration characteristics of the GPU for the method's sub-steps, the low efficiency of CPU-only processing is avoided. Each sub-step disclosed in the method maps well onto GPU-accelerated processing, so the result is presented immediately while real-time efficiency is maintained; the method also applies to special-effect processing of a single image. In addition, the invention provides an image-denoising framework that is easy to understand and performs well: a fast denoising scheme based on the image integral image keeps the computation speed independent of the sampling-window size, removing image noise such as blemishes while preserving detail.

Description

A GPU-based real-time processing method for high-quality face beautification on mobile terminals
Technical field
The present invention relates to a GPU-based real-time processing method for high-quality face beautification on mobile terminals.
Background technology
A face-beautification method is generally composed of several basic steps, including removal of noise from ambient illumination and human skin, skin detection, face detection, blemish removal, skin whitening, and image fusion.
Image denoising is the most fundamental and also the most important step, and it is vital to the subsequent algorithms. Many denoising algorithms exist, including Gaussian smoothing, bilateral filtering, mean filtering, and block-matching 3D (BM3D) denoising. Each differs in performance and effect and has its own limitations, which strongly influences the choice of algorithm for a given application scenario. For example, mean filtering is very efficient, but it tends to filter out regions with distinctive features, such as facial details like hair, eyelashes, and eyebrows; Gaussian smoothing is efficient for small filter radii but very slow for large ones; bilateral filtering preserves image edge details well but can produce color-mixing artifacts; BM3D handles Gaussian white noise well but is very slow. Selecting an algorithm that balances efficiency and effect, and that also suits the application scenario, is therefore a challenge for the overall pipeline.
Skin detection and face detection: what mainly needs to be processed are the skin and face regions, and it must be ensured that the seams between skin and non-skin regions show no obvious artifacts; this is dictated by the application scenario. Face-detection algorithms perform poorly on high-resolution images, mostly because they must detect face regions level by level over an image pyramid, and they do not apply to other skin regions such as arms, shoulders, and the neck. Choosing a detection algorithm that suits both skin and face regions while retaining high performance is therefore particularly important.
Blemish and acne removal mainly processes local areas of the skin region; the common practice is to select the regions to remove by hand, which is unsuitable for automatic image processing.
Skin whitening and image enhancement can be realized in many ways, including exponential mapping, logarithmic mapping, power-function mapping, linear brightening, and automatic levels. The goal is to enhance the dark regions of the image so that their details become visible, while preserving the detail variation of brighter regions and preventing over-whitening.
In general, a beautification method requires several sub-methods chained together, and a slight change in any sub-method can significantly affect the final result. Selecting suitable sub-methods and combining them effectively, so that high efficiency and immediate visual feedback are achieved simultaneously, gives the method better application prospects; this is the problem the present invention solves.
Summary of the invention
It is an object of the invention to overcome the deficiencies in the prior art and to provide a GPU-based real-time processing method for high-quality face beautification on mobile terminals. Through the hardware-acceleration characteristics of the GPU, the low efficiency of CPU-only processing is avoided; at the same time, every sub-step proposed and used by the method maps well onto GPU-accelerated processing, so the result is presented immediately while real-time efficiency is guaranteed.
This object of the invention is achieved through the following technical solution: a GPU-based real-time processing method for high-quality face beautification on mobile terminals, comprising an image acquisition step, an image processing step, and an image fusion step;
The image acquisition step comprises: inputting one RGB color image;
The image processing step comprises three GPU-accelerated sub-steps that execute independently pixel by pixel: an integrated beautification sub-step, an image enhancement sub-step, and a sub-step that generates a mask of the image's skin region;
The integrated beautification sub-step comprises the following sub-steps:
S111: convert the RGB color space of the input image to the YUV color space, retaining the UV channels;
S112: set the sampling-window size and judge whether it exceeds a preset threshold: if so, use the integral image; otherwise use box filtering;
Using the integral image comprises the following sub-steps:
S11211: generate the integral images of the luminance image, including the integral images of the first-order and quadratic terms, with the iterative formulas:

sum_{i,j} = sum_{i,j-1} + sum_{i-1,j} - sum_{i-1,j-1} + f_{i,j}

sumsq_{i,j} = sumsq_{i,j-1} + sumsq_{i-1,j} - sumsq_{i-1,j-1} + f_{i,j}^2

where sum denotes the plain sum, sumsq the sum of squares, and f the luminance value; the two integral images obtained above are saved;
S11212: process every pixel in the image one by one; in the window centered on each pixel, compute the mean and variance of all pixels in that window:

E = [sum_{i+N,j+N} - sum_{i+N,j-N-1} - sum_{i-N-1,j+N} + sum_{i-N-1,j-N-1}] / ((2N+1)*(2N+1))

Esq = [sumsq_{i+N,j+N} - sumsq_{i+N,j-N-1} - sumsq_{i-N-1,j+N} + sumsq_{i-N-1,j-N-1}] / ((2N+1)*(2N+1))

VAR = Esq - E^2

where E denotes the mean and VAR the variance; i, j are the vertical and horizontal coordinates relative to the top-left corner of the image, and N is the window radius;
The computing formulas for box filtering are:

E = [Σ_{m=-N..N} Σ_{n=-N..N} f_{i+m,j+n}] / ((2N+1)*(2N+1))

Esq = [Σ_{m=-N..N} Σ_{n=-N..N} f_{i+m,j+n}^2] / ((2N+1)*(2N+1))

VAR = Esq - E^2

where E denotes the mean and VAR the variance; m, n are the vertical and horizontal offsets from the current pixel position.
S113: image denoising: for each pixel, after obtaining the mean and variance of the window centered on it, apply smoothing filtering according to that mean and variance. The correction of the smoothing filter is:

k = VAR / (VAR + β + ε)

f_{i,j} = E * (1 - k) + f_{i,j} * k

where β is an adjustable parameter: the larger its value, the stronger the smoothing and the more noise is removed; ε is a small number close to 0 whose purpose is to prevent a division-by-zero exception. From the correction formula it follows that the larger the adjustable parameter, the closer the pixel value is to E;
S114: sharpen the image to compensate and enhance its texture details, using the formula:

S_{i,j} = f_{i,j} + α * (f_{i,j-1} + f_{i,j+1} + f_{i-1,j} + f_{i+1,j} - 4*f_{i,j}) / 4

where S is the sharpened image and α is the sharpening degree, i.e. the contribution percentage of the 4-neighborhood Laplacian gradient to the pixel value; the larger α, the stronger the sharpening;
S115: after the sharpening, merge the result with the UV channels obtained from the RGB conversion before denoising into a YUV image;
S116: convert the YUV image obtained in step S115 back to the RGB color space for subsequent further processing;
The image enhancement sub-step uses nonlinear image enhancement to apply overall whitening, lifting the dark-region details of the image while keeping the luminance details. The image is first normalized to the range [0, 1] and then processed with an exponential (power-function) mapping:

f_{i,j} = f_{i,j}^p

where p denotes the whitening degree;
The sub-step that generates the mask of the image's skin region comprises the following sub-steps:
S121: detect the skin region of the image: use thresholding to first coarsely separate skin from non-skin regions. The RGB statistical value of human skin regions is [a, b, c], where a, b, c are a class of values obtained by statistically classifying the skin and non-skin regions of multiple images. During detection, a pixel whose value exceeds the statistical value is classified as skin, and otherwise as non-skin, yielding a preliminary detection of the skin region;
S122: after obtaining the skin-region mask, refine it further: feather the mask with a Gaussian blur of a specified window size, where the two-dimensional Gaussian function is:

f(x, y) = 1 / (2πσ^2) * e^{-(x^2 + y^2) / (2σ^2)}

where x, y are the vertical and horizontal distances from the current pixel and σ is the standard deviation.
The image fusion step comprises: after the image processing step, fuse the overall-whitened image and the globally denoised image pixel by pixel according to the obtained skin-region mask, using the formula:

Final_{i,j} = B_{i,j} * (1 - α_{i,j}) + F_{i,j} * α_{i,j}

where B is the globally denoised image, F is the overall-whitened image, α is the obtained skin-region mask, and Final is the fusion result;
After the fusion completes, the final result image is obtained and output.
The conversion formula in step S111 from the RGB color space of the input image to the YUV color space is:

[Y]   [ 0.299  0.587  0.114] [R]
[U] = [-0.169 -0.331  0.500] [G]
[V]   [ 0.500 -0.419 -0.081] [B]
The conversion formula in step S116 from the YUV image obtained in step S115 back to the RGB color space is:

[R]   [1.000  0.000  1.402] [Y]
[G] = [1.000 -0.344 -0.714] [U]
[B]   [1.000  1.772  0.000] [V]
The number of images whose skin and non-skin regions are statistically classified to obtain the class of values is 1000.
The beneficial effects of the invention are:
(1) The invention proposes an easy-to-understand, high-performance image-denoising framework that uses a fast denoising scheme based on the image integral image, ensuring that the computation speed is independent of the sampling-window size while removing image noise such as blemishes and preserving detail.
(2) The invention proposes a more efficient skin-region detection framework that uses a multi-step approach of coarse detection followed by refinement, so that the seams between skin and non-skin regions blend without visible transitions.
(3) The invention uses a nonlinear image-enhancement method to enhance the image globally.
(4) Through the hardware-acceleration characteristics of the GPU, the invention processes the method's sub-steps and avoids the low efficiency of CPU-only processing; at the same time, every sub-step proposed and used by the method maps well onto GPU-accelerated processing, so the result is presented immediately while real-time efficiency is guaranteed, and the method applies to the special-effect processing of a single image.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the invention.
Detailed description of the invention
The technical solution of the invention is described in further detail below in conjunction with the drawings. As shown in Fig. 1, a GPU-based real-time processing method for high-quality face beautification on mobile terminals comprises an image acquisition step, an image processing step, and an image fusion step;
The image acquisition step comprises: inputting one RGB color image;
The image processing step comprises three GPU-accelerated sub-steps that execute independently pixel by pixel: an integrated beautification sub-step, an image enhancement sub-step, and a sub-step that generates a mask of the image's skin region;
The integrated beautification sub-step mainly covers smoothing of human skin areas, such as erasing blemishes, moles, and other appearance-affecting defects in the face and other skin regions, as well as environmental noise introduced by global illumination (e.g. underexposure) and other noise introduced by later processing, such as salt-and-pepper noise, signal loss during picture or video transmission, codec damage, and blocking artifacts. It comprises the following sub-steps:
S111: convert the RGB color space of the input image to the YUV color space; this is done mainly so that the operations run on the luminance image, improving efficiency, while the UV channels are retained. The conversion formula from RGB to YUV is:

[Y]   [ 0.299  0.587  0.114] [R]
[U] = [-0.169 -0.331  0.500] [G]
[V]   [ 0.500 -0.419 -0.081] [B]
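For illustration, the forward and inverse conversions with the coefficient matrices given above can be sketched in NumPy (a host-side sketch, not the on-device shader the patent uses):

```python
import numpy as np

# Coefficient matrices as given in the patent (BT.601-style, U/V centered at 0).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.169, -0.331,  0.500],
                    [ 0.500, -0.419, -0.081]])
YUV2RGB = np.array([[1.000,  0.000,  1.402],
                    [1.000, -0.344, -0.714],
                    [1.000,  1.772,  0.000]])

def rgb_to_yuv(img):
    """img: H x W x 3 float array in [0, 1] -> YUV planes (last axis)."""
    return img @ RGB2YUV.T

def yuv_to_rgb(yuv):
    """Inverse conversion; the two matrices are approximate inverses."""
    return yuv @ YUV2RGB.T
```

Because the published coefficients are rounded, a round trip reproduces the input only to within about 0.01 per channel.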
Because generating the integral image involves computational dependencies between neighboring pixels above, below, before, and after, it incurs the overhead of one upward pass and one downward pass. In our scheme a threshold is therefore set: if the sampling-window size exceeds the threshold, the integral image is used; otherwise box filtering is used.
S112: set the sampling-window size and judge whether it exceeds a preset threshold: if so, use the integral image; otherwise use box filtering;
Using the integral image comprises the following sub-steps:
S11211: generate the integral images of the luminance image, including the integral images of the first-order and quadratic terms, with the iterative formulas:

sum_{i,j} = sum_{i,j-1} + sum_{i-1,j} - sum_{i-1,j-1} + f_{i,j}

sumsq_{i,j} = sumsq_{i,j-1} + sumsq_{i-1,j} - sumsq_{i-1,j-1} + f_{i,j}^2

where sum denotes the plain sum, sumsq the sum of squares, and f the luminance value; the two integral images obtained above are saved. They enable the fast window-based filtering used for image denoising in the next step, so that computational efficiency is unaffected even when the sampling window is large.
S11212: process every pixel in the image one by one; in the window centered on each pixel, compute the mean and variance of all pixels in that window:

E = [sum_{i+N,j+N} - sum_{i+N,j-N-1} - sum_{i-N-1,j+N} + sum_{i-N-1,j-N-1}] / ((2N+1)*(2N+1))

Esq = [sumsq_{i+N,j+N} - sumsq_{i+N,j-N-1} - sumsq_{i-N-1,j+N} + sumsq_{i-N-1,j-N-1}] / ((2N+1)*(2N+1))

VAR = Esq - E^2

where E denotes the mean and VAR the variance; i, j are the vertical and horizontal coordinates relative to the top-left corner of the image, and N is the window radius.
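Steps S11211 and S11212 can be sketched as follows (a NumPy illustration of the technique, not the patent's GPU code): once the two integral images exist, the windowed mean and variance cost four lookups per integral image, i.e. O(1) per pixel regardless of the window radius N, which is exactly why the computation speed is independent of the sampling-window size.

```python
import numpy as np

def integral_images(f):
    """Build the first-order and quadratic integral images (sum, sumsq),
    zero-padded by one row/column so the window formula needs no edge cases."""
    h, w = f.shape
    s = np.zeros((h + 1, w + 1))
    sq = np.zeros((h + 1, w + 1))
    s[1:, 1:] = np.cumsum(np.cumsum(f, axis=0), axis=1)
    sq[1:, 1:] = np.cumsum(np.cumsum(f * f, axis=0), axis=1)
    return s, sq

def window_mean_var(s, sq, i, j, N):
    """Mean and variance of the (2N+1)^2 window centered at interior pixel
    (i, j), using four lookups per integral image -- O(1) regardless of N."""
    area = (2 * N + 1) ** 2
    def box(t):
        # +1 offsets account for the zero padding in integral_images.
        return (t[i + N + 1, j + N + 1] - t[i + N + 1, j - N]
                - t[i - N, j + N + 1] + t[i - N, j - N])
    E = box(s) / area
    Esq = box(sq) / area
    return E, Esq - E * E
```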
The computing formulas for box filtering are:

E = [Σ_{m=-N..N} Σ_{n=-N..N} f_{i+m,j+n}] / ((2N+1)*(2N+1))

Esq = [Σ_{m=-N..N} Σ_{n=-N..N} f_{i+m,j+n}^2] / ((2N+1)*(2N+1))

VAR = Esq - E^2

where E denotes the mean and VAR the variance; m, n are the vertical and horizontal offsets from the current pixel position.
Denoising is performed on the luminance image. Because the human eye is more sensitive to the luminance signal than to the chrominance signals, it is correspondingly more sensitive to luminance noise than to chrominance noise, and removing the luminance noise produces a clearly perceptible improvement. This improves computational efficiency without affecting the overall quality.
S113: image denoising: for each pixel, after obtaining the mean and variance of the window centered on it, apply smoothing filtering according to that mean and variance. The principle is that the smoother the image, the closer the obtained variance is to 0, and hence the closer the pixel value is pulled to the mean E. The correction of the smoothing filter is:

k = VAR / (VAR + β + ε)

f_{i,j} = E * (1 - k) + f_{i,j} * k

where β is an adjustable parameter: the larger its value, the stronger the smoothing and the more noise is removed; ε is a small number close to 0 whose purpose is to prevent a division-by-zero exception. From the correction formula it follows that the larger the adjustable parameter, the closer the pixel value is to E;
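A minimal sketch of step S113 in NumPy (box filtering is used here for the local statistics; the integral-image path of S11211/S11212 yields the same E and VAR, and the parameter values are illustrative, not the patent's):

```python
import numpy as np

def denoise_luma(f, N=4, beta=0.01, eps=1e-6):
    """Variance-guided smoothing of a luminance image in [0, 1]:
    flat regions (low variance) get k near 0 and are pulled toward the
    local mean E; textured regions (high variance) keep their pixel value."""
    h, w = f.shape
    pad = np.pad(f, N, mode='edge')
    win = 2 * N + 1
    E = np.zeros_like(f)
    Esq = np.zeros_like(f)
    # Accumulate the (2N+1)^2 shifted copies -> local mean and mean of squares.
    for di in range(win):
        for dj in range(win):
            block = pad[di:di + h, dj:dj + w]
            E += block
            Esq += block * block
    E /= win * win
    Esq /= win * win
    var = Esq - E * E
    k = var / (var + beta + eps)       # the patent's correction coefficient
    return E * (1 - k) + f * k
```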
S114: sharpen the image to compensate and enhance its texture details, using the formula:

S_{i,j} = f_{i,j} + α * (f_{i,j-1} + f_{i,j+1} + f_{i-1,j} + f_{i+1,j} - 4*f_{i,j}) / 4

where S is the sharpened image and α is the sharpening degree, i.e. the contribution percentage of the 4-neighborhood Laplacian gradient to the pixel value; the larger α, the stronger the sharpening;
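A sketch of step S114 (hypothetical NumPy, not the original shader). Note that in this sketch the Laplacian term is subtracted so that increasing α actually increases sharpness, consistent with the stated role of α; with the term added instead, the same kernel would smooth:

```python
import numpy as np

def sharpen(f, alpha=0.5):
    """4-neighborhood Laplacian sharpening; edges are replicated so the
    sum of neighbors is defined everywhere."""
    pad = np.pad(f, 1, mode='edge')
    lap = (pad[1:-1, :-2] + pad[1:-1, 2:]     # left + right neighbors
           + pad[:-2, 1:-1] + pad[2:, 1:-1]   # up + down neighbors
           - 4 * f)
    # Subtracting the Laplacian boosts local extrema (unsharp-mask behavior).
    return f - alpha * lap / 4
```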
S115: after the sharpening, merge the result with the UV channels obtained from the RGB conversion before denoising into a YUV image;
S116: convert the YUV image obtained in step S115 back to the RGB color space for subsequent further processing; the conversion formula is:

[R]   [1.000  0.000  1.402] [Y]
[G] = [1.000 -0.344 -0.714] [U]
[B]   [1.000  1.772  0.000] [V]
The image enhancement sub-step uses nonlinear image enhancement to apply overall whitening, lifting the dark-region details of the image while keeping the luminance details. The image is first normalized to the range [0, 1] and then processed with an exponential (power-function) mapping:

f_{i,j} = f_{i,j}^p

where p denotes the whitening degree;
The sub-step that generates the mask of the image's skin region comprises the following sub-steps:
S121: detect the skin region of the image: compared with non-skin regions, skin regions usually have colors that are easier to distinguish, especially from darker, black regions. For a face-beautification scenario, what mainly needs to be distinguished is human skin versus hair, eyebrows, eyelashes, and eyes, which are processed per region.
Use thresholding to first coarsely separate skin from non-skin regions. The RGB statistical value of human skin regions is [a, b, c], where a, b, c are a class of values obtained by statistically classifying the skin and non-skin regions of 1000 images. During detection, a pixel whose value exceeds the statistical value is classified as skin, and otherwise as non-skin, yielding a preliminary detection of the skin region;
S122: after obtaining the skin-region mask, refine it further; the raw mask cannot directly participate in the image fusion, or the seams between the skin and non-skin regions of the image would show obvious artifacts. Specifically, the mask is feathered with a Gaussian blur of a specified window size, where the two-dimensional Gaussian function is:

f(x, y) = 1 / (2πσ^2) * e^{-(x^2 + y^2) / (2σ^2)}

where x, y are the vertical and horizontal distances from the current pixel and σ is the standard deviation.
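Steps S121-S122 can be sketched as follows. The threshold triple here is a common illustrative skin-tone heuristic, not the patent's statistical values [a, b, c], and the Gaussian kernel is built directly from the two-dimensional formula above:

```python
import numpy as np

def skin_mask(img, abc=(95/255, 40/255, 20/255), sigma=2.0, radius=5):
    """Coarse per-channel threshold against [a, b, c] (placeholder values),
    then feathering with a normalized 2-D Gaussian kernel."""
    a, b, c = abc
    mask = ((img[..., 0] > a) & (img[..., 1] > b) & (img[..., 2] > c)).astype(float)
    # Build the (2*radius+1)^2 Gaussian kernel from f(x, y).
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    kern = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    kern /= kern.sum()   # renormalize so the feathered mask stays in [0, 1]
    # Direct 2-D convolution with edge replication (slow but dependency-free).
    h, w = mask.shape
    pad = np.pad(mask, radius, mode='edge')
    out = np.zeros_like(mask)
    for di in range(2 * radius + 1):
        for dj in range(2 * radius + 1):
            out += kern[di, dj] * pad[di:di + h, dj:dj + w]
    return out
```

The blur turns the hard 0/1 boundary into a soft alpha ramp, which is what prevents visible seams in the later fusion.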
The image fusion step comprises: after the image processing step, fuse the overall-whitened image and the globally denoised image pixel by pixel according to the obtained skin-region mask, using the formula:

Final_{i,j} = B_{i,j} * (1 - α_{i,j}) + F_{i,j} * α_{i,j}

where B is the globally denoised image, F is the overall-whitened image, α is the obtained skin-region mask, and Final is the fusion result;
After the fusion completes, the final result image is obtained and output.
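The fusion formula above is a standard per-pixel alpha blend; a sketch in NumPy:

```python
import numpy as np

def fuse(denoised, whitened, mask):
    """Final = B*(1-alpha) + F*alpha: the feathered skin mask alpha selects
    the whitened image F on skin, the denoised image B elsewhere, with a
    smooth transition in between."""
    alpha = mask[..., None] if denoised.ndim == 3 else mask
    return denoised * (1 - alpha) + whitened * alpha
```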
Although the overall scheme involves many processing steps, each step executes independently pixel by pixel. Therefore, where the CPU cannot process in real time, GPU hardware acceleration makes real-time processing possible; on mobile terminals, OpenGL ES is used for the acceleration.

Claims (3)

1. A GPU-based real-time processing method for high-quality face beautification on mobile terminals, characterized in that it comprises an image acquisition step, an image processing step, and an image fusion step;
The image acquisition step comprises: inputting one RGB color image;
The image processing step comprises three GPU-accelerated sub-steps that execute independently pixel by pixel: an integrated beautification sub-step, an image enhancement sub-step, and a sub-step that generates a mask of the image's skin region;
The integrated beautification sub-step comprises the following sub-steps:
S111: convert the RGB color space of the input image to the YUV color space, retaining the UV channels;
S112: set the sampling-window size and judge whether it exceeds a preset threshold: if so, use the integral image; otherwise use box filtering;
Using the integral image comprises the following sub-steps:
S11211: generate the integral images of the luminance image, including the integral images of the first-order and quadratic terms, with the iterative formulas:
sum_{i,j} = sum_{i,j-1} + sum_{i-1,j} - sum_{i-1,j-1} + f_{i,j}
sumsq_{i,j} = sumsq_{i,j-1} + sumsq_{i-1,j} - sumsq_{i-1,j-1} + f_{i,j}^2
where sum denotes the plain sum, sumsq the sum of squares, and f the luminance value; the two integral images obtained above are saved;
S11212: process every pixel in the image one by one; in the window centered on each pixel, compute the mean and variance of all pixels in that window:
E = [sum_{i+N,j+N} - sum_{i+N,j-N-1} - sum_{i-N-1,j+N} + sum_{i-N-1,j-N-1}] / ((2N+1)*(2N+1))
Esq = [sumsq_{i+N,j+N} - sumsq_{i+N,j-N-1} - sumsq_{i-N-1,j+N} + sumsq_{i-N-1,j-N-1}] / ((2N+1)*(2N+1))
VAR = Esq - E^2
where E denotes the mean and VAR the variance; i, j are the vertical and horizontal coordinates relative to the top-left corner of the image, and N is the window radius;
The computing formulas for box filtering are:
E = [Σ_{m=-N..N} Σ_{n=-N..N} f_{i+m,j+n}] / ((2N+1)*(2N+1))
Esq = [Σ_{m=-N..N} Σ_{n=-N..N} f_{i+m,j+n}^2] / ((2N+1)*(2N+1))
VAR = Esq - E^2
where E denotes the mean and VAR the variance; m, n are the vertical and horizontal offsets from the current pixel position;
S113: image denoising: for each pixel, after obtaining the mean and variance of the window centered on it, apply smoothing filtering according to that mean and variance; the correction of the smoothing filter is:
k = VAR / (VAR + β + ε)
f_{i,j} = E * (1 - k) + f_{i,j} * k
where β is an adjustable parameter: the larger its value, the stronger the smoothing and the more noise is removed; ε is a small number close to 0 whose purpose is to prevent a division-by-zero exception; from the correction formula it follows that the larger the adjustable parameter, the closer the pixel value is to E;
S114: sharpen the image to compensate and enhance its texture details, using the formula:
S_{i,j} = f_{i,j} + α * (f_{i,j-1} + f_{i,j+1} + f_{i-1,j} + f_{i+1,j} - 4*f_{i,j}) / 4
where S is the sharpened image and α is the sharpening degree, i.e. the contribution percentage of the 4-neighborhood Laplacian gradient to the pixel value; the larger α, the stronger the sharpening;
S115: after the sharpening, merge the result with the UV channels obtained from the RGB conversion before denoising into a YUV image;
S116: convert the YUV image obtained in step S115 back to the RGB color space for subsequent further processing;
The image enhancement sub-step uses nonlinear image enhancement to apply overall whitening, lifting the dark-region details of the image while keeping the luminance details; the image is first normalized to the range [0, 1] and then processed with an exponential (power-function) mapping:
f_{i,j} = f_{i,j}^p
where p denotes the whitening degree;
The sub-step that generates the mask of the image's skin region comprises the following sub-steps:
S121: detect the skin region of the image: use thresholding to first coarsely separate skin from non-skin regions; the RGB statistical value of human skin regions is [a, b, c], where a, b, c are a class of values obtained by statistically classifying the skin and non-skin regions of multiple images; during detection, a pixel whose value exceeds the statistical value is classified as skin, and otherwise as non-skin, yielding a preliminary detection of the skin region;
S122: after obtaining the skin-region mask, refine it further: feather the mask with a Gaussian blur of a specified window size, where the two-dimensional Gaussian function is:
f(x, y) = 1 / (2πσ^2) * e^{-(x^2 + y^2) / (2σ^2)}
where x, y are the vertical and horizontal distances from the current pixel and σ is the standard deviation;
The image fusion step comprises: after the image processing step, fuse the overall-whitened image and the globally denoised image pixel by pixel according to the obtained skin-region mask, using the formula:
Final_{i,j} = B_{i,j} * (1 - α_{i,j}) + F_{i,j} * α_{i,j}
where B is the globally denoised image, F is the overall-whitened image, α is the obtained skin-region mask, and Final is the fusion result;
After the fusion completes, the final result image is obtained and output.
2. The GPU-based real-time processing method for high-quality face beautification on mobile terminals according to claim 1, characterized in that the conversion formula in step S111 from the RGB color space of the input image to the YUV color space is:

[Y]   [ 0.299  0.587  0.114] [R]
[U] = [-0.169 -0.331  0.500] [G]
[V]   [ 0.500 -0.419 -0.081] [B]

and the conversion formula in step S116 from the YUV image obtained in step S115 back to the RGB color space is:

[R]   [1.000  0.000  1.402] [Y]
[G] = [1.000 -0.344 -0.714] [U]
[B]   [1.000  1.772  0.000] [V].
3. The GPU-based real-time processing method for high-quality face beautification on mobile terminals according to claim 1, characterized in that the number of images whose skin and non-skin regions are statistically classified to obtain the class of values is 1000.
CN201610284768.5A 2016-05-03 2016-05-03 A GPU-based real-time processing method for high-quality face beautification on mobile terminals Active CN105976308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610284768.5A CN105976308B (en) 2016-05-03 2016-05-03 A GPU-based real-time processing method for high-quality face beautification on mobile terminals

Publications (2)

Publication Number Publication Date
CN105976308A true CN105976308A (en) 2016-09-28
CN105976308B CN105976308B (en) 2017-10-27

Family

ID=56993850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610284768.5A Active CN105976308B (en) A GPU-based real-time processing method for high-quality beautification on a mobile terminal

Country Status (1)

Country Link
CN (1) CN105976308B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100166331A1 (en) * 2008-12-31 2010-07-01 Altek Corporation Method for beautifying human face in digital image
CN103035019A (en) * 2012-12-11 2013-04-10 深圳深讯和科技有限公司 Image processing method and device
CN105469357A (en) * 2015-11-27 2016-04-06 努比亚技术有限公司 Image processing method and device, and terminal
CN105956993A (en) * 2016-05-03 2016-09-21 成都索贝数码科技股份有限公司 GPU-based instant presentation method for mobile-terminal video beautification

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108428215A * 2017-02-15 2018-08-21 阿里巴巴集团控股有限公司 An image processing method, apparatus and device
CN107274452A * 2017-05-31 2017-10-20 成都品果科技有限公司 Automatic detection method for acne
CN107274452B * 2017-05-31 2020-07-24 成都品果科技有限公司 Automatic detection method for acne
CN108563414A * 2018-03-20 2018-09-21 广东乐芯智能科技有限公司 A watch display brightness adjustment method
CN109934783A (en) * 2019-03-04 2019-06-25 天翼爱音乐文化科技有限公司 Image processing method, device, computer equipment and storage medium
CN109934783B (en) * 2019-03-04 2021-05-07 天翼爱音乐文化科技有限公司 Image processing method, image processing device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108229278B (en) Face image processing method and device and electronic equipment
CN105913400A (en) Device for obtaining high-quality and real-time beautiful image
CN104182947B (en) Low-illumination image enhancement method and system
CN104252698B (en) Semi-inverse method-based rapid single image dehazing algorithm
CN105976309B An efficient mobile beautification terminal that is easy to implement in parallel
US9111132B2 (en) Image processing device, image processing method, and control program
CN109191390A An image enhancement algorithm based on fusion of multiple algorithms in different color spaces
CN105787888A (en) Human face image beautifying method
Lai et al. Improved local histogram equalization with gradient-based weighting process for edge preservation
CN105976308B A GPU-based real-time processing method for high-quality beautification on a mobile terminal
Wang et al. Variational single nighttime image haze removal with a gray haze-line prior
CN102027505A (en) Automatic face and skin beautification using face detection
CN105763747A (en) Mobile terminal for achieving high-quality real-time facial beautification
CN111223110B (en) Microscopic image enhancement method and device and computer equipment
CN105956993A GPU-based instant presentation method for mobile-terminal video beautification
CN110298792B (en) Low-illumination image enhancement and denoising method, system and computer equipment
US20150302564A1 (en) Method for making up a skin tone of a human body in an image, device for making up a skin tone of a human body in an image, method for adjusting a skin tone luminance of a human body in an image, and device for adjusting a skin tone luminance of a human body in an image
CN112116536A (en) Low-illumination image enhancement method and system
WO2022088976A1 (en) Image processing method and device
CN106530309A (en) Video matting method and system based on mobile platform
CN105894480A (en) High-efficiency facial beautification device easy for parallel realization
CN109919859A Outdoor scene image defogging enhancement method, computing device and storage medium therefor
Liu et al. Single image haze removal via depth-based contrast stretching transform
CN103839245A (en) Retinex night color image enhancement method based on statistical regularities
CN108550124B (en) Illumination compensation and image enhancement method based on bionic spiral

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant