CN108764139A - A kind of method for detecting human face, mobile terminal and computer readable storage medium - Google Patents


Info

Publication number
CN108764139A
CN108764139A (application CN201810530284.3A, granted as CN108764139B)
Authority
CN
China
Prior art keywords
face
pixel
preview image
camera
mobile terminal
Prior art date
Legal status
Granted
Application number
CN201810530284.3A
Other languages
Chinese (zh)
Other versions
CN108764139B (en)
Inventor
张弓
Current Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Oppo Chongqing Intelligent Technology Co Ltd
Priority to CN201810530284.3A
Publication of CN108764139A
Application granted
Publication of CN108764139B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The present application is applicable to the field of face detection technology, and provides a face detection method, a mobile terminal, and a computer-readable storage medium. The method includes: after the camera of the mobile terminal is started, determining whether the current scene is backlit; if it is backlit, detecting whether a face is present in the preview image of the camera by means of a first face detection model; and if a face is detected in the preview image of the camera by the first face detection model, marking the face region in the preview image. The present application can improve the accuracy of face detection under backlit conditions.

Description

A face detection method, a mobile terminal, and a computer-readable storage medium
Technical field
The present application belongs to the field of face detection technology, and in particular relates to a face detection method, a mobile terminal, and a computer-readable storage medium.
Background
With the development of intelligent mobile terminals, people use the camera function on mobile terminals such as mobile phones more and more frequently. The camera function of most existing mobile terminals supports face detection; after a face is detected, operations such as focusing and beautification can be performed on the detected face.
At present, face detection is usually based on a traditional skin-colour model or a facial feature point detection model. However, the shooting environments in which cameras are used differ greatly, and in some shooting environments traditional face detection methods perform poorly, or even fail to detect a face at all.
Summary of the invention
In view of this, the embodiments of the present application provide a face detection method, a mobile terminal, and a computer-readable storage medium, so as to solve the problem that traditional face detection methods currently perform poorly in some shooting environments.
A first aspect of the embodiments of the present application provides a face detection method, including:
after the camera of the mobile terminal is started, determining whether the current scene is backlit;
if the scene is backlit, detecting whether a face is present in the preview image of the camera by means of a first face detection model;
if a face is detected in the preview image of the camera by the first face detection model, marking the face region in the preview image.
A second aspect of the embodiments of the present application provides a mobile terminal, including:
a determining module, configured to determine, after the camera of the mobile terminal is started, whether the current scene is backlit;
a first detection module, configured to detect, if the scene is backlit, whether a face is present in the preview image of the camera by means of a first face detection model;
a marking module, configured to mark the face region in the preview image if a face is detected in the preview image of the camera by the first face detection model.
A third aspect of the embodiments of the present application provides a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method provided by the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by one or more processors, implements the steps of the method provided by the first aspect of the embodiments of the present application.
A fifth aspect of the embodiments of the present application provides a computer program product comprising a computer program which, when executed by one or more processors, implements the steps of the method provided by the first aspect of the embodiments of the present application.
In the embodiments of the present application, after the camera of the mobile terminal is started, it is determined whether the current scene is backlit; if it is backlit, a first face detection model is used to detect whether a face is present in the preview image of the camera, and if a face is detected in the preview image by the first face detection model, the face region is marked in the preview image. Because the present application first determines, after the camera is started, whether the current scene is backlit and, if so, uses a first face detection model configured for backlit conditions to detect faces in the preview image, the problem of poor detection performance of traditional face detection methods under backlit conditions is solved.
Description of the drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a face detection method provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of another face detection method provided by an embodiment of the present application;
Fig. 3 is a grayscale schematic diagram of a photo taken under a backlit condition, provided by an embodiment of the present application;
Fig. 4 is a schematic block diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 5 is a schematic block diagram of another mobile terminal provided by an embodiment of the present application.
Detailed description of the embodiments
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and techniques are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or combinations thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to, and includes, any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In order to illustrate the technical solutions described herein, specific embodiments are used for illustration below.
Fig. 1 is a schematic flowchart of a face detection method provided by an embodiment of the present application. The method is applied to a mobile terminal and, as shown in the figure, may include the following steps:
Step S101: after the camera of the mobile terminal is started, determine whether the current scene is backlit.
In this embodiment of the present application, after the camera of the mobile terminal is started, the display interface of the mobile terminal shows a preview image, i.e. the picture currently captured by the camera, and whether the scene is backlit can be determined from this picture. A backlit condition is a situation in which the photographed subject is located directly between the light source and the camera, so that the background luminance is significantly higher than that of the subject. In particular, when the background occupies a larger area of the picture than the subject, the exposure is set according to the lighting of the background, leaving the subject underexposed. When the subject is a face, the face appears partly blurred and dark in the preview image.
In practical applications, whether the scene is backlit can be determined by analysing the current preview image and deciding from the detection result, or by means of a sensor arranged on the mobile terminal.
As another embodiment of the present application, after determining whether the scene is backlit, the method may further include:
if the scene is not backlit, detecting whether a face is present in the preview image of the camera by means of a second face detection model;
if a face is detected in the preview image of the camera by the second face detection model, marking the face region in the preview image.
In this embodiment of the present application, the second face detection model is a traditional face detection model, such as an HSV skin-colour model or a model based on facial feature point detection. The second face detection model detects faces based on skin colour or facial feature points. Under backlit conditions the face is relatively blurred and dark, so traditional face detection methods perform poorly, and may even fail to detect a face that is present in the preview image. Therefore, when it is determined that the scene is not backlit, the second face detection model can be used to detect whether a face is present in the preview image of the camera.
Step S102: if the scene is backlit, detect whether a face is present in the preview image of the camera by means of a first face detection model.
In this embodiment of the present application, the first face detection model is a MobileNet-SSD convolutional neural network model. Because the usage environment of a mobile terminal is constrained (its memory is smaller and its processor weaker than those of devices such as computers), large convolutional neural network models cannot be deployed and run on mobile terminals such as mobile phones. MobileNet, also referred to as MobileNets, is therefore chosen. MobileNet is a lightweight deep neural network proposed for embedded devices such as mobile phones; it effectively reduces the number of network parameters by factorising the convolution kernels in the network. The factorisation decomposes a standard convolution into a depthwise convolution and a pointwise convolution: the depthwise convolution applies one kernel to each channel, and the pointwise convolution combines the channel outputs. This decomposition effectively reduces the amount of computation and the model size, allowing the network to run on embedded devices such as mobile phones. The SSD network model is used for object detection; combining MobileNet with SSD yields an object detector usable on embedded devices such as mobile phones. Taking VGG-SSD and MobileNet-SSD as examples, and detecting the same 7 test pictures on the same device, the detection time of MobileNet-SSD is found to be about 1/6 to 1/2 of that of VGG-SSD.
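The saving from the depthwise separable factorisation described above can be sketched numerically. The layer dimensions below are illustrative assumptions, not figures from the patent; the cost ratio 1/c_out + 1/k^2 is the standard MobileNet result.

```python
# Back-of-the-envelope comparison of standard vs. depthwise separable
# convolution cost (multiply-accumulate operations). Layer sizes are
# illustrative, not taken from the patent.

def standard_conv_cost(h, w, c_in, c_out, k):
    """MACs for a standard k x k convolution over an h x w feature map."""
    return h * w * c_in * c_out * k * k

def separable_conv_cost(h, w, c_in, c_out, k):
    """Depthwise k x k conv (one kernel per channel) + 1x1 pointwise conv."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# Example layer: 112x112 feature map, 32 -> 64 channels, 3x3 kernel.
std = standard_conv_cost(112, 112, 32, 64, 3)
sep = separable_conv_cost(112, 112, 32, 64, 3)
print(f"standard: {std}, separable: {sep}, ratio: {sep / std:.3f}")
# The ratio equals 1/c_out + 1/k^2, roughly an 8x saving for 3x3 kernels.
```

This ratio is why the factorised network fits within a mobile terminal's memory and compute budget.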
Step S103: if a face is detected in the preview image of the camera by the first face detection model, mark the face region in the preview image.
In this embodiment of the present application, after a face is detected in the preview image of the camera by the first face detection model, a preview image with a detection box can be generated, i.e. a detection box is generated around the face region in the current preview image.
It should be noted that in the embodiment shown in Fig. 1, after the camera of the mobile terminal is started, it is first determined whether the current scene is backlit. Under a backlit condition, the MobileNet-SSD detection model is used to detect whether a face is present in the preview image of the camera; under a non-backlit condition, a traditional face detection model, such as an HSV skin-colour model, is used instead.
In practical applications, a traditional face detection method, such as an HSV skin-colour model, may also be used first after the camera is started to detect whether a face is present in the preview image. If no face is detected in the preview image by this second face detection model, it is then determined whether the scene is backlit; if it is, the MobileNet-SSD detection model is used to detect whether a face is present in the preview image. The purpose of this arrangement is as follows. Although the MobileNet-SSD detection model is a lightweight neural network compared with other convolutional neural network models, it still occupies more memory than traditional face detection methods, and backlight detection also occupies memory; moreover, in practice, a traditional face detection method does not necessarily fail to detect a face under a backlit condition. Therefore the traditional face detection method, which occupies relatively little memory, is used preferentially, and the MobileNet-SSD detection model is enabled only when the traditional method detects no face in the preview image and the scene is backlit, because when the traditional method fails under a non-backlit condition it is most likely that there is simply no face in the preview image at all. In this way, the detection accuracy for faces present in the preview image is improved across various shooting environments, while the memory usage of the system is kept as low as possible.
In this embodiment of the present application, besides the face detection models enumerated above, the first face detection model and the second face detection model may also be other face detection models. It should be noted, however, that the detection accuracy of the first face detection model is higher than that of the second face detection model. This is because, under a backlit condition, the face region is relatively dark and detecting a face in such an image is relatively difficult, so a first face detection model with higher detection accuracy is selected; under a non-backlit condition the face is relatively clear, so a second face detection model with lower detection accuracy can be used. A model with high detection accuracy is relatively complex and has a higher memory footprint, while a model with low detection accuracy is relatively simple and has a lower memory footprint. In order both to detect the faces present in the image and to keep memory usage during shooting relatively low, a second face detection model with lower detection accuracy and lower memory usage can be selected under non-backlit conditions, and a first face detection model with higher detection accuracy, and correspondingly higher memory usage, under backlit conditions.
Because the present application first determines, after the camera of the mobile terminal is started, whether the current scene is backlit and, if it is, detects whether a face is present in the preview image of the camera by means of the first face detection model configured for backlit conditions, the problem of poor detection performance of traditional face detection methods under backlit conditions is solved.
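The fallback strategy described above can be sketched as follows. This is a minimal illustration under stated assumptions: the detector callables, their return convention (a bounding-box tuple or None), and the function names are hypothetical, not the patent's implementation.

```python
# Sketch of the detection strategy: try the cheap skin-colour detector
# first, and fall back to the heavier MobileNet-SSD model only when no
# face is found AND the scene is backlit.

def detect_face(preview_image, is_backlit, skin_detector, dnn_detector):
    """Return a face bounding box or None, preferring the light detector."""
    face = skin_detector(preview_image)
    if face is not None:
        return face
    # Only a backlit scene justifies running the heavier model: under
    # normal light, a miss by the skin-colour model most likely means
    # there is simply no face in the frame.
    if is_backlit:
        return dnn_detector(preview_image)
    return None

# Usage with stub detectors:
no_face = lambda img: None
finds_face = lambda img: (10, 10, 80, 80)  # x, y, w, h
print(detect_face("frame", True, no_face, finds_face))   # (10, 10, 80, 80)
print(detect_face("frame", False, no_face, finds_face))  # None
```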
Fig. 2 is a schematic flowchart of another face detection method provided by an embodiment of the present application. On the basis of the embodiment shown in Fig. 1, this method describes how to determine whether the scene is backlit, and may specifically include the following steps.
First, the preview image under a backlit condition is analysed. A backlit condition means that the photographed subject is located directly between the light source and the camera, so that the subject is underexposed. Fig. 3 shows the grayscale image of a photo taken under a backlit condition. As can be seen from Fig. 3, the preview image under a backlit condition usually contains at least one light-source region; the light-source region may be a light source itself or the intense light emitted by a light source, and the gray values of its pixels are higher than those of other regions. The first step is therefore to determine the light-source region: for example, first obtain the pixels in the current preview image whose gray values fall within a preset range, and confirm the light-source region in the preview image according to the coordinates of those pixels; the confirmed light-source region is denoted the first area. Steps S201 to S203 are precisely this process of determining the light-source region.
Step S201: obtain the pixels in the current preview image whose gray values fall within a first preset range, and generate a pixel distribution map according to the coordinates of those pixels.
In this embodiment of the present application, the preview image is first processed to obtain a grayscale image. Owing to differences in light sources and shooting angles under backlit conditions, the gray value of the light-source region does not necessarily approach 255. For example, with cool white daylight as the light source, the light-source region in the grayscale image of a photo taken under a backlit condition may approach 255, whereas with warm white light at night it does not. In general, however, regardless of the shooting environment, the gray values of the light-source region in the grayscale image of a photo taken under a backlit condition tend towards 255. A first preset range can therefore be set, for example 200 to 255; of course, in practical applications, other value ranges may also be set as the first preset range.
After the first preset range is set, the pixels in the current preview image whose gray values fall within it are obtained. As can be seen from Fig. 3, gray values within the first preset range do not occur only near the light source: because of reflections or the colours of objects themselves, such pixels may be distributed all over the preview image, but their number is necessarily greatest near the light source. A pixel distribution map can therefore be generated from the coordinates of the pixels whose gray values fall within the first preset range, and the region where the pixels are most concentrated can be obtained from it.
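Step S201 could be sketched roughly as follows, assuming an RGB preview frame and the 200-255 example range from the text; the luma weights and the function name are illustrative choices, not the patent's.

```python
import numpy as np

# Sketch of step S201: convert the preview frame to grayscale, then
# collect the coordinates of pixels whose gray value falls within the
# first preset range (200-255 here, as in the example from the text).

def bright_pixel_coords(rgb, lo=200, hi=255):
    """Return an (N, 2) array of (row, col) coordinates of bright pixels."""
    # ITU-R BT.601 luma weights, a common RGB-to-gray conversion.
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    mask = (gray >= lo) & (gray <= hi)
    return np.argwhere(mask)  # these coordinates form the distribution map

# Toy frame: black except a bright 2x2 patch at the top-left corner.
frame = np.zeros((4, 4, 3), dtype=np.float64)
frame[0:2, 0:2] = 255.0
coords = bright_pixel_coords(frame)
print(coords.tolist())  # [[0, 0], [0, 1], [1, 0], [1, 1]]
```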
Step S202: slide a sliding window of preset width over the pixel distribution map, obtain the position at which the sliding window contains the most pixels, and record the third gray mean and the coordinate mean of all the pixels contained in the sliding window at that position.
In this embodiment of the present application, the pixel distribution map is in fact a scatter plot, and the region where the pixels are most concentrated can be obtained on the pixel distribution map by a clustering method. This embodiment uses a sliding-window method: a sliding window of preset width is slid over the pixel distribution map from left to right, from right to left, from top to bottom, or from bottom to top. When the sliding window contains the most pixels at some position, that position is recorded, and the gray mean and the coordinate mean of all the pixels contained in the window at that position are calculated; this gray mean is denoted the third gray mean.
In practical applications, the sliding window can be set to a preset width with unlimited length, i.e. a strip-shaped window of finite width and indefinite length, which slides sideways to the right starting from the left side of the pixel distribution map of the preview image, or sideways to the left from the right side; the window may also be rotated by 90 degrees and slid downwards from the top of the pixel distribution map, or upwards from the bottom. Of course, the sliding window may also be a window of preset width and preset length (such as the rectangular window shown in Fig. 3), i.e. a rectangular window of finite length and finite width. For such a rectangular window, the sliding process can be: starting from the upper-left corner of the pixel distribution map, slide the window rightwards with a preset step; after reaching the rightmost side, move down by the preset step and slide from the rightmost side to the leftmost side; after reaching the leftmost side, move down by the preset step and slide from the leftmost side to the rightmost side; and so on, until the sliding ends. The position at which the sliding window contains the most pixels is found, and the gray mean and the coordinate mean of the pixels inside the window at that position are recorded. It should be noted that the above sliding process is only an example; in practice the sliding may start at any position of the pixel distribution map and end at any position, provided that the window, sliding with the preset step, covers the whole pixel distribution map.
As shown in Fig. 3, since a light source is present under a backlit condition, the light-source region is the region of the preview image where pixels with gray values in the first preset range are comparatively concentrated. The position at which the sliding window contains the most pixels is therefore the position of the light-source region, and the point at the coordinate mean determined from that position (the centre point of the rectangular window in Fig. 3) is the centre point of the light-source region.
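The sliding-window search of step S202 might look roughly like this. For simplicity the sketch slides a small square window directly over the grayscale image rather than over a separate scatter plot, and the window size and step are arbitrary assumptions.

```python
import numpy as np

# Sketch of step S202: slide a fixed-size window with a preset step,
# keep the position covering the most bright pixels, and record the
# mean gray value ("third gray mean") and the mean coordinate of the
# bright pixels inside it.

def densest_window(gray, lo=200, hi=255, win=2, step=1):
    """Return (row, col) of the best window, the third gray mean,
    and the coordinate mean of the bright pixels it contains."""
    mask = (gray >= lo) & (gray <= hi)
    h, w = gray.shape
    best = (-1, None)
    for r in range(0, h - win + 1, step):
        for c in range(0, w - win + 1, step):
            count = int(mask[r:r + win, c:c + win].sum())
            if count > best[0]:
                best = (count, (r, c))
    r, c = best[1]
    sub = mask[r:r + win, c:c + win]
    third_gray_mean = float(gray[r:r + win, c:c + win][sub].mean())
    coord_mean = np.argwhere(sub).mean(axis=0) + (r, c)
    return (r, c), third_gray_mean, coord_mean

gray = np.zeros((5, 5))
gray[3:5, 3:5] = 250.0  # simulated light source in the lower right
pos, mean_gray, center = densest_window(gray)
print(pos, mean_gray, center.tolist())  # (3, 3) 250.0 [3.5, 3.5]
```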
Step S203: within a preset distance of the point at the coordinate mean, the region composed of the pixels whose difference from the third gray mean falls within a second preset range is the light-source region, and the light-source region is denoted the first area.
In this embodiment of the present application, as described above, the position at which the sliding window contains the most pixels determines the centre point of the light-source region, and the pixels of the light-source region all have gray values close to its mean and within a certain range. Therefore, the region composed of the pixels that lie within a preset distance of the point at the coordinate mean and whose difference from the third gray mean falls within a second preset range can be taken as the light-source region.
Continuing with Fig. 3 as an example, the light-source region can be the region centred on the point in the rectangular window in Fig. 3; however, how large a range the light-source region covers can be defined manually. Let the third gray mean h be the average value of the pixels of the light-source region, and let the second preset range be 0 to a; then the gray value x of a pixel whose difference from the third gray mean falls within the second preset range satisfies |x - h| <= a, i.e. x is in [h - a, h + a].
In practical applications, the third gray mean is the average of all the pixels in the sliding window at the position where it contains the most pixels. Since the size of the sliding window is set manually, the window size affects the third gray mean: for the same image, a smaller window makes the third gray mean larger, and a larger window makes it smaller. Consequently, when determining the gray range of the light-source region's pixels, the third gray mean is not necessarily the centre of the range. Suppose the third gray mean is 245; the gray value range of the light-source region's pixels can then be set to [245 - a1, 245 + a2], where a1 and a2 may or may not be equal. Assuming a1 = a2 = 5, with the third gray mean h set to the average value of the light-source region's pixels, the second preset range is [0, 5] and the gray value range of the pixels in the light-source region is [240, 250]. In the grayscale image of the preview image, however, not every region whose pixels have gray values in [240, 250] is the light-source region: the region must also be restricted to a circular area centred on the centre point of the rectangular window, and the region composed of the pixels with gray values in [240, 250] within that circular area is the light-source region. The size of this circular area can be limited by a preset value, for example a circular area within a preset distance of the point at the coordinate mean. Of course, instead of being specified manually, the value of this preset distance can also be obtained by calculation: for example, set a step b, take the point at the coordinate mean as the centre, increase the radius of the circular area in increments of b, compute for each radius (r = nb, n being a natural number greater than 0) the mean of the pixels in the corresponding circular area, plot the curve of the pixel mean as a function of the radius, and record the point at which the tangent slope of the curve falls below a preset slope; the radius value corresponding to that point is the preset distance.
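Step S203's two conditions (lying within a preset distance of the centre, and having a gray value within the second preset range of the third gray mean) can be expressed as a single boolean mask. The distance and tolerance values below are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Sketch of step S203: the light-source region (first area) is the set
# of pixels that are both within a preset distance of the window centre
# AND whose gray value differs from the third gray mean by at most `a`.

def light_source_mask(gray, center, third_gray_mean, max_dist=2.0, a=5.0):
    """Boolean mask of the light-source region (the first area)."""
    rows, cols = np.indices(gray.shape)
    dist = np.hypot(rows - center[0], cols - center[1])
    return (dist <= max_dist) & (np.abs(gray - third_gray_mean) <= a)

gray = np.zeros((5, 5))
gray[3:5, 3:5] = 250.0  # the same simulated light source as above
mask = light_source_mask(gray, center=(3.5, 3.5), third_gray_mean=250.0)
print(int(mask.sum()))  # 4 pixels make up the source region
```

Note how the gray-value condition excludes nearby dark pixels even though they fall inside the circle, matching the remark below that the first area is not the whole circular area.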
After the light source region is found in the preview image, it is denoted as the first area; the region of the preview image other than the first area is the second area.
It should be noted that the light source region is not the entire circular area centered at the point at the coordinate mean with radius equal to the preset distance; it is the region, within that circular area, composed of the pixels whose difference from the third gray average lies in the second preset range. Moreover, the light source region does not necessarily contain a light source. For example, under a non-backlight condition, the determined light source region may be the region of white clothing in the preview image. Of course, if the determined light source region is the region of white clothing in the preview image, the difference between the first gray average of the pixels of the first area and the second gray average of the pixels of the second area will be less than or equal to the preset value, so it can still be determined that the current preview image was not acquired under a backlight condition. Even if no face is detected, it may simply be that no face exists in the current preview image.
After the first area and the second area are determined, they are used to determine whether the current condition is a backlight condition. Steps S204 and S205 describe how to determine whether the current shooting environment is a backlight condition according to the first gray average of the pixels of the first area and the second gray average of the pixels of the second area in the preview image.
Step S204: if the difference between the first gray average of the pixels of the first area and the second gray average of the pixels of the second area is greater than a preset value, determine that the current shooting environment is a backlight condition.
Step S205: if the difference between the first gray average of the pixels of the first area and the second gray average of the pixels of the second area is less than or equal to the preset value, determine that the current shooting environment is not a backlight condition.
In the embodiments of the present application, observing the gray histograms under backlight and non-backlight conditions shows that, under a backlight environment, the histogram has many pixels distributed at the very bright and very dark gray levels and relatively few at the intermediate gray levels, whereas under a non-backlight environment the histogram has few pixels at the very bright and very dark gray levels and relatively many at the intermediate gray levels. The light source region determined above is exactly the concentrated area of the pixels at the very bright gray levels. Under a backlight condition, after the first area (the light source region) is deducted, the gray average of the remaining second area is relatively small (there are many pixels in the very dark gray range). Under a non-backlight condition, the first area is very small, and after it is deducted the gray average of the remaining second area is relatively large (there are many pixels in the intermediate gray range). It follows that the difference between the first gray average of the pixels of the first area and the second gray average of the pixels of the second area is larger under a backlight condition than under a non-backlight condition. A preset value can therefore be set: if the difference between the first gray average of the pixels of the first area and the second gray average of the pixels of the second area is greater than the preset value, the current shooting environment is determined to be a backlight condition; if the difference is less than or equal to the preset value, the current shooting environment is determined not to be a backlight condition.
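A minimal sketch of this decision rule (the preset value of 60 gray levels and the synthetic data in the example are assumptions for illustration only):

```python
import numpy as np

def is_backlit(gray, source_mask, preset_value=60.0):
    """Steps S204/S205: compare the first gray average (pixels of the first
    area, i.e. the light source region given by `source_mask`) with the
    second gray average (all remaining pixels, the second area). A gap
    larger than `preset_value` indicates a backlight condition."""
    first_mean = gray[source_mask].mean()    # first gray average
    second_mean = gray[~source_mask].mean()  # second gray average
    return bool(first_mean - second_mean > preset_value)
```

A bright cluster against a dark background triggers the backlight branch; a bright cluster against a moderately lit background does not.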
It should be understood that the serial numbers of the steps in the above embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
Fig. 4 is a schematic block diagram of a mobile terminal provided by an embodiment of the present application. For convenience of description, only the parts relevant to the embodiments of the present application are shown.
The mobile terminal 4 may be a software unit, a hardware unit, or a unit combining software and hardware built into a mobile terminal such as a mobile phone, tablet computer, or notebook, or may be integrated into such a mobile terminal as an independent component.
The mobile terminal 4 includes:
Determining module 41, configured to determine, after the camera of the mobile terminal is started, whether the current condition is a backlight condition;
First detection module 42, configured to detect, if it is a backlight condition, whether a face exists in the preview image of the camera by a first face detection model;
Labeling module 43, configured to mark, if a face is detected in the preview image of the camera by the first face detection model, the face region in the preview image.
Optionally, the mobile terminal 4 further includes:
Second detection module 44, configured to detect, before determining whether the current condition is a backlight condition, whether a face exists in the preview image of the camera by a second face detection model, the detection accuracy of the second face detection model being lower than that of the first face detection model;
The determining module 41 is further configured to determine whether the current condition is a backlight condition if no face is detected in the preview image of the camera by the second face detection model.
Optionally, the determining module 41 includes:
Light source region determination unit 411, configured to obtain the pixels whose gray values are in a first preset range in the current preview image, and determine the light source region in the preview image according to the coordinates of the pixels, the determined light source region being denoted as a first area;
Backlight determination unit 412, configured to determine whether the current shooting environment is a backlight condition according to a first gray average of the pixels of the first area and a second gray average of the pixels of a second area in the preview image, the second area being the region of the preview image other than the first area.
Optionally, the source region determination unit 411 includes:
Distribution map obtaining subunit 4111, configured to generate a pixel distribution map according to the coordinates of the pixels whose gray values are in the first preset range;
Mean value determination subunit 4112, configured to slide a sliding window of a predetermined width over the pixel distribution map, obtain the position at which the sliding window contains the most pixels, and record a third gray average and a coordinate mean of all the pixels contained in the sliding window at that position;
Light source region determination subunit 4113, configured to determine, within a preset distance of the point at the coordinate mean, the region composed of the pixels whose difference from the third gray average is in a second preset range as the light source region.
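In outline, the cooperation of subunits 4111 to 4113 might look as follows; the gray range [240, 250], the window width of 16, and the half-window stride are assumptions chosen for the sketch, not values fixed by the present application:

```python
import numpy as np

def locate_bright_cluster(gray, low=240, high=250, win=16):
    """Slide a win x win window over the map of pixels whose gray value lies
    in [low, high]; return the third gray average and the coordinate mean of
    the pixels inside the most populated window position."""
    mask = (gray >= low) & (gray <= high)       # pixel distribution map
    h, w = gray.shape
    best_count, best = -1, None
    for y in range(0, h - win + 1, win // 2):   # stride = half the window width
        for x in range(0, w - win + 1, win // 2):
            count = mask[y:y + win, x:x + win].sum()
            if count > best_count:
                best_count, best = count, (y, x)
    y, x = best
    sub = mask[y:y + win, x:x + win]
    ys, xs = np.nonzero(sub)
    third_gray_mean = gray[y:y + win, x:x + win][sub].mean()
    coord_mean = (x + xs.mean(), y + ys.mean())  # (mean x, mean y)
    return third_gray_mean, coord_mean, best_count
```

The returned coordinate mean is the point around which the circular area of the preset distance would then be drawn.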
Optionally, the backlight determination unit 412 is further configured to:
if the difference between the first gray average of the pixels of the first area and the second gray average of the pixels of the second area is greater than a preset value, determine that the current shooting environment is a backlight condition;
if the difference between the first gray average of the pixels of the first area and the second gray average of the pixels of the second area is less than or equal to the preset value, determine that the current shooting environment is not a backlight condition.
Optionally, the first face detection model is a MobileNet-SSD convolutional neural network model, and the second face detection model is an HSV skin color model.
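The HSV skin color model is not specified further by the application; one common way to realize such a coarse gate is a fixed threshold on the HSV channels (OpenCV convention, H in [0, 180] and S, V in [0, 255]). The threshold values below are rule-of-thumb assumptions, not values disclosed here:

```python
import numpy as np

def coarse_skin_check(hsv, h_range=(0, 25), s_min=48, v_min=80, min_ratio=0.02):
    """Crude HSV skin-color gate: return True if enough pixels fall in the
    skin range. This stands in for the cheap second face detection model run
    before the more expensive MobileNet-SSD detector is invoked."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    skin = (h >= h_range[0]) & (h <= h_range[1]) & (s >= s_min) & (v >= v_min)
    return bool(skin.mean() >= min_ratio)
```

If this cheap check finds no skin-colored pixels, the backlight determination and the first face detection model can be consulted, matching the two-stage flow of modules 44 and 42.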
It will be clearly understood by those skilled in the art that the division into the above functional units and modules is used as an example only, for convenience and brevity of description. In practical applications, the above functions may be allocated to different functional units or modules as needed; that is, the internal structure of the mobile terminal may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working process of the units and modules in the above apparatus, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Fig. 5 is a schematic block diagram of a mobile terminal provided by another embodiment of the present application. As shown in Fig. 5, the mobile terminal 5 of this embodiment includes one or more processors 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processors 50. The processor 50, when executing the computer program 52, implements the steps of each of the above face detection method embodiments, such as steps S101 to S103 shown in Fig. 1; alternatively, the processor 50, when executing the computer program 52, implements the functions of the modules/units in the above mobile terminal embodiments, such as the functions of modules 41 to 43 shown in Fig. 4.
Illustratively, the computer program 52 may be divided into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments describing the execution process of the computer program 52 in the mobile terminal 5. For example, the computer program 52 may be divided into a determining module, a first detection module, and a labeling module.
The determining module, configured to determine, after the camera of the mobile terminal is started, whether the current condition is a backlight condition;
The first detection module, configured to detect, if it is a backlight condition, whether a face exists in the preview image of the camera by a first face detection model;
The labeling module, configured to mark, if a face is detected in the preview image of the camera by the first face detection model, the face region in the preview image.
Other modules or unit can refer to the description in embodiment shown in Fig. 4, and details are not described herein.
The mobile terminal includes, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will understand that Fig. 5 is only an example of the mobile terminal 5 and does not constitute a limitation on the mobile terminal 5; it may include more or fewer components than illustrated, combine certain components, or have different components. For example, the mobile terminal may also include an input device, an output device, a network access device, a bus, and so on.
The processor 50 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the mobile terminal 5, such as a hard disk or internal memory of the mobile terminal 5. The memory 51 may also be an external storage device of the mobile terminal 5, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the mobile terminal 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the mobile terminal 5. The memory 51 is used to store the computer program and other programs and data required by the mobile terminal, and may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, each embodiment is described with its own emphasis; for a part that is not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed mobile terminal and method may be implemented in other manners. For example, the mobile terminal embodiments described above are merely illustrative: the division of the modules or units is only a logical function division, and there may be other division manners in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the above method embodiments of the present application may also be completed by a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or replace some of the technical features with equivalents; such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.

Claims (10)

1. A face detection method, applied to a mobile terminal, the method comprising:
after the camera of the mobile terminal is started, determining whether the current condition is a backlight condition;
if it is a backlight condition, detecting, by a first face detection model, whether a face exists in the preview image of the camera;
if a face is detected in the preview image of the camera by the first face detection model, marking the face region in the preview image.
2. The face detection method according to claim 1, further comprising, before determining whether the current condition is a backlight condition:
detecting, by a second face detection model, whether a face exists in the preview image of the camera, the detection accuracy of the second face detection model being lower than that of the first face detection model;
if no face is detected in the preview image of the camera by the second face detection model, determining whether the current condition is a backlight condition.
3. The face detection method according to claim 1 or 2, wherein determining whether the current condition is a backlight condition comprises:
obtaining the pixels whose gray values are in a first preset range in the current preview image, and determining the light source region in the preview image according to the coordinates of the pixels, the determined light source region being denoted as a first area;
determining whether the current shooting environment is a backlight condition according to a first gray average of the pixels of the first area and a second gray average of the pixels of a second area in the preview image, the second area being the region of the preview image other than the first area.
4. The face detection method according to claim 3, wherein determining the light source region in the preview image according to the coordinates of the pixels comprises:
generating a pixel distribution map according to the coordinates of the pixels whose gray values are in the first preset range;
sliding a sliding window of a predetermined width over the pixel distribution map, obtaining the position at which the sliding window contains the most pixels, and recording a third gray average and a coordinate mean of all the pixels contained in the sliding window at that position;
determining, within a preset distance of the point at the coordinate mean, the region composed of the pixels whose difference from the third gray average is in a second preset range as the light source region.
5. The face detection method according to claim 3, wherein determining whether the current shooting environment is a backlight condition according to the first gray average of the pixels of the first area and the second gray average of the pixels of the second area in the preview image comprises:
if the difference between the first gray average of the pixels of the first area and the second gray average of the pixels of the second area is greater than a preset value, determining that the current shooting environment is a backlight condition;
if the difference between the first gray average of the pixels of the first area and the second gray average of the pixels of the second area is less than or equal to the preset value, determining that the current shooting environment is not a backlight condition.
6. The face detection method according to claim 1 or 2, wherein the first face detection model is a MobileNet-SSD convolutional neural network model;
the second face detection model is an HSV skin color model.
7. A mobile terminal, comprising:
a determining module, configured to determine, after the camera of the mobile terminal is started, whether the current condition is a backlight condition;
a first detection module, configured to detect, if it is a backlight condition, whether a face exists in the preview image of the camera by a first face detection model;
a labeling module, configured to mark, if a face is detected in the preview image of the camera by the first face detection model, the face region in the preview image.
8. The mobile terminal according to claim 7, further comprising:
a second detection module, configured to detect, before determining whether the current condition is a backlight condition, whether a face exists in the preview image of the camera by a second face detection model;
the determining module being further configured to determine whether the current condition is a backlight condition if no face is detected in the preview image of the camera by the second face detection model.
9. A mobile terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
10. A computer readable storage medium storing a computer program, wherein the computer program, when executed by one or more processors, implements the steps of the method according to any one of claims 1 to 6.
CN201810530284.3A 2018-05-29 2018-05-29 Face detection method, mobile terminal and computer readable storage medium Active CN108764139B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810530284.3A CN108764139B (en) 2018-05-29 2018-05-29 Face detection method, mobile terminal and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN108764139A true CN108764139A (en) 2018-11-06
CN108764139B CN108764139B (en) 2021-01-29

Family

ID=64003304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810530284.3A Active CN108764139B (en) 2018-05-29 2018-05-29 Face detection method, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108764139B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886079A (en) * 2018-12-29 2019-06-14 杭州电子科技大学 A kind of moving vehicles detection and tracking method
CN111144215A (en) * 2019-11-27 2020-05-12 北京迈格威科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111223549A (en) * 2019-12-30 2020-06-02 华东师范大学 Mobile end system and method for disease prevention based on posture correction
CN112257503A (en) * 2020-09-16 2021-01-22 深圳微步信息股份有限公司 Sex age identification method, device and storage medium
CN112866581A (en) * 2021-01-18 2021-05-28 盛视科技股份有限公司 Camera automatic exposure compensation method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080089560A1 (en) * 2006-10-11 2008-04-17 Arcsoft, Inc. Known face guided imaging method
CN103810695A (en) * 2012-11-15 2014-05-21 浙江大华技术股份有限公司 Light source positioning method and device
CN106161967A (en) * 2016-09-13 2016-11-23 维沃移动通信有限公司 A kind of backlight scene panorama shooting method and mobile terminal
CN106331510A (en) * 2016-10-31 2017-01-11 维沃移动通信有限公司 Backlight photographing method and mobile terminal
CN107085718A (en) * 2017-05-25 2017-08-22 广东欧珀移动通信有限公司 Method for detecting human face and device, computer equipment, computer-readable recording medium



Also Published As

Publication number Publication date
CN108764139B (en) 2021-01-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant