CN107862265A - Image processing method and related product
- Publication number: CN107862265A
- Application number: CN201711030272.6A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06V10/993—Evaluation of the quality of the acquired pattern
Abstract
Embodiments of the present application disclose an image processing method and a related product. The method includes: obtaining a first facial image; obtaining an environment parameter; and performing image processing on the first facial image according to the environment parameter to obtain a second facial image, where the second facial image is used for facial recognition. According to the environment parameter, embodiments of the present application can further refine a facial image and improve facial image acquisition quality. Such a facial image is more conducive to facial recognition, thereby improving the facial recognition success rate.
Description
Technical field
The present application relates to the field of mobile terminal technologies, and in particular, to an image processing method and a related product.
Background
With the widespread popularization and application of mobile terminals (mobile phones, tablet computers, and the like), the number of applications that mobile terminals can support keeps growing, their functions become increasingly powerful, and mobile terminals develop in diversified and personalized directions, becoming indispensable electronic devices in users' lives.
At present, facial recognition is increasingly favored by mobile terminal manufacturers. However, the complexity of the environment (for example, a dark environment or a bright environment) can reduce the facial recognition success rate to a certain extent. Therefore, how to improve facial image acquisition quality so as to improve the facial recognition success rate is a problem to be solved urgently.
Summary of the invention
Embodiments of the present application provide an image processing method and a related product, which can improve facial image acquisition quality and thereby improve the facial recognition success rate.
In a first aspect, an embodiment of the present application provides a mobile terminal, including an application processor (AP), and a facial recognition device and an environment sensor that are connected to the AP, where:
the facial recognition device is configured to obtain a first facial image;
the environment sensor is configured to obtain an environment parameter; and
the AP is configured to perform image processing on the first facial image according to the environment parameter to obtain a second facial image, where the second facial image is used for facial recognition.
In a second aspect, an embodiment of the present application provides an image processing method, applied to a mobile terminal including an application processor (AP), and a facial recognition device and an environment sensor that are connected to the AP. The method includes:
obtaining, by the facial recognition device, a first facial image;
obtaining, by the environment sensor, an environment parameter; and
performing, by the AP, image processing on the first facial image according to the environment parameter to obtain a second facial image, where the second facial image is used for facial recognition.
In a third aspect, an embodiment of the present application provides an image processing method, including:
obtaining a first facial image;
obtaining an environment parameter; and
performing image processing on the first facial image according to the environment parameter to obtain a second facial image, where the second facial image is used for facial recognition.
In a fourth aspect, an embodiment of the present application provides an image processing apparatus, including:
a first obtaining unit, configured to obtain a first facial image;
a second obtaining unit, configured to obtain an environment parameter; and
a processing unit, configured to perform image processing on the first facial image according to the environment parameter to obtain a second facial image, where the second facial image is used for facial recognition.
In a fifth aspect, an embodiment of the present application provides a mobile terminal, including an application processor (AP), a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the AP, and the programs include instructions for some or all of the steps described in the third aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium is configured to store a computer program, and the computer program causes a computer to perform instructions for some or all of the steps described in the third aspect of the embodiments of the present application.
In a seventh aspect, an embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in the third aspect of the embodiments of the present application. The computer program product may be a software installation package.
Implementing the embodiments of the present application provides the following beneficial effects:
It can be seen that, according to the image processing method and related product described in the embodiments of the present application, a first facial image is obtained, an environment parameter is obtained, and image processing is performed on the first facial image according to the environment parameter to obtain a second facial image, where the second facial image is used for facial recognition. A facial image can thus be further refined according to the environment parameter, improving facial image acquisition quality. Such a facial image is more conducive to facial recognition, thereby improving the facial recognition success rate.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the following briefly describes the accompanying drawings required in the description of the embodiments or the prior art. Apparently, the accompanying drawings in the following description show merely some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Figure 1A is an architectural diagram of an example mobile terminal provided by an embodiment of the present application;
Figure 1B is a schematic structural diagram of a mobile terminal provided by an embodiment of the present application;
Figure 1C is a schematic flowchart of an image processing method disclosed in an embodiment of the present application;
Figure 2 is a schematic flowchart of another image processing method disclosed in an embodiment of the present application;
Figure 3 is another schematic structural diagram of a mobile terminal provided by an embodiment of the present application;
Figure 4A is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application;
Figure 4B is a schematic structural diagram of the processing unit of the image processing apparatus depicted in Figure 4A;
Figure 4C is a schematic structural diagram of the first obtaining unit of the image processing apparatus depicted in Figure 4A;
Figure 4D is another schematic structural diagram of the image processing apparatus depicted in Figure 4A;
Figure 4E is another schematic structural diagram of the image processing apparatus depicted in Figure 4A;
Figure 5 is a schematic structural diagram of another mobile terminal disclosed in an embodiment of the present application.
Detailed description of the embodiments
To enable a person skilled in the art to better understand the solutions of the present application, the following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the specification, claims, and accompanying drawings of the present application are used to distinguish different objects rather than to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes unlisted steps or units, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. A person skilled in the art understands, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The mobile terminal involved in the embodiments of the present application may include various handheld devices with a wireless communication function, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like. For convenience of description, the devices mentioned above are collectively referred to as mobile terminals.
The embodiments of the present application are described in detail below. As shown in Figure 1A, in an example mobile terminal 1000, the facial recognition device of the mobile terminal 1000 may be a camera module 21. The camera module may be a single camera, for example, a visible-light camera or an infrared camera. Alternatively, the camera module may be a dual camera, in which one camera may be a visible-light camera and the other an infrared camera, or both may be visible-light cameras. The camera module 21 may be a front camera or a rear camera.
Referring to Figure 1B, Figure 1B is a schematic structural diagram of a mobile terminal 100. The mobile terminal 100 includes an application processor (AP) 110, a facial recognition device 120, and an environment sensor 160, where the AP 110 is connected to the facial recognition device 120 and the environment sensor 160 through a bus 150. The environment sensor may be used to detect an environment parameter and may be at least one of the following: a breath detection sensor, an ambient light sensor, an electromagnetic detection sensor, an ambient color temperature detection sensor, a positioning sensor, a temperature sensor, a humidity sensor, and the like.
The mobile terminal described in Figure 1A to Figure 1B may be used to implement the following functions:
the facial recognition device 120 is configured to obtain a first facial image;
the environment sensor 160 is configured to obtain an environment parameter; and
the AP 110 is configured to perform image processing on the first facial image according to the environment parameter to obtain a second facial image, where the second facial image is used for facial recognition.
In a possible example, in the aspect of performing image processing on the first facial image according to the environment parameter, the AP 110 is specifically configured to:
obtain a first property parameter of the first facial image, where the first property parameter is an image quality characterization parameter of the first facial image;
obtain a second property parameter;
determine a difference parameter between the first property parameter and the second property parameter;
determine, according to the environment parameter, an adjustment parameter corresponding to the difference parameter; and
perform image processing on the first facial image according to the adjustment parameter.
In a possible example, in the aspect of obtaining the first facial image, the facial recognition device 120 is specifically configured to:
focus on a face to obtain a target image; and
perform matting processing on the target image to obtain a face region image, and use the face region image as the first facial image.
In a possible example, the AP 110 is further specifically configured to:
perform image quality evaluation on the first facial image to obtain an image quality evaluation value, and when the image quality evaluation value is lower than a preset image quality threshold, cause the environment sensor 160 to perform the step of obtaining the environment parameter.
In a possible example, the AP 110 is further specifically configured to:
detect whether the environment parameter meets a preset condition, and when the environment parameter does not meet the preset condition, perform the step of performing image processing on the first facial image according to the environment parameter.
The mobile terminal described in Figure 1A to Figure 1B may also be used to perform an image processing method as follows:
the facial recognition device 120 obtains a first facial image;
the environment sensor 160 obtains an environment parameter; and
the AP 110 performs image processing on the first facial image according to the environment parameter to obtain a second facial image, where the second facial image is used for facial recognition.
It can be seen that, according to the image processing method described in this embodiment of the present application, a first facial image is obtained, an environment parameter is obtained, and image processing is performed on the first facial image according to the environment parameter to obtain a second facial image used for facial recognition. The facial image can thus be further refined according to the environment parameter, improving facial image acquisition quality. Such a facial image is more conducive to facial recognition, thereby improving the facial recognition success rate.
Based on the mobile terminal described in Figure 1A to Figure 1B, refer to Figure 1C, which is a schematic flowchart of an embodiment of an image processing method provided by an embodiment of the present application. The image processing method described in this embodiment may include the following steps:
101. Obtain a first facial image.
The first facial image may be obtained by focusing on a face and shooting, and may be an image containing a face region. In this way, in this embodiment of the present application, one part of the first facial image may be a face region image and another part may be a background image.
Before step 101, the following steps may be included:
A1. Obtain a target ambient light parameter.
A2. Determine a target shooting parameter corresponding to the target ambient light parameter.
Then, step 101 of obtaining the first facial image may be implemented as follows:
shooting the face according to the target shooting parameter to obtain the first facial image.
The target ambient light parameter may be detected by an environment sensor. The environment sensor may be used to detect an ambient light parameter, which may be ambient brightness or ambient color temperature, and the environment sensor may be at least one of the following: an ambient light sensor and an ambient color temperature detection sensor.
Further, a correspondence between shooting parameters and ambient light parameters may be pre-stored in the mobile terminal, and the target shooting parameter corresponding to the target ambient light parameter is then determined according to the correspondence. The shooting parameters may include, but are not limited to: focal length, exposure time, aperture size, exposure mode, sensitivity (ISO), white balance parameter, and the like. In this way, an image that is optimal in the current environment can be obtained.
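The pre-stored correspondence described in steps A1 and A2 can be sketched as a simple table lookup. This is an illustrative sketch only; the brightness bands, parameter names, and values below are invented for the example and are not part of the disclosure.

```python
# Illustrative correspondence between ambient brightness bands (in lux) and
# pre-stored shooting parameter groups; all values are invented examples.
AMBIENT_TO_SHOOTING = [
    (50,    {"iso": 800, "exposure_ms": 100, "white_balance": "incandescent"}),
    (500,   {"iso": 400, "exposure_ms": 33,  "white_balance": "auto"}),
    (10000, {"iso": 100, "exposure_ms": 8,   "white_balance": "daylight"}),
]

def target_shooting_parameters(ambient_lux):
    """Return the shooting parameter group whose brightness band contains ambient_lux."""
    for max_lux, params in AMBIENT_TO_SHOOTING:
        if ambient_lux <= max_lux:
            return params
    return AMBIENT_TO_SHOOTING[-1][1]  # brighter than every band: use the last group

print(target_shooting_parameters(30)["iso"])    # 800 (dim scene: high ISO)
print(target_shooting_parameters(2000)["iso"])  # 100 (bright scene: low ISO)
```

A real terminal would key the table on whatever ambient light parameter its sensor reports (brightness or color temperature) and store camera-specific values.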
Optionally, in step 101, obtaining the first facial image may include the following steps:
B1. Shoot the face according to a preset shooting parameter set to obtain N facial images, where the preset shooting parameter set includes N groups of shooting parameters, the N facial images correspond one-to-one to the N groups of shooting parameters, and N is an integer greater than 1.
B2. Perform image quality evaluation on the N facial images to obtain N image quality evaluation values.
B3. Select, from the N image quality evaluation values, the facial image corresponding to the maximum image quality evaluation value as the first facial image.
The shooting parameters may include, but are not limited to: focal length, exposure time, aperture size, exposure mode, sensitivity (ISO), white balance parameter, and the like. The preset shooting parameter set may be pre-saved in a memory and may include N groups of shooting parameters, where N is an integer greater than 1. In this way, the face can be shot with each group of shooting parameters in the preset shooting parameter set to obtain N facial images, image quality evaluation can be performed on the N facial images to obtain N image quality evaluation values, and the facial image corresponding to the maximum image quality evaluation value can be selected as the first facial image. Thus, the facial image best suited to the environment can be filtered out through different shooting parameters, which helps improve the accuracy of determining the face region image in the target facial image.
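Steps B1 to B3 amount to a capture-score-select loop. A minimal sketch follows, with the camera driver and the quality function of step B2 replaced by stand-ins:

```python
def best_face_image(parameter_groups, capture, quality):
    """B1-B3: capture one image per parameter group and keep the highest-scoring one."""
    images = [capture(group) for group in parameter_groups]  # B1: N facial images
    scores = [quality(image) for image in images]            # B2: N evaluation values
    return images[scores.index(max(scores))]                 # B3: maximum-value image

# Toy stand-ins: an "image" is just a string and its "quality" is its length.
groups = [{"iso": 100}, {"iso": 400}, {"iso": 800}]
fake_capture = lambda group: "x" * group["iso"]
best = best_face_image(groups, fake_capture, quality=len)
print(len(best))  # 800
```

In a real implementation, `capture` would drive the camera with one group of shooting parameters and `quality` would be the weighted index evaluation described under step B2.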
In step B2, performing image quality evaluation on the N facial images may be implemented as follows:
performing image quality evaluation on each of the N facial images using at least one image quality evaluation index, thereby obtaining N image quality evaluation values.
When a facial image is evaluated, multiple image quality evaluation indices may be used, and each image quality evaluation index corresponds to a weight. In this way, each image quality evaluation index applied to the facial image yields one evaluation result, and finally the results are weighted to obtain the final image quality evaluation value. The image quality evaluation indices may include, but are not limited to: mean, standard deviation, entropy, sharpness, signal-to-noise ratio, and the like.
It should be noted that evaluating image quality with a single evaluation index has certain limitations; therefore, image quality may be evaluated using multiple image quality evaluation indices. Of course, when evaluating image quality, more indices are not necessarily better: the more indices there are, the higher the computational complexity of the evaluation process, without necessarily a better evaluation effect. Therefore, in situations with higher requirements on image quality evaluation, image quality may be evaluated using 2 to 10 image quality evaluation indices. The specific number of indices and which indices to choose depend on the specific implementation situation. Of course, the indices must also be selected in combination with the specific scene: the indices selected for image quality evaluation in a dark environment and in a bright environment may differ.
Optionally, when the precision requirement for image quality evaluation is not high, a single image quality evaluation index may be used, for example, evaluating the image to be processed with entropy. It may be considered that the larger the entropy, the better the image quality; conversely, the smaller the entropy, the worse the image quality.
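As a concrete illustration of the entropy index mentioned above, the sketch below computes the Shannon entropy of a gray-level distribution on a plain list of pixel values; a real implementation would operate on the image's gray-level histogram.

```python
import math
from collections import Counter

def gray_entropy(pixels):
    """Shannon entropy (in bits) of the gray-level distribution of the pixels."""
    counts = Counter(pixels)
    n = len(pixels)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

flat = [128] * 16          # a single gray level: minimal information content
varied = list(range(16))   # 16 equally frequent levels: maximal information
print(gray_entropy(flat))    # 0.0
print(gray_entropy(varied))  # 4.0 (log2 of 16 levels)
```

Consistent with the text, the flat patch (low quality, no detail) scores 0 while the varied patch scores the maximum possible for 16 levels.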
Optionally, when the precision requirement for image quality evaluation is higher, multiple image quality evaluation indices may be used to evaluate the image. When multiple indices perform image quality evaluation on the image, a weight may be set for each index; multiple image quality evaluation values are obtained, and the final image quality evaluation value is obtained from these values and their corresponding weights. For example, suppose three image quality evaluation indices are index A, index B, and index C, with weights a1, a2, and a3 respectively. When A, B, and C perform image quality evaluation on an image, the evaluation value corresponding to A is b1, that corresponding to B is b2, and that corresponding to C is b3; then the final image quality evaluation value = a1·b1 + a2·b2 + a3·b3. In general, the larger the image quality evaluation value, the better the image quality.
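The weighted combination in the example above (a1·b1 + a2·b2 + a3·b3) generalizes to any number of indices. The index names, weights, and scores below are illustrative only:

```python
def combined_quality(scores, weights):
    """Final evaluation value as the weighted sum of per-index evaluation values."""
    return sum(weights[name] * scores[name] for name in weights)

weights = {"entropy": 0.5, "sharpness": 0.3, "snr": 0.2}   # a1, a2, a3
scores  = {"entropy": 6.8, "sharpness": 0.9, "snr": 25.0}  # b1, b2, b3
value = combined_quality(scores, weights)
print(round(value, 2))  # 8.67 (= 0.5*6.8 + 0.3*0.9 + 0.2*25.0)
```

Note that the raw index values here live on different scales; in practice one would normalize each index before weighting so that no single index dominates the sum.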
Optionally, in step 101, obtaining the first facial image may include the following steps:
C1. Focus on a face to obtain a target image.
C2. Perform matting processing on the target image to obtain a face region image, and use the face region image as the first facial image.
The face may be focused on to obtain a target image. The target image contains not only the face region but may also contain a background region; therefore, matting processing may be performed on the target image to obtain the face region image, and the face region image is used as the first facial image.
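For illustration only, step C2's matting can be reduced to cropping a face bounding box out of the focused target image; real matting would separate the face from the background per pixel, and the box coordinates here are invented.

```python
def crop_face_region(image, box):
    """Return the sub-image inside box = (top, left, height, width)."""
    top, left, height, width = box
    return [row[left:left + width] for row in image[top:top + height]]

image = [[col for col in range(8)] for _ in range(8)]  # toy 8x8 "target image"
face_region = crop_face_region(image, (2, 2, 4, 4))    # hypothetical face box
print(len(face_region), len(face_region[0]))  # 4 4
```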
102. Obtain an environment parameter.
The environment parameter may include at least one of the following: an ambient light parameter, a geographic location parameter, a camera parameter, a shooting distance parameter, an environmental magnetic field interference coefficient, a weather condition parameter, an environmental background parameter, and a person characteristic parameter. The environment parameter may be collected by an environment sensor. The ambient light parameter may be ambient brightness or ambient color temperature. The camera parameter may be at least one of the following: motor vibration amplitude, lens dirt degree, lens breakage degree, lens attribute parameters (for example, aperture size, field-of-view angle, etc.), and the like. The environmental background parameter may be at least one of the following: desert, ocean, snow scene, and the like. The person characteristic parameter may include at least one of the following: skin color, whether glasses, jewelry, or scars are present, and the like. The geographic location parameter may be understood as a specific geographic location. The weather condition parameter may be at least one of the following: temperature, humidity, wind direction, and weather type (for example, cloudy, rainy, sunny, partly cloudy, etc.). Taking temperature as an example, the performance of the camera also differs at different temperatures.
Optionally, between step 101 and step 102, the following step may also be included:
performing image quality evaluation on the first facial image to obtain an image quality evaluation value, and when the image quality evaluation value is lower than a preset image quality threshold, performing the step of obtaining the environment parameter.
The preset image quality threshold may be set by the user or defaulted by the system. For the specific implementation of performing image quality evaluation on the first facial image to obtain the image quality evaluation value, refer to step B2 above; details are not repeated here.
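The optional gate between steps 101 and 102 can be sketched as a simple threshold check; the threshold value below is invented for illustration:

```python
PRESET_QUALITY_THRESHOLD = 0.6  # hypothetical preset image quality threshold

def needs_environment_processing(image_quality_value):
    """Only proceed to obtain the environment parameter for low-quality images."""
    return image_quality_value < PRESET_QUALITY_THRESHOLD

print(needs_environment_processing(0.4))  # True:  obtain the environment parameter
print(needs_environment_processing(0.9))  # False: the image is already good enough
```

The point of the gate is efficiency: an image that already scores above the threshold skips the sensor read and the corrective processing entirely.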
103. Perform image processing on the first facial image according to the environment parameter to obtain a second facial image, where the second facial image is used for facial recognition.
Under different environment parameters, facial images exhibit different characteristics. For example, in a dark environment, a facial image appears dark and its mean gray value is small; in a bright environment, a facial image appears bright and its mean gray value is large; and in a foggy environment, a facial image appears blurred. Therefore, a certain degree of processing needs to be performed on the facial image for different environment parameters, so that the processed facial image is more conducive to facial recognition, thereby achieving the purpose of improving the facial recognition success rate.
Optionally, in step 103, performing image processing on the first facial image according to the environment parameter may include the following steps:
31. Obtain a first property parameter of the first facial image, where the first property parameter is an image quality characterization parameter of the first facial image.
32. Obtain a second property parameter.
33. Determine a difference parameter between the first property parameter and the second property parameter.
34. Determine, according to the environment parameter, an adjustment parameter corresponding to the difference parameter.
35. Perform image processing on the first facial image according to the adjustment parameter.
The first property parameter and the second property parameter may each include at least one of the following: feature point count, feature point distribution density, mean gray value, mean square deviation, information entropy, and the like. The feature point count is the total number of feature points in the image; the feature point distribution density is the number of feature points per unit area; the mean square deviation is the mean square deviation of the image; and the information entropy is the information entropy of the image. Once the first facial image is determined, its first property parameter is also determined, and the first property parameter can reflect, from one perspective, the image quality of the first facial image. The second property parameter is an expected property parameter, that is, the degree to which the user needs the facial image to be adjusted; it may be set by the user or defaulted by the system. Further, the difference parameter between the first property parameter and the second property parameter can be determined, and the difference parameter determines the adjustment direction for the first facial image, for example, whether to perform night-vision enhancement, or to darken an overexposed image. A mapping relationship between adjustment parameters and property parameters may be pre-stored; in turn, the adjustment parameter corresponding to the difference parameter can be determined according to the mapping relationship, and image processing is performed on the first facial image according to the adjustment parameter to obtain the second facial image, which is used for facial recognition. The adjustment parameter may include at least one of the following: gray-scale stretching parameter, sharpening parameter, filling parameter, defogging parameter, and the like. The gray-scale stretching parameter may be understood as a control parameter for performing gray-scale stretching on the image; the sharpening parameter may be understood as a control parameter for performing sharpening processing on the image; the defogging parameter may be understood as a control parameter for performing defogging processing on the image; and the filling parameter may be understood as a control parameter for filling the image. For example, if the lens is broken, a stripe is left in the facial image, and the stripe can be filled.
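Steps 31 to 35 can be sketched with the mean gray value as the single property parameter. The thresholds, environment labels, and adjustment values below are invented and only illustrate the difference-then-lookup structure:

```python
# Step 34's mapping from (adjustment direction, environment) to an
# adjustment parameter; every entry is an invented example value.
ADJUSTMENT_MAP = {
    ("brighten", "dark"):   {"gray_stretch": 1.6},
    ("brighten", "normal"): {"gray_stretch": 1.2},
    ("darken",   "bright"): {"gray_stretch": 0.8},
}

def choose_adjustment(first_mean_gray, expected_mean_gray, environment):
    difference = expected_mean_gray - first_mean_gray        # step 33
    if difference > 0:
        direction = "brighten"   # image darker than expected
    elif difference < 0:
        direction = "darken"     # image brighter than expected (overexposed)
    else:
        return None              # already at the expected value
    return ADJUSTMENT_MAP.get((direction, environment), {"gray_stretch": 1.0})

print(choose_adjustment(60, 120, "dark"))     # {'gray_stretch': 1.6}
print(choose_adjustment(200, 120, "bright"))  # {'gray_stretch': 0.8}
```

Step 35 would then apply the looked-up parameter (here, a gray-scale stretch factor) to the first facial image to produce the second facial image.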
Optionally, after step 103, the following step may be performed:
matching the second facial image with a preset face template, and performing an unlock operation when the second facial image successfully matches the preset face template.
The preset face template may be pre-saved in the terminal.
Further, matching the second facial image with the preset face template may include the following steps:
D1. Select a target region in the second facial image whose sharpness meets a preset requirement, and perform feature point extraction on the target region to obtain a first feature point set.
D2. Extract the peripheral contour of the second facial image to obtain a first contour.
D3. Match the first contour with a second contour of the preset face template, and match the first feature point set with the preset face template.
D4. When the first contour successfully matches the second contour of the preset face template and the first feature point set successfully matches the preset face template, confirm that the match is successful; when the first contour fails to match the second contour of the preset face template, or the first feature point set fails to match the preset face template, confirm that the match fails.
In this embodiment of the application, a target region can be selected from the second face image. Since the features collected from the target region are complete, face recognition efficiency is improved. On the other hand, because the target region is only a sub-region, an accidental match may occur, or the recognition region may be too small; therefore, contour extraction is also performed on the second face image to obtain the first contour. In the matching process, the feature points of the target region are matched with the preset face template, and at the same time the first contour is matched with the preset face template. Only when both match is the match confirmed as successful; if either one fails, the match fails. In this way, matching speed and security are guaranteed while the success rate is ensured.
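A minimal sketch of the dual check in step D4 above (the similarity scores and thresholds stand in for the unspecified feature-point and contour matchers, and are assumptions for illustration):

```python
def match_face(feature_similarity, contour_similarity,
               feature_threshold=0.8, contour_threshold=0.8):
    """Step D4: confirm the match only when BOTH the feature point set
    and the outer contour clear their thresholds; if either check
    fails, the overall match fails.  Thresholds are illustrative.
    """
    feature_ok = feature_similarity >= feature_threshold
    contour_ok = contour_similarity >= contour_threshold
    return feature_ok and contour_ok

print(match_face(0.9, 0.85))  # True: both checks pass
print(match_face(0.9, 0.5))   # False: contour check fails
```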
Optionally, the sharpness above can also be defined by the number of feature points; after all, the clearer an image is, the more feature points it contains. The preset requirement is then: the number of feature points is greater than a preset number threshold, where the preset number threshold can be set by the user or defaulted by the system. Step D1 above can then be implemented as follows: determining, as the target region, the region of the second face image in which the number of feature points is greater than the preset number threshold.
Optionally, the sharpness above can be calculated by a specific formula, which is described in the related art and is not repeated here. The preset requirement is then: the sharpness value is greater than a preset sharpness threshold, where the preset sharpness threshold can be set by the user or defaulted by the system. Step D1 above can then be implemented as follows: determining, as the target region, the region of the second face image in which the sharpness value is greater than the preset sharpness threshold.
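The two variants of step D1 above share the same shape: score each candidate region, then keep those above a threshold. A hedged sketch, assuming regions are given as name-to-score pairs where the score is either a feature-point count or a sharpness value:

```python
def select_target_regions(region_scores, threshold):
    """Step D1 (both variants): keep every region whose score --
    feature-point count or sharpness value -- exceeds the threshold.

    `region_scores` maps a region identifier to its score; the region
    names and the threshold below are illustrative assumptions.
    """
    return [name for name, score in region_scores.items() if score > threshold]

# Feature-point-count variant with a preset number threshold of 50.
regions = {"eyes": 120, "cheek": 30, "mouth": 75}
print(select_target_regions(regions, 50))  # ['eyes', 'mouth']
```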
In addition, the feature extraction above can be implemented by algorithms such as Harris corner detection, the scale-invariant feature transform (SIFT), or SUSAN corner detection, which are not repeated here. The contour extraction in step D2 above can use algorithms such as the Hough transform, Haar, or Canny.
Optionally, before the second face image is matched with the preset face template, the following step may also be included:
performing image enhancement processing on the second face image.
Matching the second face image with the preset face template can then be implemented as follows:
matching the second face image after the image enhancement processing with the preset face template.
The image enhancement processing above may include, but is not limited to: image denoising (for example, wavelet-transform denoising), image restoration (for example, Wiener filtering), and night-vision enhancement algorithms (for example, histogram equalization, gray-scale stretching, etc.). After image enhancement processing is performed on the second face image, the quality of the second face image can be improved to some extent.
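As one concrete illustration of the enhancement step, histogram equalization can be sketched in a few lines of plain Python (a 1-D list of 8-bit gray values is assumed for brevity; a real implementation would operate on a 2-D image):

```python
def histogram_equalize(pixels, levels=256):
    """Spread the gray-level histogram of `pixels` across [0, levels-1]
    so that a low-contrast image uses the full dynamic range.
    Classic CDF-based formulation for 8-bit gray values.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the histogram.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:  # flat image: nothing to stretch
        return list(pixels)
    # Map each gray level through the normalized CDF.
    return [round((cdf[p] - cdf_min) * (levels - 1) / (n - cdf_min))
            for p in pixels]

# Four mid-gray pixels get stretched toward the full range.
print(histogram_equalize([100, 100, 101, 102]))  # [0, 0, 128, 255]
```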
Optionally, the unlocking operation above can correspond to at least one of the following situations. For example, when the mobile terminal is in a screen-off state, the unlocking operation can light up the screen and enter the home page of the mobile terminal, or a specified page; when the mobile terminal is in a screen-on state, the unlocking operation can enter the home page of the mobile terminal, or a specified page; when the mobile terminal is on the unlocking page of an application, the unlocking operation can complete the unlocking and enter the page after unlocking — for example, the mobile terminal may be on a payment page, and the unlocking operation can complete the payment. The specified page above can be at least one of the following: the page of an application, or a page specified by the user.
In a specific implementation, for example, the first face image can be collected by the camera, and multi-dimensional environment parameters can further be collected, such as: light (strong light), environment background parameters (surroundings such as a forest or crowd size; weather parameters such as snow scenes, temperature, humidity, etc.), camera parameters (such as motor jitter amplitude, grease or water covering the lens, a crushed lens), and character feature parameters (for example, skin color, beard, glasses, jewelry, scars, etc.). Image processing is then performed on the first face image according to the environment parameters, for example: highlighting the foreground subject when the crowd is large, performing sharpening of different degrees according to the jitter information, adjusting the color-gamut curve according to the light intensity, performing saturation processing according to the humidity and temperature, performing defogging optimization when there is fog, performing matting compensation for jewelry, and so on. The second face image obtained in this way meets the image quality requirements of face unlocking, which helps to improve the face recognition success rate.
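The per-condition processing in the paragraph above amounts to a dispatch from collected environment parameters to adjustment operations. A hedged sketch — the parameter names, thresholds, and operation labels are all assumptions for illustration:

```python
def select_adjustments(env):
    """Map collected environment parameters to a list of image
    processing operations, mirroring the examples in the text:
    jitter -> sharpening, fog -> defogging, strong light -> color
    gamut adjustment, large crowd -> foreground highlighting.
    """
    ops = []
    if env.get("jitter_amplitude", 0) > 0.5:
        ops.append("sharpen")
    if env.get("foggy"):
        ops.append("defog")
    if env.get("light_intensity", 0) > 10000:  # lux, illustrative
        ops.append("adjust_color_gamut")
    if env.get("crowd_size", 0) > 5:
        ops.append("highlight_foreground")
    return ops

env = {"jitter_amplitude": 0.8, "foggy": True, "crowd_size": 2}
print(select_adjustments(env))  # ['sharpen', 'defog']
```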
It can be seen that, in the image processing method described in this embodiment of the application, a first face image is obtained, an environment parameter is obtained, image processing is performed on the first face image according to the environment parameter, and a second face image used for face recognition is obtained. The face image can thus be further refined according to the environment parameter, which improves the face image collection quality; such a face image is conducive to face recognition, thereby improving the face recognition success rate.
Consistent with the above, Fig. 2 is a schematic flowchart of an embodiment of an image processing method provided by an embodiment of the application. The image processing method described in this embodiment may include the following steps:
201. Obtain a first face image.
202. Obtain an environment parameter.
203. Detect whether the environment parameter meets a preset condition.
The environment parameter can include multiple parameters, and each parameter can correspond to one preset condition. The preset condition can be understood as the condition that a parameter satisfies when the image quality of the face image meets requirements, for example: the brightness is within a preset brightness range (the preset brightness range can be set by the user or defaulted by the system), the color temperature is within a preset color temperature range (the preset color temperature range can be set by the user or defaulted by the system), the distance between the face and the camera is within a preset distance range (the preset distance range can be set by the user or defaulted by the system), and so on. It can be considered that if the environment parameter meets the preset condition, the obtained face image quality is, to some extent, also within the acceptable range. In this way, step 204 need not be performed when the environment parameter meets the preset condition, and step 204 is performed when the environment parameter does not meet the preset condition.
For example, suppose the preset condition is: the brightness is between 80 and 200, and the distance between the face and the camera is between 0.25 and 1 meter. If the environment parameter is: the ambient brightness is 100, and the distance between the face and the camera is 0.5 meter, then the environment parameter meets the preset condition. It can then be considered that the obtained face image quality is, to some extent, within the acceptable range, and step 204 need not be performed.
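A minimal sketch of the check in step 203, using the brightness and distance ranges from the example above (the parameter names are assumptions):

```python
def meets_preset_condition(env,
                           brightness_range=(80, 200),
                           distance_range=(0.25, 1.0)):
    """Step 203: every collected parameter must fall inside its preset
    range for the condition to be met; otherwise step 204
    (environment-driven image processing) is performed.
    """
    b_lo, b_hi = brightness_range
    d_lo, d_hi = distance_range
    return (b_lo <= env["brightness"] <= b_hi and
            d_lo <= env["distance_m"] <= d_hi)

# The worked example: brightness 100, distance 0.5 m -> condition met.
print(meets_preset_condition({"brightness": 100, "distance_m": 0.5}))  # True
print(meets_preset_condition({"brightness": 60, "distance_m": 0.5}))   # False
```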
204. When the environment parameter does not meet the preset condition, perform image processing on the first face image according to the environment parameter to obtain a second face image, where the second face image is used for face recognition.
For specific descriptions of steps 201-202 and step 204, reference can be made to the corresponding steps of the image processing method described in Fig. 1C, which are not repeated here.
It can be seen that, in the image processing method described in this embodiment of the application, a first face image is obtained, an environment parameter is obtained, and it is detected whether the environment parameter meets a preset condition. When the environment parameter does not meet the preset condition, image processing is performed on the first face image according to the environment parameter to obtain a second face image used for face recognition. The face image can thus be further refined according to the environment parameter, which improves the face image collection quality; such a face image is conducive to face recognition, thereby improving the face recognition success rate.
Referring to Fig. 3, Fig. 3 shows a mobile terminal provided by an embodiment of the application, including: an application processor (AP) and a memory; and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the AP, and the programs include instructions for performing the following steps:
obtaining a first face image;
obtaining an environment parameter;
performing image processing on the first face image according to the environment parameter to obtain a second face image, where the second face image is used for face recognition.
In a possible example, in terms of performing image processing on the first face image according to the environment parameter, the programs include instructions for performing the following steps:
obtaining a first property parameter of the first face image, where the first property parameter is an image quality characterization parameter of the first face image;
obtaining a second property parameter;
determining a difference parameter between the first property parameter and the second property parameter;
determining, according to the environment parameter, an adjustment parameter corresponding to the difference parameter;
performing image processing on the first face image according to the adjustment parameter.
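The five instructions above form a small pipeline: measure quality, compare against a target, look up an adjustment, apply it. A hedged end-to-end sketch — the mapping, shortfall bands, and all names are illustrative assumptions, not the claimed implementation:

```python
# Pre-stored mapping: (environment, quality shortfall band) -> adjustment.
ADJUSTMENT_MAP = {
    ("low_light", "large"): "gray_scale_stretch",
    ("low_light", "small"): "sharpen",
    ("foggy", "large"): "defog",
}

def choose_adjustment(first_quality, target_quality, environment):
    """Determine the difference parameter between the first and second
    property parameters, then look up the adjustment parameter for the
    current environment in the pre-stored mapping."""
    difference = target_quality - first_quality
    band = "large" if difference > 0.3 else "small"
    return ADJUSTMENT_MAP.get((environment, band))

# A dim, badly underexposed frame calls for gray-scale stretching.
print(choose_adjustment(0.4, 0.9, "low_light"))  # gray_scale_stretch
```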
In a possible example, in terms of obtaining the first face image, the programs include instructions for performing the following steps:
focusing on a face to obtain a target image;
performing matting processing on the target image to obtain a face region image, and using the face region image as the first face image.
In a possible example, the programs also include instructions for performing the following steps:
performing image quality evaluation on the first face image to obtain an image quality evaluation value, and performing the step of obtaining the environment parameter when the image quality evaluation value is lower than a preset image quality threshold.
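The gating logic above — skipping the environment-driven processing entirely when the first face image is already good enough — can be sketched as follows (the quality scale and threshold are assumptions for illustration):

```python
def process_if_needed(quality_value, quality_threshold=0.7):
    """Only when the image quality evaluation value falls below the
    preset image quality threshold is the environment parameter
    fetched and environment-driven processing triggered; otherwise
    the first face image is used as-is for recognition.
    """
    if quality_value < quality_threshold:
        return "fetch_environment_and_process"
    return "use_first_image"

print(process_if_needed(0.4))  # fetch_environment_and_process
print(process_if_needed(0.9))  # use_first_image
```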
In a possible example, the programs also include instructions for performing the following steps:
detecting whether the environment parameter meets a preset condition, and performing the step of performing image processing on the first face image according to the environment parameter when the environment parameter does not meet the preset condition.
The following is an apparatus for implementing the image processing method above, specifically as follows:
Referring to Fig. 4A, Fig. 4A is a schematic structural diagram of an image processing apparatus provided by this embodiment. The image processing apparatus includes a first acquisition unit 401, a second acquisition unit 402, and a processing unit 403, where:
the first acquisition unit 401 is configured to obtain a first face image;
the second acquisition unit 402 is configured to obtain an environment parameter;
the processing unit 403 is configured to perform image processing on the first face image according to the environment parameter to obtain a second face image, where the second face image is used for face recognition.
Optionally, as shown in Fig. 4B, Fig. 4B shows a detailed structure of the processing unit 403 of the image processing apparatus described in Fig. 4A. The processing unit 403 may include an acquisition module 4031, a determining module 4032, and a processing module 4033, specifically as follows:
the acquisition module 4031 is configured to obtain a first property parameter of the first face image, where the first property parameter is an image quality characterization parameter of the first face image, and to obtain a second property parameter;
the determining module 4032 is configured to determine a difference parameter between the first property parameter and the second property parameter, and to determine, according to the environment parameter, an adjustment parameter corresponding to the difference parameter;
the processing module 4033 is configured to perform image processing on the first face image according to the adjustment parameter.
Optionally, as shown in Fig. 4C, Fig. 4C shows a detailed structure of the first acquisition unit 401 of the image processing apparatus described in Fig. 4A. The first acquisition unit 401 may include a focusing module 4011 and a matting module 4012, specifically as follows:
the focusing module 4011 is configured to focus on a face to obtain a target image;
the matting module 4012 is configured to perform matting processing on the target image to obtain a face region image, and to use the face region image as the first face image.
Optionally, as shown in Fig. 4D, Fig. 4D shows a modified structure of the image processing apparatus described in Fig. 4A; compared with Fig. 4A, it may further include an evaluation unit 404, specifically as follows:
the evaluation unit 404 is configured to perform image quality evaluation on the first face image to obtain an image quality evaluation value, and when the image quality evaluation value is lower than a preset image quality threshold, the step of obtaining the environment parameter is performed by the second acquisition unit 402.
Optionally, as shown in Fig. 4E, Fig. 4E shows a modified structure of the image processing apparatus described in Fig. 4A; compared with Fig. 4A, it may further include a detection unit 405, specifically as follows:
the detection unit 405 is configured to detect whether the environment parameter meets a preset condition, and when the environment parameter does not meet the preset condition, the step of performing image processing on the first face image according to the environment parameter is performed by the processing unit 403.
It can be seen that the image processing apparatus described in this embodiment of the application obtains a first face image, obtains an environment parameter, performs image processing on the first face image according to the environment parameter, and obtains a second face image used for face recognition. The face image can thus be further refined according to the environment parameter, which improves the face image collection quality; such a face image is conducive to face recognition, thereby improving the face recognition success rate.
It can be understood that the functions of each program module of the image processing apparatus of this embodiment can be implemented specifically according to the methods in the method embodiments above; for the specific implementation process, reference can be made to the relevant descriptions of the method embodiments above, which are not repeated here.
An embodiment of the application also provides another mobile terminal. As shown in Fig. 5, for convenience of description, only the parts related to this embodiment of the application are illustrated; for specific technical details not disclosed, refer to the method part of the embodiments of the application. The mobile terminal can be any terminal device including a mobile phone, a tablet computer, a PDA (personal digital assistant), a POS (point of sale) terminal, an in-vehicle computer, etc. The following takes a mobile phone as an example:
Fig. 5 is a block diagram of a partial structure of a mobile phone related to the mobile terminal provided by an embodiment of the application. Referring to Fig. 5, the mobile phone includes: a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, an application processor (AP) 980, a power supply 990, and other parts. Those skilled in the art will understand that the mobile phone structure shown in Fig. 5 does not constitute a limitation on the mobile phone, which may include more or fewer parts than illustrated, combine some parts, or arrange the parts differently.
Each component of the mobile phone is introduced in detail below with reference to Fig. 5:
The input unit 930 can be configured to receive input numeric or character information and to generate key signal input related to user settings and function control of the mobile phone. Specifically, the input unit 930 may include a touch display screen 933, a face recognition device 931, and other input devices 932. The face recognition device 931 can refer to the structure above; for its specific structural composition, refer to the description above, which is not repeated here. The input unit 930 can also include other input devices 932, which specifically may include, but are not limited to, one or more of a physical button, a function key (such as a volume control button or an on/off button), a trackball, a mouse, a joystick, and the like.
The AP 980 is configured to perform the following steps:
obtaining a first face image;
obtaining an environment parameter;
performing image processing on the first face image according to the environment parameter to obtain a second face image, where the second face image is used for face recognition.
The AP 980 is the control center of the mobile phone; it connects all parts of the whole mobile phone using various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 920 and calling the data stored in the memory 920, thereby monitoring the mobile phone as a whole. Optionally, the AP 980 may include one or more processing units, which can be artificial intelligence chips or quantum chips. Preferably, the AP 980 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor above may also not be integrated into the AP 980.
In addition, the memory 920 can include a high-speed random access memory, and can also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage part.
The RF circuit 910 can be used to receive and send information. Generally, the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 can also communicate with networks and other devices through wireless communication. The wireless communication above can use any communication standard or protocol, including but not limited to the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), e-mail, short messaging service (SMS), and so on.
The mobile phone may also include at least one sensor 950, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the touch display screen according to the brightness of the ambient light, and the proximity sensor can turn off the touch display screen and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, magnetometer posture calibration), vibration recognition related functions (such as a pedometer, tapping), and so on. Other sensors that can also be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not repeated here.
The audio circuit 960, a loudspeaker 961, and a microphone 962 can provide an audio interface between the user and the mobile phone. The audio circuit 960 can transmit the electrical signal converted from received audio data to the loudspeaker 961, which converts it into a sound signal for playback; on the other hand, the microphone 962 converts a collected sound signal into an electrical signal, which is received by the audio circuit 960 and converted into audio data; the audio data is then processed by the AP 980 and sent through the RF circuit 910 to, for example, another mobile phone, or output to the memory 920 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 5 shows the WiFi module 970, it can be understood that it is not a necessary component of the mobile phone and can be omitted as needed without changing the essence of the invention.
The mobile phone also includes a power supply 990 (such as a battery) that supplies power to all parts. Preferably, the power supply can be logically connected to the AP 980 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown, the mobile phone can also include a camera, a Bluetooth module, and the like, which are not repeated here.
In the embodiments shown in Fig. 1C or Fig. 2 above, each step of the method flow can be implemented based on the structure of this mobile phone.
In the embodiments shown in Fig. 3 and Fig. 4A to Fig. 4E above, each unit function can be implemented based on the structure of this mobile phone.
An embodiment of the application also provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any image processing method described in the method embodiments above.
An embodiment of the application also provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any image processing method described in the method embodiments above.
It should be noted that, for brevity of description, each of the method embodiments above is expressed as a series of action combinations; however, those skilled in the art should know that the application is not limited by the described sequence of actions, because according to the application, some steps can be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the application.
In the embodiments above, the description of each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference can be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus can be implemented in other ways. For example, the apparatus embodiments described above are only schematic; the division of the units is only a division of logic functions, and there can be other division modes in actual implementation — for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not performed. Further, the mutual coupling, direct coupling, or communication connection shown or discussed can be an indirect coupling or communication connection of apparatuses or units through some interfaces, and can be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they can be located in one place or distributed over multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the application can be integrated into one processing unit, each unit can exist alone physically, or two or more units can be integrated into one unit. The integrated unit above can be implemented in the form of hardware, or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it can be stored in a computer-readable memory. Based on such understanding, the technical solution of the application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a memory and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to perform all or some of the steps of the methods described in each embodiment of the application. The aforementioned memory includes various media that can store program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Those of ordinary skill in the art will appreciate that all or some of the steps in the various methods of the embodiments above can be completed by a program instructing relevant hardware, and the program can be stored in a computer-readable memory, which can include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The embodiments of the application are described in detail above. Specific cases are used in this document to explain the principles and implementations of the application, and the explanations of the embodiments above are only intended to help understand the methods of the application and its core ideas. Meanwhile, for those of ordinary skill in the art, there can be changes in the specific implementations and application scope according to the ideas of the application. In summary, the content of this specification should not be construed as a limitation on the application.
Claims (14)
- A kind of 1. mobile terminal, it is characterised in that including application processor AP, and the face identification device being connected with the AP And environmental sensor, wherein,The face identification device, for obtaining the first facial image;The environmental sensor, for obtaining ambient parameter;The AP, for carrying out image procossing to first facial image according to the ambient parameter, obtain the second face figure Picture, second facial image are used for recognition of face.
- 2. mobile terminal according to claim 1, it is characterised in that it is described according to the ambient parameter to described first In terms of facial image carries out image procossing, the AP is specifically used for:The first property parameters of first facial image are obtained, first property parameters are the figure of first facial image As quality characterization parameter;Obtain the second property parameters;Determine the difference parameter between first property parameters and second property parameters;Adjustment parameter corresponding with the difference parameter is determined according to the ambient parameter;Image procossing is carried out to first facial image according to the adjustment parameter.
- 3. mobile terminal according to claim 1 or 2, it is characterised in that in terms of the first facial image of the acquisition, institute Face identification device is stated to be specifically used for:Face is focused, obtains target image;FIG pull handle is carried out to the target image, obtains human face region image, using the human face region image as described the One facial image.
- 4. according to the mobile terminal described in any one of claims 1 to 3, it is characterised in that the AP also particularly useful for:Image quality evaluation is carried out to first facial image, image quality evaluation values are obtained, in described image quality evaluation When value is less than pre-set image quality threshold, the environmental sensor is used to perform described the step of obtaining ambient parameter.
- 5. according to the mobile terminal described in any one of claims 1 to 3, it is characterised in that the AP also particularly useful for:Detect whether the ambient parameter meets preparatory condition, when the ambient parameter does not meet the preparatory condition, perform Described the step of image procossing is carried out to first facial image according to the ambient parameter.
- A kind of 6. image processing method, it is characterised in that applied to including application processor AP, and the people being connected with the AP The mobile terminal of face identification device and environmental sensor, methods described include:The face identification device obtains the first facial image;The environmental sensor obtains ambient parameter;The AP carries out image procossing according to the ambient parameter to first facial image, obtains the second facial image, institute State the second facial image and be used for recognition of face.
- A kind of 7. image processing method, it is characterised in that including:Obtain the first facial image;Obtain ambient parameter;Image procossing is carried out to first facial image according to the ambient parameter, obtains the second facial image, described second Facial image is used for recognition of face.
- 8. according to the method for claim 7, it is characterised in that it is described according to the ambient parameter to the first face figure As carrying out image procossing, including:The first property parameters of first facial image are obtained, first property parameters are the figure of first facial image As quality characterization parameter;Obtain the second property parameters;Determine the difference parameter between first property parameters and second property parameters;Adjustment parameter corresponding with the difference parameter is determined according to the ambient parameter;Image procossing is carried out to first facial image according to the adjustment parameter.
- 9. The method of claim 7 or 8, wherein obtaining the first facial image comprises: focusing on a face to obtain a target image; and performing matting on the target image to obtain a face-region image, the face-region image being used as the first facial image.
- 10. The method of any one of claims 7 to 9, further comprising: performing image quality evaluation on the first facial image to obtain an image quality evaluation value; and, when the image quality evaluation value is below a preset image quality threshold, performing the step of obtaining the ambient parameter.
- 11. The method of any one of claims 7 to 10, further comprising: detecting whether the ambient parameter satisfies a preset condition; and, when the ambient parameter does not satisfy the preset condition, performing the step of carrying out image processing on the first facial image according to the ambient parameter.
- 12. An image processing apparatus, comprising: a first acquisition unit for obtaining a first facial image; a second acquisition unit for obtaining an ambient parameter; and a processing unit for carrying out image processing on the first facial image according to the ambient parameter to obtain a second facial image, the second facial image being used for face recognition.
- 13. A mobile terminal, comprising: an application processor (AP) and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for performing the method of any one of claims 7-11.
- 14. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to perform the method of any one of claims 7-11.
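The three-step method of claim 7 can be sketched as follows. This is an illustrative assumption, not the patent's implementation: an image is modeled as a 2D list of luminance values in [0, 255], the ambient parameter as an ambient-light reading in lux, and the processing as a brightness correction that is damped in bright scenes.

```python
# Hypothetical sketch of claim 7: acquire a first facial image, read an
# ambient parameter, derive a second facial image for face recognition.

def acquire_first_face_image():
    # Stand-in for the face recognition device's capture step.
    return [[40, 50], [60, 70]]          # a small, under-exposed image

def read_ambient_parameter():
    # Stand-in for an environmental sensor reading (ambient light, lux).
    return 80.0

def process_image(image, ambient_lux, target_mean=128.0):
    # Brighten toward a target mean luminance; the correction is damped
    # as the scene gets brighter (less compensation needed at high lux).
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    damping = min(1.0, 200.0 / max(ambient_lux, 1.0))
    gain = 1.0 + (target_mean - mean) / 255.0 * damping
    return [[min(255, max(0, round(p * gain))) for p in row] for row in image]

first = acquire_first_face_image()
ambient = read_ambient_parameter()
second = process_image(first, ambient)   # the "second facial image"
```

The `target_mean`, `damping`, and gain formula are placeholders; the patent leaves the concrete mapping from ambient parameter to correction unspecified.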
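Claim 8's property-parameter comparison admits a similar sketch. All names and formulas below are assumptions for illustration: mean brightness stands in for the "image quality characterization parameter", and the ambient parameter modulates how strongly the measured deficit is corrected.

```python
# Illustrative reading of claim 8: characterize the first facial image by
# a quality parameter, compare it with a reference parameter, and map the
# difference plus the ambient parameter to an adjustment.

def mean_brightness(image):
    # First property parameter: a simple image-quality characterization.
    flat = [p for row in image for p in row]
    return sum(flat) / len(flat)

def adjustment_from_difference(difference, ambient_lux):
    # The ambient parameter modulates the adjustment: in a dim scene the
    # same brightness deficit warrants a stronger boost.
    dim_factor = 1.5 if ambient_lux < 50.0 else 1.0
    return 1.0 + (difference / 255.0) * dim_factor

image = [[30, 40], [50, 60]]
first_param = mean_brightness(image)       # first property parameter
second_param = 120.0                       # reference property parameter
difference = second_param - first_param    # difference parameter
gain = adjustment_from_difference(difference, ambient_lux=30.0)
adjusted = [[min(255, round(p * gain)) for p in row] for row in image]
```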
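Claim 9's focus-then-matting step reduces, in the simplest case, to cutting a detected face region out of the target image. The bounding box would come from a face detector; the fixed box below is a hypothetical detection result.

```python
# Minimal crop-based stand-in for claim 9's matting step: extract the
# face region from the focused target image and use it as the first
# facial image.

def extract_face_region(target_image, box):
    # box = (top, left, height, width) in pixel coordinates.
    top, left, h, w = box
    return [row[left:left + w] for row in target_image[top:top + h]]

target = [[10 * i + j for j in range(6)] for i in range(6)]  # 6x6 "photo"
face_box = (1, 2, 3, 3)                    # pretend detection result
first_face_image = extract_face_region(target, face_box)
```

A production system would use a real segmentation or matting routine rather than a rectangular crop; the claim covers the general matting operation.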
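The quality gate of claim 10 can be sketched as below. The patent does not specify the evaluation metric, so luminance variance is used here as a crude, assumed quality proxy, and the threshold is hypothetical.

```python
# Sketch of claim 10's gate: the ambient parameter is fetched only when
# the first facial image scores below a preset quality threshold.

def quality_score(image):
    # Variance of luminance as an assumed quality proxy (low variance
    # suggests a flat, low-contrast image).
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat) / len(flat)

QUALITY_THRESHOLD = 500.0  # hypothetical preset image quality threshold

def maybe_fetch_ambient(image, read_sensor):
    if quality_score(image) < QUALITY_THRESHOLD:
        return read_sensor()   # low quality: consult environmental sensor
    return None                # quality acceptable: skip correction path

flat_image = [[100, 102], [101, 103]]      # low-contrast, "poor" image
ambient = maybe_fetch_ambient(flat_image, read_sensor=lambda: 120.0)
```

Claim 11's precondition test is the mirror image: the correction step runs only when the fetched ambient parameter falls outside a preset acceptable range.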
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711030272.6A CN107862265B (en) | 2017-10-30 | 2017-10-30 | Image processing method and related product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711030272.6A CN107862265B (en) | 2017-10-30 | 2017-10-30 | Image processing method and related product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107862265A true CN107862265A (en) | 2018-03-30 |
CN107862265B CN107862265B (en) | 2022-02-18 |
Family
ID=61696413
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711030272.6A Active CN107862265B (en) | 2017-10-30 | 2017-10-30 | Image processing method and related product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107862265B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109117725A (en) * | 2018-07-09 | 2019-01-01 | 深圳市科脉技术股份有限公司 | Face identification method and device |
CN109190448A (en) * | 2018-07-09 | 2019-01-11 | 深圳市科脉技术股份有限公司 | Face identification method and device |
CN109255796A (en) * | 2018-09-07 | 2019-01-22 | 浙江大丰实业股份有限公司 | Stage equipment security solution platform |
CN109766759A (en) * | 2018-12-12 | 2019-05-17 | 成都云天励飞技术有限公司 | Emotion recognition method and related product |
CN109840885A (en) * | 2018-12-27 | 2019-06-04 | 深圳云天励飞技术有限公司 | Image fusion method and related product |
CN110032852A (en) * | 2019-04-15 | 2019-07-19 | 维沃移动通信有限公司 | Screen unlocking method and terminal device |
CN110765502A (en) * | 2019-10-30 | 2020-02-07 | Oppo广东移动通信有限公司 | Information processing method and related product |
CN110837416A (en) * | 2019-09-24 | 2020-02-25 | 深圳市火乐科技发展有限公司 | Memory management method, intelligent projector and related product |
CN111080543A (en) * | 2019-12-09 | 2020-04-28 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
CN111818308A (en) * | 2019-03-19 | 2020-10-23 | 温州洪启信息科技有限公司 | Security monitoring probe analysis processing method based on big data |
CN111866393A (en) * | 2020-07-31 | 2020-10-30 | Oppo广东移动通信有限公司 | Display control method, device and storage medium |
CN112730257A (en) * | 2021-01-26 | 2021-04-30 | 江苏盟星智能科技有限公司 | AVI product optical detection system and method |
WO2021238373A1 (en) * | 2020-05-26 | 2021-12-02 | 华为技术有限公司 | Method for unlocking by means of gaze and electronic device |
CN114125145A (en) * | 2021-10-19 | 2022-03-01 | 华为技术有限公司 | Method and equipment for unlocking display screen |
CN116189083A (en) * | 2023-01-11 | 2023-05-30 | 广东汇通信息科技股份有限公司 | Dangerous goods identification method for community security inspection assistance |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103699877A (en) * | 2013-12-02 | 2014-04-02 | 广东欧珀移动通信有限公司 | Method and system for improving face recognition effects |
CN103927802A (en) * | 2014-04-18 | 2014-07-16 | 深圳市威富安防有限公司 | Door lock control method and system |
CN104182721A (en) * | 2013-05-22 | 2014-12-03 | 华硕电脑股份有限公司 | Image processing system and image processing method capable of improving face identification rate |
CN106210516A (en) * | 2016-07-06 | 2016-12-07 | 广东欧珀移动通信有限公司 | Photographing processing method and terminal |
CN106971159A (en) * | 2017-03-23 | 2017-07-21 | 中国联合网络通信集团有限公司 | Image definition recognition method, identity authentication method and device |
GB2546714A (en) * | 2013-03-28 | 2017-07-26 | Paycasso Verify Ltd | Method, system and computer program for comparing images |
CN107249105A (en) * | 2017-06-16 | 2017-10-13 | 广东欧珀移动通信有限公司 | Exposure compensation method, device and terminal device |
- 2017-10-30: application CN201711030272.6A filed in China; granted as CN107862265B (status: Active)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2546714A (en) * | 2013-03-28 | 2017-07-26 | Paycasso Verify Ltd | Method, system and computer program for comparing images |
CN104182721A (en) * | 2013-05-22 | 2014-12-03 | 华硕电脑股份有限公司 | Image processing system and image processing method capable of improving face identification rate |
CN103699877A (en) * | 2013-12-02 | 2014-04-02 | 广东欧珀移动通信有限公司 | Method and system for improving face recognition effects |
CN103927802A (en) * | 2014-04-18 | 2014-07-16 | 深圳市威富安防有限公司 | Door lock control method and system |
CN106210516A (en) * | 2016-07-06 | 2016-12-07 | 广东欧珀移动通信有限公司 | Photographing processing method and terminal |
CN106971159A (en) * | 2017-03-23 | 2017-07-21 | 中国联合网络通信集团有限公司 | Image definition recognition method, identity authentication method and device |
CN107249105A (en) * | 2017-06-16 | 2017-10-13 | 广东欧珀移动通信有限公司 | Exposure compensation method, device and terminal device |
Non-Patent Citations (1)
Title |
---|
李晓东 (Li Xiaodong): "Research on Face Recognition Algorithms Based on Subspace and Manifold Learning", 30 June 2013 * |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109190448A (en) * | 2018-07-09 | 2019-01-11 | 深圳市科脉技术股份有限公司 | Face identification method and device |
CN109117725A (en) * | 2018-07-09 | 2019-01-01 | 深圳市科脉技术股份有限公司 | Face identification method and device |
CN109255796A (en) * | 2018-09-07 | 2019-01-22 | 浙江大丰实业股份有限公司 | Stage equipment security solution platform |
CN109255796B (en) * | 2018-09-07 | 2022-01-28 | 浙江大丰实业股份有限公司 | Safety analysis platform for stage equipment |
CN109766759A (en) * | 2018-12-12 | 2019-05-17 | 成都云天励飞技术有限公司 | Emotion recognition method and related product |
CN109840885A (en) * | 2018-12-27 | 2019-06-04 | 深圳云天励飞技术有限公司 | Image fusion method and related product |
CN109840885B (en) * | 2018-12-27 | 2023-03-14 | 深圳云天励飞技术有限公司 | Image fusion method and related product |
CN111818308A (en) * | 2019-03-19 | 2020-10-23 | 温州洪启信息科技有限公司 | Security monitoring probe analysis processing method based on big data |
CN111818308B (en) * | 2019-03-19 | 2022-02-08 | 江苏海内软件科技有限公司 | Security monitoring probe analysis processing method based on big data |
CN110032852A (en) * | 2019-04-15 | 2019-07-19 | 维沃移动通信有限公司 | Screen unlocking method and terminal device |
CN110837416A (en) * | 2019-09-24 | 2020-02-25 | 深圳市火乐科技发展有限公司 | Memory management method, intelligent projector and related product |
CN110765502A (en) * | 2019-10-30 | 2020-02-07 | Oppo广东移动通信有限公司 | Information processing method and related product |
CN110765502B (en) * | 2019-10-30 | 2022-02-18 | Oppo广东移动通信有限公司 | Information processing method and related product |
CN111080543A (en) * | 2019-12-09 | 2020-04-28 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
CN111080543B (en) * | 2019-12-09 | 2024-03-22 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
WO2021238373A1 (en) * | 2020-05-26 | 2021-12-02 | 华为技术有限公司 | Method for unlocking by means of gaze and electronic device |
CN111866393B (en) * | 2020-07-31 | 2022-01-14 | Oppo广东移动通信有限公司 | Display control method, device and storage medium |
CN111866393A (en) * | 2020-07-31 | 2020-10-30 | Oppo广东移动通信有限公司 | Display control method, device and storage medium |
CN112730257A (en) * | 2021-01-26 | 2021-04-30 | 江苏盟星智能科技有限公司 | AVI product optical detection system and method |
CN114125145A (en) * | 2021-10-19 | 2022-03-01 | 华为技术有限公司 | Method and equipment for unlocking display screen |
CN114125145B (en) * | 2021-10-19 | 2022-11-18 | 华为技术有限公司 | Method for unlocking display screen, electronic equipment and storage medium |
CN116189083A (en) * | 2023-01-11 | 2023-05-30 | 广东汇通信息科技股份有限公司 | Dangerous goods identification method for community security inspection assistance |
CN116189083B (en) * | 2023-01-11 | 2024-01-09 | 广东汇通信息科技股份有限公司 | Dangerous goods identification method for community security inspection assistance |
Also Published As
Publication number | Publication date |
---|---|
CN107862265B (en) | 2022-02-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107862265A (en) | Image processing method and related product | |
CN107679482A (en) | Unlocking control method and related product | |
CN107609514B (en) | Face recognition method and related product | |
CN107590461B (en) | Face recognition method and related product | |
CN107423699B (en) | Living body detection method and related product | |
CN107832675A (en) | Photographing processing method and related product | |
CN109241908A (en) | Face recognition method and related apparatus | |
CN107506687A (en) | Living body detection method and related product | |
CN109117725A (en) | Face recognition method and device | |
CN107480496A (en) | Unlocking control method and related product | |
CN104135609B (en) | Auxiliary photographing method, apparatus and terminal | |
CN107679481A (en) | Unlocking control method and related product | |
CN107657218B (en) | Face recognition method and related product | |
CN107292285A (en) | Iris liveness detection method and related product | |
CN107463818A (en) | Unlocking control method and related product | |
CN107633499A (en) | Image processing method and related product | |
CN107403147B (en) | Iris liveness detection method and related product | |
CN106558025A (en) | Picture processing method and apparatus | |
CN107451454B (en) | Unlocking control method and related product | |
CN107862266A (en) | Image processing method and related product | |
CN107451446A (en) | Unlocking control method and related product | |
CN107506708B (en) | Unlocking control method and related product | |
CN107480488A (en) | Unlocking control method and related product | |
CN107633235A (en) | Unlocking control method and related product | |
CN107613550A (en) | Unlocking control method and related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860 Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860 Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. |
|
GR01 | Patent grant | ||