CN107633499A - Image processing method and related product - Google Patents

Publication number
CN107633499A
Authority
CN
China
Prior art keywords
image
color component
component images
facial image
barycenter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710889988.5A
Other languages
Chinese (zh)
Other versions
CN107633499B (en)
Inventor
周海涛
王健
郭子青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710889988.5A priority Critical patent/CN107633499B/en
Publication of CN107633499A publication Critical patent/CN107633499A/en
Application granted granted Critical
Publication of CN107633499B publication Critical patent/CN107633499B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

An embodiment of the invention discloses an image processing method and related product. The method includes: acquiring a face image in a night-vision environment; acquiring the color component images of a preset face template; and fusing the face image with the color component images to obtain a target face image. With the embodiments of the invention, after the face image is captured, the color component images can be obtained from the preset face template, and the template's color makes up for the color information the face image lacks when collected under insufficient night-vision lighting, so that a face image rich in color information is obtained and the user experience is improved.

Description

Image processing method and related product
Technical field
The invention relates to the technical field of mobile terminals, and in particular to an image processing method and related product.
Background technology
With the widespread adoption of mobile terminals (mobile phones, tablet computers, etc.), the applications a mobile terminal can support keep increasing and its functions keep growing stronger; mobile terminals are developing in a diversified, personalized direction and have become indispensable electronic appliances in users' lives.
At present, face recognition is increasingly favored by mobile terminal manufacturers: the captured face can be presented on the display screen of the mobile terminal. In a night-vision environment, however, the camera collects little color information, so the acquired face image is presented as a grayscale image and the display effect is poor. How to improve the display effect of face images in a night-vision environment is a problem urgently awaiting a solution.
Summary of the invention
Embodiments of the invention provide an image processing method and related product that can improve the display effect of face images in a night-vision environment.
In a first aspect, an embodiment of the invention provides a mobile terminal, including an application processor (Application Processor, AP) and a face recognition device connected to the AP, wherein:
the face recognition device is configured to acquire a face image in a night-vision environment;
the AP is configured to acquire the color component images of a preset face template, and to fuse the face image with the color component images to obtain a target face image.
In a second aspect, an embodiment of the invention provides an image processing method applied to a mobile terminal that includes an application processor (AP) and a face recognition device connected to the AP, the method including:
the face recognition device acquiring a face image in a night-vision environment;
the AP acquiring the color component images of a preset face template, and fusing the face image with the color component images to obtain a target face image.
In a third aspect, an embodiment of the invention provides an image processing method, including:
acquiring a face image in a night-vision environment;
acquiring the color component images of a preset face template;
fusing the face image with the color component images to obtain a target face image.
In a fourth aspect, an embodiment of the invention provides an image processing apparatus, including:
a first acquisition unit configured to acquire a face image in a night-vision environment;
a second acquisition unit configured to acquire the color component images of a preset face template;
an image fusion unit configured to fuse the face image with the color component images to obtain a target face image.
In a fifth aspect, an embodiment of the invention provides a mobile terminal, including: an application processor (AP) and a memory; and one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for some or all of the steps described in the third aspect.
In a sixth aspect, an embodiment of the invention provides a computer-readable storage medium for storing a computer program, where the computer program causes a computer to execute instructions for some or all of the steps described in the third aspect of the embodiments of the invention.
In a seventh aspect, an embodiment of the invention provides a computer program product including a non-transitory computer-readable storage medium that stores a computer program, the computer program being operable to cause a computer to perform some or all of the steps described in the third aspect of the embodiments of the invention. The computer program product may be a software installation package.
Implementing the embodiments of the invention has the following beneficial effects:
It can be seen that, with the image processing method and related product described in the embodiments of the invention, a face image is acquired in a night-vision environment, the color component images of a preset face template are acquired, and the face image is fused with the color component images to obtain a target face image. Thus, after the face image is captured, the color component images can be obtained from the preset face template, and the template's color makes up for the color information the face image lacks when collected under insufficient night vision, so that a face image rich in color information is obtained and the user experience is improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the invention or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1A is a schematic architecture diagram of an example mobile terminal provided by an embodiment of the invention;
Figure 1B is a schematic structural diagram of a mobile terminal provided by an embodiment of the invention;
Fig. 1C is a schematic flowchart of an image processing method disclosed by an embodiment of the invention;
Fig. 1D is a demonstration effect diagram of a face image disclosed by an embodiment of the invention;
Fig. 2 is a schematic flowchart of another image processing method disclosed by an embodiment of the invention;
Fig. 3 is another schematic structural diagram of a mobile terminal provided by an embodiment of the invention;
Fig. 4A is a schematic structural diagram of an image processing apparatus provided by an embodiment of the invention;
Fig. 4B is a schematic structural diagram of the image fusion unit of the image processing apparatus described in Fig. 4A;
Fig. 4C is a schematic structural diagram of the image fusion module of the image fusion unit described in Fig. 4B;
Fig. 4D is another schematic structural diagram of an image processing apparatus provided by an embodiment of the invention;
Fig. 4E is another schematic structural diagram of an image processing apparatus provided by an embodiment of the invention;
Fig. 5 is a schematic structural diagram of another mobile terminal disclosed by an embodiment of the invention.
Detailed description of the embodiments
To help those skilled in the art better understand the solutions of the invention, the technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. Based on the embodiments of the invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the invention.
The terms "first", "second", etc. in the specification, claims, and accompanying drawings of the invention are used to distinguish different objects rather than to describe a particular order. Moreover, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion: a process, method, system, product, or device that comprises a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product, or device.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The mobile terminal involved in the embodiments of the invention may include various handheld devices with wireless communication functions, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of user equipment (User Equipment, UE), mobile stations (Mobile Station, MS), terminal devices, and so on. For convenience of description, the devices mentioned above are collectively referred to as mobile terminals.
The embodiments of the invention are described in detail below. In the example mobile terminal 1000 shown in Figure 1A, the face recognition device of the mobile terminal 1000 may be a camera module 21. The camera module may be a dual camera, in which one camera may be a visible-light camera and the other an infrared camera, or both may be visible-light cameras; alternatively, the camera module may be a single camera, for example a visible-light camera or an infrared camera. The camera module 21 may be a front camera or a rear camera.
Refer to Figure 1B, a schematic structural diagram of a mobile terminal 100. The mobile terminal 100 includes an application processor AP 110 and a face recognition device 120, where the AP 110 is connected to the face recognition device 120 through a bus 150.
Based on the mobile terminal described in Figures 1A-1B, the following functions can be implemented:
the face recognition device 120 is configured to acquire a face image in a night-vision environment;
the AP 110 is configured to acquire the color component images of a preset face template, and to fuse the face image with the color component images to obtain a target face image.
In a possible example, in terms of fusing the face image with the color component images, the AP 110 is specifically configured to:
convert the face image into a grayscale image;
fuse the grayscale image with the color component images.
In a possible example, in terms of fusing the grayscale image with the color component images, the AP 110 is specifically configured to:
determine the first centroid of the grayscale image and the second centroid of the color component images;
perform overlap processing on the grayscale image and the color component images according to the first centroid and the second centroid so that the first centroid completely coincides with the second centroid, and resize the grayscale image to obtain a first image such that the first vertical distance of the first image equals the second vertical distance of the color component images, where the first vertical distance is the length of the vertical line segment in the first image that passes through the face region and through the first centroid, and the second vertical distance is the length of the vertical line segment in the color component images that passes through the face region and through the second centroid;
synthesize the first image and the color component images.
In a possible example, the AP 110 is further specifically configured to:
determine the face angle corresponding to the face image;
select the preset face template corresponding to the face angle from a preset face template library, and perform the step of acquiring the color component images of the preset face template.
In a possible example, the AP 110 is further specifically configured to:
match the face image against the preset face template, and perform the step of acquiring the color component images of the preset face template when the face image successfully matches the preset face template.
Based on the mobile terminal described in Figures 1A-1B above, an image processing method as described below can be executed, specifically as follows:
the face recognition device 120 acquires a face image in a night-vision environment;
the AP 110 acquires the color component images of a preset face template, and fuses the face image with the color component images to obtain a target face image.
It can be seen that, with the image processing method described in the embodiments of the invention, a face image is acquired in a night-vision environment, the color component images of a preset face template are acquired, and the face image is fused with the color component images to obtain a target face image. Thus, after the face image is captured, the color component images can be obtained from the preset face template, and the template's color makes up for the color information the face image lacks when collected under insufficient night vision, so that a face image rich in color information is obtained and the user experience is improved.
Based on the mobile terminal described in Figures 1A-1B, refer to Fig. 1C, a schematic flowchart of an embodiment of an image processing method provided by an embodiment of the invention. The image processing method described in this embodiment may include the following steps:
101. Acquire a face image in a night-vision environment.
The face image can be obtained by focusing on the face; it may be an image containing a face, or a cut-out image of the face only. The night-vision environment can be detected by an ambient light sensor.
Before step 101, the following steps may be included:
A1. Acquire a target environment parameter;
A2. Determine target shooting parameters corresponding to the target environment parameter;
then step 101, acquiring a face image, can be implemented as follows:
shoot the face according to the target shooting parameters to obtain the face image.
The target environment parameter can be detected by an environment sensor, which is used to detect environment parameters. The environment sensor may be at least one of the following: a breathing detection sensor, an ambient light sensor, an electromagnetic detection sensor, an ambient color temperature detection sensor, a positioning sensor, a temperature sensor, a humidity sensor, etc. The environment parameter may be at least one of the following: a breathing parameter, ambient brightness, ambient color temperature, an environmental magnetic field interference coefficient, weather conditions, the number of ambient light sources, geographical position, etc. The breathing parameter may be at least one of the following: breathing frequency, breathing rate, breathing sound, a breathing curve, etc.
Further, the correspondence between shooting parameters and environment parameters can be stored in the mobile terminal in advance, and the target shooting parameters corresponding to the target environment parameter can then be determined from this correspondence. The shooting parameters may include but are not limited to: focal length, exposure time, aperture size, exposure mode, sensitivity (ISO), white balance parameters, etc. In this way, an optimal image under the given environment can be obtained.
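The prestored correspondence between environment parameters and shooting parameters can be sketched as a simple lookup table. The brightness thresholds and parameter values below are illustrative assumptions for a single environment parameter (ambient brightness), not values given in the patent:

```python
# Hypothetical mapping from ambient brightness (lux) to shooting parameters.
# Each entry: (upper brightness bound, parameters used below that bound).
SHOOTING_PROFILES = [
    (10,   {"iso": 1600, "exposure_ms": 100, "aperture": "f/1.8"}),  # night vision
    (100,  {"iso": 800,  "exposure_ms": 33,  "aperture": "f/2.0"}),  # dim indoor
    (1000, {"iso": 200,  "exposure_ms": 16,  "aperture": "f/2.2"}),  # bright indoor
]
DEFAULT_PROFILE = {"iso": 100, "exposure_ms": 8, "aperture": "f/2.8"}  # daylight

def target_shooting_parameters(ambient_lux):
    """Look up the stored shooting parameters for the measured brightness."""
    for upper_bound, params in SHOOTING_PROFILES:
        if ambient_lux < upper_bound:
            return params
    return DEFAULT_PROFILE
```

In practice the correspondence could key on several environment parameters at once; a single brightness axis keeps the sketch minimal.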
Optionally, before step 101 is performed, the following steps may also be included:
acquire the current ambient brightness, and when the current ambient brightness is lower than a preset brightness threshold, confirm that the current environment is a night-vision environment.
The preset brightness threshold can be set by the user or defaulted by the system. When the current ambient brightness is lower than the preset brightness threshold, the terminal can be considered to be in a night-vision environment.
102. Acquire the color component images of a preset face template.
The preset face template can be saved in the memory of the mobile terminal in advance. The preset face template can first be converted to another color space, for example the YUV color space or the HSI color space, and the color component images can then be extracted.
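As a sketch of this step under the YUV option: the template can be converted from RGB to YUV and the two chrominance planes kept as the color component images. The function names are illustrative; the conversion coefficients are the standard analog BT.601 ones, which the patent does not itself specify:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 float RGB image (values in [0, 1]) to YUV
    using the analog BT.601 definition."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = 0.492 * (b - y)                     # blue-difference chrominance
    v = 0.877 * (r - y)                     # red-difference chrominance
    return np.stack([y, u, v], axis=-1)

def color_component_images(template_rgb):
    """Extract the chrominance (U, V) planes of a preset face template."""
    return rgb_to_yuv(template_rgb)[..., 1:]   # drop luma, keep color
```

For a neutral gray template the extracted chrominance is zero everywhere, which is the expected behavior of a luma/chroma split.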
Optionally, between step 101 and step 102, the following steps may also be included:
B1. Determine the face angle corresponding to the face image;
B2. Select the preset face template corresponding to the face angle from a preset face template library, and perform the step of acquiring the color component images of the preset face template.
In the face recognition process, a face can present different face angles: for example, a frontal face and a profile face have different face angles, so each face image can correspond to a face angle. The mobile terminal can also store a preset face template library containing multiple preset face templates, each corresponding to a face angle. The preset face template corresponding to the face angle can then be selected from the preset face template library, and the color component images of the preset face template corresponding to the face angle of the face image can be acquired. In this way, the color component images can be better fused into the face image to make up for its lack of color information.
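Selecting the preset face template for a detected face angle can be reduced to a nearest-angle lookup over the template library. The library layout and field names below are assumptions made for illustration:

```python
def choose_preset_template(face_angle, template_library):
    """Return the template whose stored angle is closest to the detected one.

    template_library: list of dicts, each with an "angle" key (degrees)
    plus whatever payload a template carries (hypothetical layout).
    """
    return min(template_library, key=lambda t: abs(t["angle"] - face_angle))
```

A real library would likely index templates per enrolled user and quantize angles, but the selection criterion stays the same.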
Further optionally, between step B2 and step 102, the following steps may also be included:
match the face image against the preset face template, and perform the step of acquiring the color component images of the preset face template when the face image successfully matches the preset face template.
The face image can be matched against the preset face template; if the match fails, step 102 is not performed, and if the match succeeds, step 102 can be performed.
103. Fuse the face image with the color component images to obtain a target face image.
Since the face image lacks color information while the color component images contain more color information, fusing the two can produce a target face image that presents more color information. The target face image can be displayed on the display screen of the mobile terminal as a colorized face image, improving the user experience.
Optionally, in step 103, fusing the face image with the color component images may include the following steps:
31. Convert the face image into a grayscale image;
32. Fuse the grayscale image with the color component images.
Although the face image lacks color information, it still contains a part of it. If the color component images were fused directly onto the face image, the colors of the face image would be uneven, i.e., the skin color would be distorted. Therefore, in the embodiment of the invention, the face image is first converted into a grayscale image, and the grayscale image is then fused with the color component images; the color of the composite image is more uniform and the skin color more natural.
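Steps 31 and 32 can be sketched as follows, assuming the YUV-style split mentioned in step 102: the captured face image is collapsed to a single luminance plane and the template's chrominance planes are stacked onto it. This is a minimal illustration under those assumptions, not the patent's exact fusion procedure:

```python
import numpy as np

BT601_WEIGHTS = np.array([0.299, 0.587, 0.114])  # standard luma weights

def to_grayscale(face_rgb):
    """Step 31: collapse an H x W x 3 RGB face image to one luminance plane."""
    return face_rgb @ BT601_WEIGHTS

def fuse_luma_chroma(gray, chroma_uv):
    """Step 32: attach the template's two chrominance planes to the
    captured luminance, giving an H x W x 3 YUV-like composite."""
    return np.concatenate([gray[..., None], chroma_uv], axis=-1)
```

The composite would still need a YUV-to-RGB conversion before display, as the text notes later for the synthesized image.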
Further optionally, step 32, fusing the grayscale image with the color component images, may include the following steps:
321. Determine the first centroid of the grayscale image and the second centroid of the color component images;
322. Perform overlap processing on the grayscale image and the color component images according to the first centroid and the second centroid so that the first centroid completely coincides with the second centroid, and resize the grayscale image to obtain a first image such that the first vertical distance of the first image equals the second vertical distance of the color component images, where the first vertical distance is the length of the vertical line segment in the first image that passes through the face region and through the first centroid, and the second vertical distance is the length of the vertical line segment in the color component images that passes through the face region and through the second centroid;
323. Synthesize the first image and the color component images.
The center of mass, or centroid for short, is the point of a material system at which its mass can be considered concentrated. An image likewise has a centroid, and every image has exactly one. In the embodiment of the invention, the first centroid of the grayscale image and the second centroid of the color component images can be obtained geometrically; the grayscale image and the color component images are then overlapped according to the first and second centroids so that the first centroid completely coincides with the second centroid, and the grayscale image is resized (enlarged or reduced) to obtain a first image whose first vertical distance equals the second vertical distance of the color component images, the two vertical distances being defined as above. Fig. 1D, for example, shows the centroid and the first vertical distance. In this way, the two components of a color image are obtained: a luminance component (the first image) and a color component (the color component images). The two can then be synthesized, for example by superimposing their pixels for display, to obtain the target face image; alternatively, the synthesized image can be converted back to the RGB color space to obtain the target face image, which can be displayed on the display screen of the mobile terminal.
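The centroid of an image plane can be computed as the intensity-weighted center of mass, and the resize ratio follows directly from the two vertical distances. A minimal sketch (function names are illustrative, not from the patent):

```python
import numpy as np

def image_centroid(img):
    """Intensity-weighted center of mass (row, col) of a 2-D image plane."""
    total = img.sum()
    row = (img.sum(axis=1) * np.arange(img.shape[0])).sum() / total
    col = (img.sum(axis=0) * np.arange(img.shape[1])).sum() / total
    return row, col

def resize_ratio(first_vertical, second_vertical):
    """Scale factor that makes the grayscale image's vertical face span
    equal to that of the color component images."""
    return second_vertical / first_vertical
```

After scaling by this ratio and translating so the two centroids coincide, the luminance and chrominance planes occupy the same pixel grid and can be superimposed.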
Optionally, between step 322 and step 323, the following step may also be included:
perform interpolation processing on the first image;
then, in step 323, synthesizing the first image and the color component images can be implemented as follows:
synthesize the interpolated first image and the color component images.
Since the first image has undergone a certain adjustment, interpolation processing can be performed on it so that the transitions between its pixels are natural. The interpolation may be at least one of the following: linear interpolation, quadratic interpolation, bilinear interpolation, or nonlinear interpolation, etc.
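Bilinear interpolation, one of the listed options, can be sketched directly in NumPy: each output pixel is a distance-weighted blend of its four nearest source pixels. A minimal illustration, not an optimized implementation:

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Resize a 2-D image to (new_h, new_w) with bilinear interpolation."""
    h, w = img.shape
    rows = np.linspace(0, h - 1, new_h)          # source row coordinates
    cols = np.linspace(0, w - 1, new_w)          # source column coordinates
    r0 = np.floor(rows).astype(int)
    c0 = np.floor(cols).astype(int)
    r1 = np.minimum(r0 + 1, h - 1)
    c1 = np.minimum(c0 + 1, w - 1)
    fr = (rows - r0)[:, None]                    # fractional row offsets
    fc = (cols - c0)[None, :]                    # fractional column offsets
    top = img[np.ix_(r0, c0)] * (1 - fc) + img[np.ix_(r0, c1)] * fc
    bottom = img[np.ix_(r1, c0)] * (1 - fc) + img[np.ix_(r1, c1)] * fc
    return top * (1 - fr) + bottom * fr
```

Because every output value is a convex combination of neighboring inputs, the transitions between pixels stay smooth, which is the effect the step is after.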
Optionally, between steps 31 and 32, the following step may also be included:
perform image enhancement processing on the grayscale image;
then step 32, fusing the grayscale image with the color component images, can be implemented as follows:
fuse the enhanced grayscale image with the color component images.
The image enhancement processing may include but is not limited to: image denoising (e.g., wavelet-transform denoising), image restoration (e.g., Wiener filtering), and night-vision enhancement algorithms (e.g., histogram equalization, grayscale stretching, etc.). After image enhancement processing, the quality of the grayscale image can be improved to a certain extent.
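Histogram equalization, the simplest of the listed night-vision enhancement algorithms, can be sketched as a cumulative-histogram lookup table. The sketch assumes an 8-bit grayscale image with at least two distinct gray levels:

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram-equalize an 8-bit grayscale image (dtype uint8)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()                          # cumulative histogram
    cdf_min = cdf[cdf > 0][0]                    # first occupied gray level
    # Map each gray level through the normalized cumulative histogram.
    lut = (cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0
    lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)
    return lut[gray]
```

Spreading the occupied gray levels over the full 0-255 range raises contrast in the dim captures typical of a night-vision environment.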
Optionally, before the step of performing image enhancement processing on the grayscale image, the following step may also be included:
perform image quality evaluation on the grayscale image to obtain an image quality evaluation value, and perform image enhancement processing on the grayscale image when the image quality evaluation value is lower than a preset quality threshold.
The preset quality threshold can be set by the user or defaulted by the system. Image quality evaluation can first be performed on the grayscale image to obtain an image quality evaluation value, from which the quality of the grayscale image is judged to be good or bad. When the image quality evaluation value is greater than or equal to the preset quality threshold, the grayscale image quality is considered good; when the image quality evaluation value is lower than the preset quality threshold, the grayscale image quality is considered poor, and image enhancement processing can then be performed on the grayscale image.
The image quality evaluation of the grayscale image can be implemented as follows:
perform image quality evaluation on the grayscale image using at least one image quality evaluation index, thereby obtaining the image quality evaluation value.
In a specific implementation, multiple image quality evaluation indices can be used when evaluating the grayscale image, each index corresponding to a weight. Each image quality evaluation index produces one evaluation result, and the weighted sum of these results yields the final image quality evaluation value. The image quality evaluation indices may include but are not limited to: mean, standard deviation, entropy, sharpness, signal-to-noise ratio, etc.
It should be noted that evaluating image quality with a single evaluation index has certain limitations, so multiple image quality evaluation indices can be used. Of course, more indices are not necessarily better: the more indices, the higher the computational complexity of the evaluation process, and the evaluation effect is not necessarily better. Therefore, when higher evaluation accuracy is required, 2 to 10 image quality evaluation indices can be used. The number of indices and which indices to choose depend on the specific implementation; the indices must also be selected in combination with the specific scene, since the indices chosen for image quality evaluation in a dark environment can differ from those chosen in a bright environment.
Optionally, when the required accuracy of image quality evaluation is not high, a single image quality evaluation index can be used; for example, the image to be processed can be evaluated with entropy: the larger the entropy, the better the image quality, and conversely, the smaller the entropy, the worse the image quality.
Optionally, when the required accuracy of image quality evaluation is high, multiple image quality evaluation indices can be used, each with its own weight, yielding multiple evaluation values; the final image quality evaluation value is obtained from these values and their corresponding weights. For example, with three indices A, B, and C, whose weights are a1, a2, and a3 and whose per-index evaluation values for a given image are b1, b2, and b3, the final image quality evaluation value = a1·b1 + a2·b2 + a3·b3. In general, the larger the image quality evaluation value, the better the image quality.
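The weighted evaluation above can be sketched with entropy, standard deviation, and mean as the indices. The choice of indices and weights is illustrative; a real implementation would calibrate them per scene, as the text notes:

```python
import numpy as np

def entropy_bits(gray):
    """Shannon entropy of an 8-bit grayscale histogram, in bits."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins (log of 0 undefined)
    return float(-(p * np.log2(p)).sum())

def quality_score(gray, weights):
    """Weighted sum of per-index evaluation values: sum over i of a_i * b_i."""
    values = {
        "entropy": entropy_bits(gray),
        "std": float(gray.std()),
        "mean": float(gray.mean()),
    }
    return sum(weights[name] * values[name] for name in weights)
```

A constant image scores zero entropy, matching the single-index rule of thumb that smaller entropy indicates poorer quality.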
Alternatively, after the above step 103, the method may further include the following steps:
Matching the target facial image with the preset face template, and performing an unlock operation when the target facial image successfully matches the preset face template.
When the target facial image successfully matches the preset face template, the unlock operation may be performed; when the target facial image fails to match the preset face template, the user may be prompted to perform face recognition again. The above unlock operation may be at least one of the following situations. For example, when the mobile terminal is in a screen-off state, the unlock operation may light up the screen and enter the home page of the mobile terminal, or a specified page. When the mobile terminal is in a screen-on state, the unlock operation may enter the home page of the mobile terminal, or a specified page. On the unlock page of a certain application of the mobile terminal, the unlock operation may complete the unlock and enter the page after unlocking; for example, when the mobile terminal is on a payment page, the unlock operation may complete the payment. The above specified page may be at least one of the following: the page of a certain application, or a page designated by the user.
As can be seen, with the image processing method described in this embodiment of the present invention, a facial image is obtained in a night vision environment, the color component images of a preset face template are obtained, and the facial image and the color component images are fused to obtain a target facial image. Thus, after the facial image is captured, color component images are obtained from the preset face template, and the color of the template is used to make up for the insufficient color information of the facial image collected under night vision, thereby obtaining a facial image rich in color information and improving user experience.
Consistent with the above, referring to Fig. 2, which is a schematic flowchart of an embodiment of an image processing method provided by an embodiment of the present invention. The image processing method described in this embodiment may include the following steps:
201. In a night vision environment, obtain a facial image.
202. Match the facial image with a preset face template.
When the facial image successfully matches the preset face template, the facial image may be considered to come from the owner; subsequent steps may then be performed, and the target facial image is then displayed on the display screen of the mobile terminal.
Alternatively, in the above step 202, matching the facial image with the preset face template may include the following steps:
21. Select a target area in the facial image whose sharpness meets a preset requirement, and perform feature point extraction on the target area to obtain a first feature point set;
22. Extract the outer contour of the facial image to obtain a first contour;
23. Match the first contour with a second contour of the preset face template, and match the first feature point set with the preset face template;
24. Confirm that the matching succeeds when the first contour successfully matches the second contour of the preset face template and the first feature point set successfully matches the preset face template; confirm that the matching fails when the first contour fails to match the second contour of the preset face template, or when the first feature point set fails to match the preset face template.
In this embodiment of the present invention, a target area may be selected from the facial image. The features collected in the target area are complete, which helps improve face recognition efficiency. On the other hand, since the target area is only a sub-region, a coincidental match may occur, or the recognition region may be too small; therefore, contour extraction is also performed on the facial image to obtain the first contour. During matching, the feature points of the target area are matched with the preset face template, and the first contour is also matched with the preset face template; matching is confirmed successful only when both match, and if either of the two fails to match, the matching fails. In this way, matching speed and security are ensured while the success rate is maintained.
Alternatively, the above sharpness may also be defined by the number of feature points; after all, the clearer the image, the more feature points it contains. The preset requirement is then: the number of feature points is greater than a preset number threshold, where the preset number threshold may be set by the user or by system default. The above step 21 may then be implemented as follows: determining the region of the facial image in which the number of feature points is greater than the preset number threshold as the target area.
Alternatively, the above sharpness may be computed with a specific formula, as described in the related art and not repeated here. The preset requirement is then: the sharpness value is greater than a preset sharpness threshold, where the preset sharpness threshold may be set by the user or by system default. The above step 21 may then be implemented as follows: determining the region of the facial image in which the sharpness value is greater than the preset sharpness threshold as the target area.
In addition, the above feature extraction may be implemented with the following algorithms: the Harris corner detection algorithm, the scale-invariant feature transform (SIFT), the SUSAN corner detection algorithm, etc., which will not be repeated here. The contour extraction in the above step 22 may use the following algorithms: the Hough transform, haar, canny, etc.
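As an illustrative sketch of step 21 under the feature-point-count criterion, the following selects the region with the most feature points (the grid split and the threshold value are illustrative assumptions; the feature points themselves would come from a detector such as the Harris corner detector, which is not reimplemented here):

```python
import numpy as np

def pick_target_region(points, shape, grid=(2, 2), min_count=5):
    """points: (N, 2) integer array of (row, col) feature points from any
    detector (e.g. Harris corners). Returns ((i, j), count) for the grid
    cell containing the most feature points, or None if no cell exceeds
    the preset number threshold min_count."""
    h, w = shape
    pts = np.asarray(points)
    rows = np.clip(pts[:, 0] * grid[0] // h, 0, grid[0] - 1)
    cols = np.clip(pts[:, 1] * grid[1] // w, 0, grid[1] - 1)
    counts = np.zeros(grid, dtype=int)
    np.add.at(counts, (rows, cols), 1)  # histogram of feature points per cell
    i, j = np.unravel_index(counts.argmax(), grid)
    if counts[i, j] > min_count:
        return (int(i), int(j)), int(counts[i, j])
    return None  # no region meets the preset requirement
```

In a real implementation the selected cell would then be handed to the feature-point matcher, while the contour match of step 22 runs on the whole image.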
203. When the facial image successfully matches the preset face template, obtain the color component images of the preset face template.
204. Fuse the facial image with the color component images to obtain the target facial image.
The specific descriptions of the above steps 201 to 204 may refer to the corresponding steps of the image processing method described with respect to Fig. 1C, and will not be repeated here.
As can be seen, with the image processing method described in this embodiment of the present invention, a facial image is obtained in a night vision environment and matched with a preset face template; if the match succeeds, the color component images of the preset face template are obtained, and the facial image and the color component images are fused to obtain a target facial image. Thus, after the facial image is captured, color component images are obtained from the preset face template, and the color of the template is used to make up for the insufficient color information of the facial image collected under night vision, thereby obtaining a facial image rich in color information and improving user experience.
Referring to Fig. 3, Fig. 3 shows a mobile terminal provided by an embodiment of the present invention, including an application processor (AP) and a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the AP, the programs including instructions for performing the following steps:
In a night vision environment, obtain a facial image;
Obtain the color component images of a preset face template;
Fuse the facial image with the color component images to obtain a target facial image.
In a possible example, in terms of fusing the facial image with the color component images, the program includes instructions for performing the following steps:
Convert the facial image into a grayscale image;
Fuse the grayscale image with the color component images.
In a possible example, in terms of fusing the grayscale image with the color component images, the program includes instructions for performing the following steps:
Determine a first centroid of the grayscale image and a second centroid of the color component images;
Perform overlap processing on the grayscale image and the color component images according to the first centroid and the second centroid so that the first centroid completely coincides with the second centroid, and resize the grayscale image to obtain a first image such that a first vertical distance of the first image equals a second vertical distance of the color component images, wherein the first vertical distance is the length of the vertical line segment in the first image that runs through the face region and passes through the first centroid, and the second vertical distance is the length of the vertical line segment in the color component images that runs through the face region and passes through the second centroid;
Synthesize the first image with the color component images.
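A minimal sketch of the centroid-based fusion steps above, assuming the grayscale image and the template's color components already have the same size (a full implementation would additionally rescale the grayscale image so the first and second vertical distances match); treating the "color component images" as Cr/Cb chrominance planes of a YCrCb template is an assumption for illustration:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of a 2-D image plane."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (img * ys).sum() / total, (img * xs).sum() / total

def fuse(gray, cr, cb):
    """Shift the grayscale image so its centroid coincides with the centroid
    of the color components (overlap processing), then synthesize it as the
    luma plane of a YCrCb target image."""
    gy, gx = centroid(gray)
    cy, cx = centroid(cr)
    dy, dx = int(round(cy - gy)), int(round(cx - gx))
    aligned = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
    return np.stack([aligned, cr, cb], axis=-1)
```

The returned array can then be converted to RGB for display; the luma comes from the captured night-vision image, the chrominance from the template.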
In a possible example, the program further includes instructions for performing the following steps:
Determine the facial angle corresponding to the facial image;
Select the preset face template corresponding to the facial angle from a preset face template library, and perform the step of obtaining the color component images of the preset face template.
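The template selection above can be sketched as a nearest-angle lookup; keying the preset face template library by pose angle in degrees is a hypothetical layout, not specified by the embodiment:

```python
def pick_template(face_angle, template_library):
    """Select the preset face template whose pose angle is nearest the
    detected facial angle. template_library is assumed (hypothetically)
    to be keyed by angle in degrees, e.g. {-30: ..., 0: ..., 30: ...}."""
    nearest = min(template_library, key=lambda a: abs(a - face_angle))
    return template_library[nearest]
```

The selected template's color component images are then obtained as in the fusion step.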
In a possible example, the program further includes instructions for performing the following steps:
Match the facial image with the preset face template, and perform the step of obtaining the color component images of the preset face template when the facial image successfully matches the preset face template.
The following is an apparatus for implementing the above image processing method, specifically as follows:
Referring to Fig. 4A, Fig. 4A is a schematic structural diagram of an image processing apparatus provided by this embodiment. The image processing apparatus includes a first acquisition unit 401, a second acquisition unit 402 and an image fusion unit 403, wherein:
the first acquisition unit 401 is configured to obtain a facial image in a night vision environment;
the second acquisition unit 402 is configured to obtain the color component images of a preset face template;
the image fusion unit 403 is configured to fuse the facial image with the color component images to obtain a target facial image.
Alternatively, as shown in Fig. 4B, Fig. 4B shows the specific detailed structure of the image fusion unit 403 of the image processing apparatus described in Fig. 4A. The image fusion unit 403 may include a conversion module 4031 and an image fusion module 4032, specifically as follows:
the conversion module 4031 is configured to convert the facial image into a grayscale image;
the image fusion module 4032 is configured to fuse the grayscale image with the color component images.
Alternatively, as shown in Fig. 4C, Fig. 4C shows the specific detailed structure of the image fusion module 4032 of the image fusion unit 403 described in Fig. 4B. The image fusion module 4032 may include a determining module 501, an adjusting module 502 and a synthesis module 503, specifically as follows:
the determining module 501 is configured to determine a first centroid of the grayscale image and a second centroid of the color component images;
the adjusting module 502 is configured to perform overlap processing on the grayscale image and the color component images according to the first centroid and the second centroid so that the first centroid completely coincides with the second centroid, and to resize the grayscale image to obtain a first image such that a first vertical distance of the first image equals a second vertical distance of the color component images, wherein the first vertical distance is the length of the vertical line segment in the first image that runs through the face region and passes through the first centroid, and the second vertical distance is the length of the vertical line segment in the color component images that runs through the face region and passes through the second centroid;
the synthesis module 503 is configured to synthesize the first image with the color component images.
Alternatively, as shown in Fig. 4D, Fig. 4D shows a modified structure of the image processing apparatus described in Fig. 4A. Compared with Fig. 4A, it may further include a determining unit 404 and a selection unit 405, specifically as follows:
the determining unit 404 is configured to determine the facial angle corresponding to the facial image;
the selection unit 405 is configured to select the preset face template corresponding to the facial angle from a preset face template library, and the second acquisition unit 402 performs the step of obtaining the color component images of the preset face template.
Alternatively, as shown in Fig. 4E, Fig. 4E shows a modified structure of the image processing apparatus described in Fig. 4A. Compared with Fig. 4A, it may further include a matching unit 406, specifically as follows:
the matching unit 406 is configured to match the facial image with the preset face template, and to perform the step of obtaining the color component images of the preset face template when the facial image successfully matches the preset face template.
As can be seen, the image processing apparatus described in this embodiment of the present invention obtains a facial image in a night vision environment, obtains the color component images of a preset face template, and fuses the facial image with the color component images to obtain a target facial image. Thus, after the facial image is captured, the color of the template is used to make up for the insufficient color information of the facial image collected under night vision, thereby obtaining a facial image rich in color information and improving user experience.
It can be understood that the functions of each program module of the image processing apparatus of this embodiment may be specifically implemented according to the methods in the above method embodiments; for the specific implementation process, reference may be made to the related descriptions of the above method embodiments, which will not be repeated here.
An embodiment of the present invention further provides another mobile terminal. As shown in Fig. 5, for convenience of description, only the parts related to the embodiment of the present invention are shown; for specific technical details not disclosed, please refer to the method part of the embodiments of the present invention. The mobile terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, etc. The following takes a mobile phone as an example:
Fig. 5 is a block diagram of a partial structure of a mobile phone related to the mobile terminal provided by an embodiment of the present invention. Referring to Fig. 5, the mobile phone includes a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a Wireless Fidelity (WiFi) module 970, an application processor (AP) 980, a power supply 990 and other components. Those skilled in the art will understand that the mobile phone structure shown in Fig. 5 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine some components, or use a different arrangement of components.
Each component of the mobile phone is described in detail below with reference to Fig. 5:
The input unit 930 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 930 may include a touch display screen 933, a face recognition device 931 and other input devices 932. The face recognition device 931 may refer to the above structure; its specific composition may refer to the above description and is not repeated here. The input unit 930 may also include other input devices 932. Specifically, the other input devices 932 may include, but are not limited to, one or more of physical keys, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, etc.
The AP 980 is configured to perform the following steps:
In a night vision environment, obtain a facial image;
Obtain the color component images of a preset face template;
Fuse the facial image with the color component images to obtain a target facial image.
The AP 980 is the control center of the mobile phone. It connects the various parts of the entire mobile phone using various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby monitoring the mobile phone as a whole. Optionally, the AP 980 may include one or more processing units, and a processing unit may be an artificial intelligence chip or a quantum chip. Preferably, the AP 980 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interface, application programs, etc., and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the AP 980.
In addition, the memory 920 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
The RF circuit 910 may be used to receive and transmit information. Generally, the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier (LNA), a duplexer, etc. In addition, the RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), etc.
The mobile phone may further include at least one sensor 950, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the touch display screen according to the brightness of ambient light, and the proximity sensor may turn off the touch display screen and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer sensor may detect the magnitude of acceleration in all directions (generally three axes), may detect the magnitude and direction of gravity when stationary, and may be used for applications that recognize mobile phone posture (such as horizontal/vertical screen switching, related games, magnetometer pose calibration) and vibration-recognition-related functions (such as a pedometer and tapping). Other sensors that may also be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, are not described here.
The audio circuit 960, a speaker 961 and a microphone 962 may provide an audio interface between the user and the mobile phone. The audio circuit 960 may transmit the electrical signal converted from received audio data to the speaker 961, which converts it into a sound signal for playback. On the other hand, the microphone 962 converts a collected sound signal into an electrical signal, which is received by the audio circuit 960 and converted into audio data; after the audio data is processed by the AP 980, it is sent via the RF circuit 910 to, for example, another mobile phone, or the audio data is output to the memory 920 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone may help the user send and receive e-mail, browse web pages, access streaming media, etc.; it provides the user with wireless broadband Internet access. Although Fig. 5 shows the WiFi module 970, it can be understood that it is not an essential component of the mobile phone and may be omitted as needed without changing the essence of the invention.
The mobile phone also includes a power supply 990 (such as a battery) that supplies power to all components. Preferably, the power supply may be logically connected to the AP 980 through a power management system, so as to implement functions such as charging management, discharging management and power consumption management through the power management system.
Although not shown, the mobile phone may also include a camera, a Bluetooth module, etc., which are not described here.
In the embodiments shown in Fig. 1C or Fig. 2 above, each step of the method flow may be implemented based on the structure of this mobile phone.
In the embodiments shown in Fig. 3 and Figs. 4A to 4E above, the function of each unit may be implemented based on the structure of this mobile phone.
An embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any image processing method described in the above method embodiments.
An embodiment of the present invention further provides a computer program product, the computer program product including a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps of any image processing method described in the above method embodiments.
It should be noted that, for brevity of description, the foregoing method embodiments are all expressed as a series of action combinations; however, those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the descriptions of the embodiments each have their own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention. The foregoing memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disc.
Those of ordinary skill in the art will understand that all or part of the steps of the various methods of the above embodiments may be completed by a program instructing related hardware. The program may be stored in a computer-readable memory, and the memory may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, etc.
The embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the descriptions of the above embodiments are only intended to help understand the method and core idea of the present invention. Meanwhile, for those of ordinary skill in the art, there may be changes in the specific implementations and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (14)

  1. A mobile terminal, characterized by comprising an application processor (AP), and a face recognition device connected to the AP, wherein:
    the face recognition device is configured to obtain a facial image in a night vision environment;
    the AP is configured to obtain the color component images of a preset face template, and to fuse the facial image with the color component images to obtain a target facial image.
  2. The mobile terminal according to claim 1, characterized in that, in terms of fusing the facial image with the color component images, the AP is specifically configured to:
    convert the facial image into a grayscale image;
    fuse the grayscale image with the color component images.
  3. The mobile terminal according to claim 2, characterized in that, in terms of fusing the grayscale image with the color component images, the AP is specifically configured to:
    determine a first centroid of the grayscale image and a second centroid of the color component images;
    perform overlap processing on the grayscale image and the color component images according to the first centroid and the second centroid so that the first centroid completely coincides with the second centroid, and resize the grayscale image to obtain a first image such that a first vertical distance of the first image equals a second vertical distance of the color component images, wherein the first vertical distance is the length of the vertical line segment in the first image that runs through the face region and passes through the first centroid, and the second vertical distance is the length of the vertical line segment in the color component images that runs through the face region and passes through the second centroid;
    synthesize the first image with the color component images.
  4. The mobile terminal according to any one of claims 1 to 3, characterized in that the AP is further configured to:
    determine the facial angle corresponding to the facial image;
    select the preset face template corresponding to the facial angle from a preset face template library, and perform the step of obtaining the color component images of the preset face template.
  5. The mobile terminal according to any one of claims 1 to 4, characterized in that the AP is further configured to:
    match the facial image with the preset face template, and perform the step of obtaining the color component images of the preset face template when the facial image successfully matches the preset face template.
  6. An image processing method, characterized by being applied to a mobile terminal comprising an application processor (AP) and a face recognition device connected to the AP, the method comprising:
    obtaining, by the face recognition device, a facial image in a night vision environment;
    obtaining, by the AP, the color component images of a preset face template, and fusing the facial image with the color component images to obtain a target facial image.
  7. An image processing method, characterized by comprising:
    obtaining a facial image in a night vision environment;
    obtaining the color component images of a preset face template;
    fusing the facial image with the color component images to obtain a target facial image.
  8. The method according to claim 7, characterized in that fusing the facial image with the color component images comprises:
    converting the facial image into a grayscale image;
    fusing the grayscale image with the color component images.
  9. The method according to claim 8, wherein performing image fusion on the grayscale image and the color component images comprises:
    determining a first centroid of the grayscale image and a second centroid of the color component images;
    performing overlap processing on the grayscale image and the color component images according to the first centroid and the second centroid so that the first centroid completely coincides with the second centroid, and resizing the grayscale image to obtain a first image whose first vertical distance is equal to a second vertical distance of the color component images, wherein the first vertical distance is the length of the vertical line segment in the first image that passes through the face region and through the first centroid, and the second vertical distance is the length of the vertical line segment in the color component images that passes through the face region and through the second centroid;
    synthesizing the first image and the color component images.
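The alignment step of claim 9 can be sketched with binary face-region masks: compute each image's centroid, measure the vertical extent of the face region along the column through each centroid, and scale the grayscale image by the ratio of the two extents. The mask representation and all names below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def centroid(mask):
    """Center of mass (row, col) of a binary face-region mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def vertical_distance(mask, col):
    """Length of the vertical segment through the face region at column
    `col` -- the 'vertical distance' of claim 9."""
    rows = np.nonzero(mask[:, int(round(col))])[0]
    return int(rows.max() - rows.min() + 1)

def resize_factor(gray_mask, color_mask):
    """Scale factor to apply to the grayscale image so that its first
    vertical distance equals the second vertical distance of the color
    component images; after scaling, the images are translated so the
    two centroids coincide."""
    _, c1 = centroid(gray_mask)
    _, c2 = centroid(color_mask)
    return vertical_distance(color_mask, c2) / vertical_distance(gray_mask, c1)
```

With the factor in hand, any standard resampling routine can produce the "first image", which is then overlaid on the color component images at the common centroid and synthesized.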
  10. The method according to any one of claims 7 to 9, further comprising:
    determining a face angle corresponding to the facial image;
    selecting, from a preset face template library, the preset face template corresponding to the face angle, and then performing the step of acquiring the color component images of the preset face template.
  11. The method according to any one of claims 7 to 10, further comprising:
    matching the facial image against the preset face template, and performing the step of acquiring the color component images of the preset face template when the facial image successfully matches the preset face template.
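Claim 11 gates the fusion on a successful match between the capture and the template but does not specify the matcher. One simple stand-in is zero-normalized cross-correlation on grayscale pixels; the function name and the 0.5 threshold are illustrative assumptions only.

```python
import numpy as np

def template_matches(face_gray, tmpl_gray, threshold=0.5):
    """Return True when the zero-normalized cross-correlation between the
    captured face image and the preset face template (both grayscale,
    same shape) reaches the threshold -- the 'match is successful' gate
    of claim 11."""
    a = face_gray - face_gray.mean()
    b = tmpl_gray - tmpl_gray.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom) >= threshold
```

Only when this predicate holds would the method go on to acquire the template's color component images and fuse.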
  12. An image processing apparatus, comprising:
    a first acquisition unit, configured to acquire a facial image in a night-vision environment;
    a second acquisition unit, configured to acquire the color component images of a preset face template;
    an image fusion unit, configured to perform image fusion on the facial image and the color component images to obtain a target facial image.
  13. A mobile terminal, comprising an application processor (AP), a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the AP, the programs including instructions for performing the method of any one of claims 7 to 11.
  14. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to perform the method of any one of claims 7 to 11.
CN201710889988.5A 2017-09-27 2017-09-27 Image processing method and related product Active CN107633499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710889988.5A CN107633499B (en) 2017-09-27 2017-09-27 Image processing method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710889988.5A CN107633499B (en) 2017-09-27 2017-09-27 Image processing method and related product

Publications (2)

Publication Number Publication Date
CN107633499A true CN107633499A (en) 2018-01-26
CN107633499B CN107633499B (en) 2020-09-01

Family

ID=61102727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710889988.5A Active CN107633499B (en) 2017-09-27 2017-09-27 Image processing method and related product

Country Status (1)

Country Link
CN (1) CN107633499B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345470A * 2018-09-07 2019-02-15 South China University of Technology Facial image fusion method and system
CN109816628A * 2018-12-20 2019-05-28 Shenzhen Intellifusion Technologies Co., Ltd. Face evaluation method and related product
CN110162953A * 2019-05-31 2019-08-23 OPPO (Chongqing) Intelligent Technology Co., Ltd. Biometric identification method and related product
CN110236509A * 2018-03-07 2019-09-17 Taipei University of Technology Method for real-time analysis of physiological characteristics in video
CN110969046A * 2018-09-28 2020-04-07 Shenzhen Intellifusion Technologies Co., Ltd. Face recognition method, face recognition device and computer-readable storage medium
CN112102623A * 2020-08-24 2020-12-18 Shenzhen Intellifusion Technologies Co., Ltd. Traffic violation identification method and device and intelligent wearable device
CN112178427A * 2020-09-25 2021-01-05 Guangxi Zhongke Yunchuang Intelligent Technology Co., Ltd. Anti-damage structure of face recognition snapshot camera
CN113556465A * 2021-06-10 2021-10-26 Shenzhen Shenglixin Technology Co., Ltd. AI-based video linkage perception monitoring system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065127A * 2012-12-30 2013-04-24 Xinzhen Electronic Technology (Beijing) Co., Ltd. Method and device for recognizing human faces in foggy-day images
CN103914820A * 2014-03-31 2014-07-09 Huazhong University of Science and Technology Image haze removal method and system based on image layer enhancement
CN106156730A * 2016-06-30 2016-11-23 Tencent Technology (Shenzhen) Co., Ltd. Facial image synthesis method and apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lian Weilong: "Research on Image Fusion Algorithms Based on Multi-scale Transform", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110236509A * 2018-03-07 2019-09-17 Taipei University of Technology Method for real-time analysis of physiological characteristics in video
CN109345470A * 2018-09-07 2019-02-15 South China University of Technology Facial image fusion method and system
CN109345470B * 2018-09-07 2021-11-23 South China University of Technology Face image fusion method and system
CN110969046A * 2018-09-28 2020-04-07 Shenzhen Intellifusion Technologies Co., Ltd. Face recognition method, face recognition device and computer-readable storage medium
CN110969046B * 2018-09-28 2023-04-07 Shenzhen Intellifusion Technologies Co., Ltd. Face recognition method, face recognition device and computer-readable storage medium
CN109816628A * 2018-12-20 2019-05-28 Shenzhen Intellifusion Technologies Co., Ltd. Face evaluation method and related product
CN110162953A * 2019-05-31 2019-08-23 OPPO (Chongqing) Intelligent Technology Co., Ltd. Biometric identification method and related product
CN112102623A * 2020-08-24 2020-12-18 Shenzhen Intellifusion Technologies Co., Ltd. Traffic violation identification method and device and intelligent wearable device
CN112178427A * 2020-09-25 2021-01-05 Guangxi Zhongke Yunchuang Intelligent Technology Co., Ltd. Anti-damage structure of face recognition snapshot camera
CN113556465A * 2021-06-10 2021-10-26 Shenzhen Shenglixin Technology Co., Ltd. AI-based video linkage perception monitoring system

Also Published As

Publication number Publication date
CN107633499B (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN107633499A Image processing method and related product
CN107679482A Unlock control method and related product
CN107832675A Photographing processing method and related product
CN107862265A Image processing method and related product
CN107480496A Unlock control method and related product
CN107590461A Face recognition method and related product
CN107679481A Unlock control method and related product
CN107609514A Face recognition method and related product
CN107292285A Living iris detection method and related product
CN109241908A Face recognition method and related apparatus
CN107506687A Living-body detection method and related product
CN107423699A Living-body detection method and related product
CN107463818A Unlock control method and related product
CN107451455A Unlock control method and related product
CN107506696A Anti-counterfeiting processing method and related product
CN107657218A Face recognition method and related product
CN109117725A Face recognition method and apparatus
CN106558025A Picture processing method and apparatus
CN107862266A Image processing method and related product
CN107392832A Image processing method and related product
CN107633235A Unlock control method and related product
CN107644219A Face registration method and related product
CN107451454A Unlock control method and related product
CN107613550A Unlock control method and related product
CN107403147A Living iris detection method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant