CN107423699B - Liveness detection method and related product - Google Patents
Liveness detection method and related product
- Publication number
- CN107423699B CN107423699B CN201710576784.6A CN201710576784A CN107423699B CN 107423699 B CN107423699 B CN 107423699B CN 201710576784 A CN201710576784 A CN 201710576784A CN 107423699 B CN107423699 B CN 107423699B
- Authority
- CN
- China
- Prior art keywords
- target area
- brightness value
- eye image
- region
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/34—Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Ophthalmology & Optometry (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Eye Examination Apparatus (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the invention disclose a liveness detection method and related products. The method comprises: obtaining a human-eye image; determining, within the pupil region of the eye image, a target area whose first average brightness value is greater than a first preset threshold; and judging, according to the target area, whether the eye image comes from a living body. With the embodiments of the invention, a highlight area within the pupil region can be isolated from the eye image, and whether the eye image comes from a living body can then be confirmed according to that highlight area, so that liveness detection is achieved.
Description
Technical field
The present invention relates to the technical field of mobile terminals, and in particular to a liveness detection method and related products.
Background
With the widespread adoption of mobile terminals (mobile phones, tablet computers, etc.), the applications that mobile terminals can support keep growing and their functions keep getting stronger; mobile terminals are developing in diversified, personalized directions and have become indispensable electronic appliances in users' lives.
At present, iris recognition is increasingly favored by mobile terminal manufacturers, and the security of iris recognition is one of their major concerns. For security reasons, liveness detection is usually performed on the iris before iris recognition; how to implement such liveness detection is a problem that urgently needs to be solved.
Summary of the invention
Embodiments of the invention provide a liveness detection method and related products, with which liveness detection can be implemented.
In a first aspect, an embodiment of the invention provides a liveness detection method, the method comprising:
obtaining a human-eye image;
determining, within the pupil region of the eye image, a target area, the first average brightness value of the target area being greater than a first preset threshold;
judging, according to the target area, whether the eye image comes from a living body.
In a second aspect, an embodiment of the invention provides a mobile terminal including a camera and an application processor (Application Processor, AP), wherein
the camera is configured to obtain a human-eye image and send the eye image to the AP;
the AP is configured to determine, within the pupil region of the eye image, a target area, the first average brightness value of the target area being greater than a first preset threshold;
the AP is further configured to judge, according to the target area, whether the eye image comes from a living body.
In a third aspect, an embodiment of the invention provides a liveness detection device, the device including an acquiring unit, a first determination unit and a judging unit, wherein
the acquiring unit is configured to obtain a human-eye image;
the first determination unit is configured to determine, within the pupil region of the eye image, a target area, the first average brightness value of the target area being greater than a first preset threshold;
the judging unit is configured to judge, according to the target area, whether the eye image comes from a living body.
In a fourth aspect, an embodiment of the invention provides a mobile terminal including a camera, an application processor AP and a memory, as well as one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for performing some or all of the steps described in the first aspect.
In a fifth aspect, an embodiment of the invention provides a computer-readable storage medium for storing a computer program, wherein the computer program causes a computer to perform some or all of the steps described in the first aspect of the embodiments of the invention.
In a sixth aspect, an embodiment of the invention provides a computer program product including a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the invention. The computer program product may be a software installation package.
Implementing the embodiments of the invention yields the following beneficial effects:
As can be seen, in the embodiments of the invention a human-eye image is obtained; a target area whose first average brightness value is greater than a first preset threshold is determined within the pupil region of the eye image; and whether the eye image comes from a living body is judged according to the target area. Thus a highlight area within the pupil region can be isolated from the eye image, and whether the eye image comes from a living body can then be confirmed according to that highlight area, so that liveness detection is achieved.
Description of the drawings
In order to explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1A is a structural schematic diagram of a smartphone provided in an embodiment of the invention;
Figure 1B is a structural schematic diagram of a mobile terminal provided in an embodiment of the invention;
Fig. 1C is another structural schematic diagram of a mobile terminal provided in an embodiment of the invention;
Fig. 1D is a flow diagram of a liveness detection method disclosed in an embodiment of the invention;
Fig. 1E is a schematic illustration of the structure of the human eye disclosed in an embodiment of the invention;
Fig. 1F is another schematic illustration of the structure of the human eye disclosed in an embodiment of the invention;
Fig. 2 is a flow diagram of another liveness detection method disclosed in an embodiment of the invention;
Fig. 3 is a structural schematic diagram of a mobile terminal provided in an embodiment of the invention;
Fig. 4A is a structural schematic diagram of a liveness detection device provided in an embodiment of the invention;
Fig. 4B is a structural schematic diagram of the acquiring unit of the liveness detection device of Fig. 4A provided in an embodiment of the invention;
Fig. 4C is a structural schematic diagram of the first extraction module of the acquiring unit of Fig. 4B provided in an embodiment of the invention;
Fig. 4D is another structural schematic diagram of a liveness detection device provided in an embodiment of the invention;
Fig. 4E is another structural schematic diagram of a liveness detection device provided in an embodiment of the invention;
Fig. 5 is a structural schematic diagram of another mobile terminal disclosed in an embodiment of the invention.
Specific embodiments
In order to enable those skilled in the art to better understand the solution of the invention, the technical scheme in the embodiments of the invention is described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. Based on the embodiments of the invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the invention.
The terms "first", "second" and the like in the description, claims and drawings of this specification are used to distinguish different objects, not to describe a particular order. In addition, the terms "include" and "have" and any variants thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, product or device that contains a series of steps or units is not limited to the listed steps or units, but may optionally further comprise steps or units not listed, or may optionally further comprise other steps or units intrinsic to such processes, methods, products or devices.
Reference herein to "an embodiment" means that a particular feature, structure or characteristic described in conjunction with the embodiment can be included in at least one embodiment of the invention. The appearances of this phrase in various places in the description do not necessarily all refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
Mobile terminals involved in the embodiments of the invention may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to wireless modems, as well as various forms of user equipment (User Equipment, UE), mobile stations (Mobile Station, MS), terminal devices and so on. For convenience of description, the devices mentioned above are collectively referred to as mobile terminals. The embodiments of the invention are introduced in detail below. In an example smartphone 100 as shown in Figure 1A, the iris identification device of the smartphone 100 may include an infrared fill light 21 and an infrared camera 22. During the operation of the iris identification device, the light of the infrared fill light 21 strikes the iris and is reflected back by the iris to the infrared camera 22, and the iris identification device acquires the iris image. In addition, the visible-light camera 23 is a front camera, and the fill light 24 may be a visible-light fill light that can be used in low-light environments to assist the front camera in completing a shot.
Please refer to Figure 1B, which shows a structural schematic diagram of a mobile terminal 100. The mobile terminal 100 includes an application processor AP 110, a camera 120 and an iris identification device 130, wherein the iris identification device 130 may be integrated with the camera 120, or the iris identification device and the camera 120 may exist separately; the AP 110 connects the camera 120 and the iris identification device 130 through a bus 150. Further, please refer to Fig. 1C, which shows a modified structure of the mobile terminal 100 of Figure 1B; compared with Figure 1B, Fig. 1C further includes a fill light 160, which is a visible-light fill light mainly used to provide supplementary light when the camera 120 takes pictures.
In some possible embodiments, the camera 120 is configured to obtain a human-eye image and send the eye image to the AP 110;
the AP 110 is configured to determine, within the pupil region of the eye image, a target area, the first average brightness value of the target area being greater than a first preset threshold;
the AP 110 is further configured to judge, according to the target area, whether the eye image comes from a living body.
In some possible embodiments, with regard to judging, according to the target area, whether the eye image comes from a living body, the AP 110 is specifically configured to:
perform feature extraction on the target area to obtain a target feature set; feed the target feature set to a preset liveness detection classifier to obtain a result, and judge, according to that result, whether the eye image comes from a living body.
In some possible embodiments, with regard to performing feature extraction on the target area to obtain the target feature set, the AP 110 is specifically configured to:
perform image denoising processing on the target area; determine, according to a correspondence between brightness values and smoothing coefficients, the target smoothing coefficient corresponding to the first average brightness value; smooth the denoised target area according to the target smoothing coefficient; and perform feature extraction on the smoothed target area to obtain the target feature set.
In some possible embodiments, the AP 110 is also specifically configured to:
determine the iris region in the eye image; determine a second average brightness value corresponding to the iris region; and, when the difference between the first average brightness value and the second average brightness value is greater than a second preset threshold, execute the step of judging, according to the target area, whether the eye image comes from a living body.
In some possible embodiments, the mobile terminal is provided with a fill light 160;
the fill light 160 is started when the ambient brightness is lower than a preset brightness threshold, the brightness of the fill light is adjusted, and the camera 120 is controlled to take pictures.
Please refer to Fig. 1D, which is a flow diagram of an iris liveness detection method provided in an embodiment of the invention. The method is applied to a mobile terminal including a camera and an application processor AP; the pictorial diagram and structure charts of the mobile terminal can refer to Figures 1A-1C. The iris liveness detection method includes:
101. Obtain a human-eye image.
The eye image may be obtained using the iris identification device, or alternatively by the camera. In the embodiments of the invention the iris identification device may be mounted in the same side region as the touch display screen, for example embedded in the touch display screen, or mounted beside the front camera. The above eye image may be the entire human eye, part of the human eye (for example, an iris image), or an image containing the eye image (for example, a face image).
102. Determine, within the pupil region of the eye image, a target area, the first average brightness value of the target area being greater than a first preset threshold.
As shown in Figs. 1E and 1F, Fig. 1E shows a rough schematic of the structure of the human eye. It can be seen that an eye image may contain a pupil region, a white-of-the-eye region and an iris region; under the influence of an external light source, a highlight area also exists in the pupil region, as shown in Fig. 1F. The embodiment of the invention can extract the pupil region from the eye image and, further, extract the target area (highlight area) from the pupil region; the average brightness value of this target area (i.e., the first average brightness value) is greater than the first preset threshold, which can be set by the user or defaulted by the system. Under normal conditions, during shooting by the user, part of the pupil region of a living body will be noticeably brighter than other regions.
Optionally, step 102 of determining, within the pupil region of the eye image, a target area may include the following steps:
A11. Determine the maximum brightness value in the pupil region;
A12. Choose the region within a preset radius centered on the pixel having the maximum brightness value as the target area.
The above preset radius can be set by the user or defaulted by the system. The brightness value of each pixel in the pupil region can be obtained, and the maximum brightness value chosen; the region within the preset radius centered on the pixel having that maximum brightness value then serves as the target area.
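Steps A11/A12 can be sketched in a few lines of array code. This is an illustrative implementation assuming the pupil region is available as a 2-D NumPy array of brightness values; the patent does not prescribe any particular library or data layout.

```python
import numpy as np

def target_area_by_max_brightness(pupil, radius):
    """A11/A12 sketch: locate the brightest pixel in the pupil region and take
    the disc of the given preset radius around it as the target area.
    `pupil` is a 2-D array of brightness values."""
    # A11: coordinates of the maximum brightness value
    cy, cx = np.unravel_index(np.argmax(pupil), pupil.shape)
    # A12: boolean mask of pixels within `radius` of that point
    ys, xs = np.ogrid[:pupil.shape[0], :pupil.shape[1]]
    mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2
    # Return the target-area mask and its average (first) brightness value
    return mask, pupil[mask].mean()
```

The returned average is the "first average brightness value" that step 102 compares against the first preset threshold.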
Optionally, step 102 of determining, within the pupil region of the eye image, a target area may include the following steps:
B11. Divide the pupil region into X regions, X being an integer greater than 1;
B12. Calculate the average brightness value of each of the X regions to obtain X average brightness values;
B13. Choose, from the X average brightness values, the region corresponding to the maximum average brightness value as the target area.
The X regions may be equal in size, and X can be set by the user or defaulted by the system. The pupil region is thus divided into X regions, X being an integer greater than 1; the average brightness value of each of the X regions is calculated to obtain X average brightness values; the maximum among these X average brightness values is chosen, and its corresponding region is taken as the target area.
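Steps B11-B13 can likewise be sketched as a grid scan. This assumes, for illustration only, that the X regions form a rows-by-cols grid of equal blocks over a rectangular pupil array; the patent only requires X equal-size regions.

```python
import numpy as np

def target_area_by_blocks(pupil, rows, cols):
    """B11-B13 sketch: split the pupil region into rows*cols equal blocks
    (X = rows*cols), compute each block's average brightness, and return the
    index and average brightness of the brightest block."""
    h, w = pupil.shape
    bh, bw = h // rows, w // cols          # block size (assumes exact division)
    best, best_avg = None, -1.0
    for r in range(rows):
        for c in range(cols):
            block = pupil[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            avg = block.mean()             # B12: average brightness of this block
            if avg > best_avg:             # B13: keep the maximum
                best, best_avg = (r, c), avg
    return best, best_avg
```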
Optionally, between step 101 and step 102, the following step may also be included:
perform image enhancement processing on the eye image.
Image enhancement processing may include, but is not limited to: image denoising (for example, wavelet-transform denoising or highlight denoising), image restoration (for example, Wiener filtering), and night-vision enhancement algorithms (for example, histogram equalization, gray-scale stretching, etc.). After image enhancement processing is performed on the iris image, the quality of the iris image can be improved to a certain extent. Further, when step 102 is executed, the target area in the pupil region can be determined in the enhanced eye image.
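As one concrete example of the night-vision enhancement mentioned above, histogram equalization of an 8-bit grayscale image can be sketched as follows. This is a minimal illustration, not the patent's method; in practice a library routine (for example, OpenCV's equalizeHist) would typically be used.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization sketch for an 8-bit grayscale image: map each
    gray level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]              # first nonzero cumulative count
    # Build a lookup table spreading the occupied gray levels over 0..255
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```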
Optionally, between step 101 and step 102, the following steps may also be included:
A1. Perform image quality evaluation on the eye image to obtain an image quality evaluation value;
A2. Execute step 102 when the image quality evaluation value is greater than a preset quality threshold.
The preset quality threshold can be set by the user or defaulted by the system. Image quality evaluation can first be performed on the eye image to obtain an image quality evaluation value, and the quality of the eye image is judged good or bad by this value: when the image quality evaluation value is greater than the preset quality threshold, the eye image is considered high-quality and step 102 is executed; when the image quality evaluation value is less than the preset quality threshold, the eye image is considered low-quality and step 102 need not be executed.
In step A1, at least one image quality evaluation index can be used to perform image quality evaluation on the iris image, thereby obtaining an image quality evaluation value.
Multiple image quality evaluation indexes may be used, each corresponding to a weight; in this way, when each image quality evaluation index evaluates the iris image, an evaluation result is obtained, and a weighted operation finally yields the overall image quality evaluation value. Image quality evaluation indexes may include, but are not limited to: mean, standard deviation, entropy, sharpness, signal-to-noise ratio, etc.
It should be noted that, since evaluating image quality with a single evaluation index has certain limitations, multiple image quality evaluation indexes can be used to evaluate image quality. Of course, when evaluating image quality, more indexes are not always better: the more indexes there are, the higher the computational complexity of the evaluation process, and the evaluation effect is not necessarily better. Therefore, in situations with higher requirements on image quality evaluation, 2 to 10 image quality evaluation indexes can be used. Specifically, how many indexes are chosen and which ones depend on the specific implementation situation. The indexes may also be chosen in combination with the specific scene: the indexes chosen for image quality evaluation in a dark environment may differ from those chosen in a bright environment.
Optionally, when the required precision of image quality evaluation is not high, a single image quality evaluation index can be used. For example, the image quality evaluation value of the image to be processed can be computed with entropy: the larger the entropy, the better the image quality, and conversely, the smaller the entropy, the poorer the image quality.
Optionally, when the required precision of image quality evaluation is higher, multiple image quality evaluation indexes can be used to evaluate the image. When multiple indexes are used, a weight can be set for each index, multiple image quality evaluation values are obtained, and the final image quality evaluation value is obtained from these values and their corresponding weights. For example, suppose three indexes are used: index A, index B and index C, with weights a1, a2 and a3 respectively. When A, B and C evaluate a certain image, the evaluation value corresponding to A is b1, that corresponding to B is b2 and that corresponding to C is b3; the final image quality evaluation value = a1*b1 + a2*b2 + a3*b3. Under normal conditions, the larger the image quality evaluation value, the better the image quality.
103. Judge, according to the target area, whether the eye image comes from a living body.
Whether the eye image comes from a living body can be judged according to the target area. For example, the current ambient brightness can be obtained, and the target brightness range corresponding to the current ambient brightness determined according to a preset mapping relation between ambient brightness and target-area brightness; when the first average brightness value falls within that target brightness range, the eye image is confirmed to come from a living body.
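The mapping check described in step 103 can be sketched as a table lookup. The table layout and all threshold values below are illustrative assumptions; the patent only states that a preset mapping between ambient brightness and target-area brightness exists.

```python
def is_live_by_brightness(ambient, first_avg, brightness_map):
    """Step 103 sketch: `brightness_map` holds rows of
    (ambient_low, ambient_high, target_low, target_high); the image is judged
    to come from a living body when the first average brightness value falls
    in the target range matched by the current ambient brightness."""
    for amb_lo, amb_hi, tgt_lo, tgt_hi in brightness_map:
        if amb_lo <= ambient < amb_hi:
            return tgt_lo <= first_avg <= tgt_hi
    return False  # no mapping entry covers this ambient brightness
```

Usage with a hypothetical two-row table: `is_live_by_brightness(50, 150, [(0, 100, 120, 200), (100, 1000, 180, 256)])` returns True because a first average brightness of 150 lies within the 120-200 range mapped to ambient brightness 50.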
Optionally, in step 103, judging, according to the target area, whether the eye image comes from a living body may include the following steps:
31. Perform feature extraction on the target area to obtain a target feature set;
32. Feed the target feature set to the preset liveness detection classifier to obtain a result, and judge, according to that result, whether the eye image comes from a living body.
The liveness detection classifier may include, but is not limited to: a support vector machine (Support Vector Machine, SVM), a genetic-algorithm classifier, a neural-network classifier, a cascade classifier (such as genetic algorithm + SVM), etc. Feature extraction on the target area yields the target feature set; the feature extraction can be realized with the following algorithms: the Harris corner detection algorithm, scale-invariant feature transform (Scale Invariant Feature Transform, SIFT), the SUSAN corner detection algorithm, etc., which are not described in detail here. The preset liveness detection classifier then processes the target feature set to obtain a result, and whether the eye image comes from a living body is judged according to that result. The result can be a probability value; for example, with a probability value of 80% or above the eye image is considered to come from a living body, while below that it is considered to come from a non-living body, which may be one of the following: a 3D-printed human eye, a human eye in a photo, or a human eye without vital signs.
The preset liveness detection classifier can be set up before the embodiments of the invention are executed, mainly through the following steps B1-B7:
B1. Obtain a positive sample set, the positive sample set including the above target areas of A living bodies, A being a positive integer;
B2. Obtain a negative sample set, the negative sample set including the above target areas of B non-living bodies, B being a positive integer;
B3. Perform feature extraction on the positive sample set to obtain A groups of features;
B4. Perform feature extraction on the negative sample set to obtain B groups of features;
B5. Train the A groups of features with a first specified classifier to obtain a first-class object classifier;
B6. Train the B groups of features with a second specified classifier to obtain a second-class object classifier;
B7. Use the first-class object classifier and the second-class object classifier together as the preset liveness detection classifier.
Here the target area always refers to the region of the pupil whose average brightness value is greater than the first preset threshold. A and B can be set by the user; the positive sample set includes A positive samples, each being the target area of a living pupil, and the negative sample set includes B negative samples, each being the target area of a non-living pupil, and the larger the specific numbers, the better the classification effect of the classifier. The specific manner of feature extraction in steps B3 and B4 can refer to step 31. In addition, the first specified classifier and the second specified classifier can be the same classifier or different classifiers; either may include, but is not limited to: a support vector machine, a genetic-algorithm classifier, a neural-network classifier, a cascade classifier (such as genetic algorithm + SVM), etc.
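The positive/negative two-model setup of steps B1-B7 can be sketched as below. The patent names SVMs, neural networks and cascade classifiers; a nearest-centroid rule is substituted here purely to keep the sketch self-contained and dependency-free, so it is a stand-in, not the patent's classifier.

```python
import numpy as np

class SimpleLivenessClassifier:
    """Stand-in for steps B1-B7: one model fit on living-body features,
    one on non-living features, combined at prediction time."""

    def fit(self, positive_features, negative_features):
        # B3/B5: features extracted from the A living-body target areas
        self.live_centroid = np.mean(positive_features, axis=0)
        # B4/B6: features extracted from the B non-living target areas
        self.spoof_centroid = np.mean(negative_features, axis=0)
        return self

    def predict_live(self, feature):
        # B7: the two models jointly decide -- here, by centroid distance
        d_live = np.linalg.norm(feature - self.live_centroid)
        d_spoof = np.linalg.norm(feature - self.spoof_centroid)
        return d_live < d_spoof
```

In a real implementation each centroid would be replaced by a trained classifier such as an SVM, with the feature vectors coming from the Harris/SIFT/SUSAN extraction mentioned in step 31.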
Optionally, in step 31, performing feature extraction on the target area to obtain the target feature set may include the following steps:
311. Perform image denoising processing on the target area;
312. Determine, according to a correspondence between brightness values and smoothing coefficients, the target smoothing coefficient corresponding to the first average brightness value;
313. Smooth the denoised target area according to the target smoothing coefficient;
314. Perform feature extraction on the smoothed target area to obtain the target feature set.
The image denoising processing may include, but is not limited to: denoising with a wavelet transform, denoising with a mean filter, denoising with a median filter, denoising with a morphological noise filter, etc. After image denoising is performed on the target area, the target smoothing coefficient corresponding to the first average brightness value can be determined according to the preset correspondence between brightness values and smoothing coefficients; the denoised target area is then smoothed according to the target smoothing coefficient, which can, to a certain extent, improve the image quality of the target area. Feature extraction can then be performed on the smoothed target area to obtain the target feature set. In this way, more features can be extracted from the target area. The target feature set can be a set of multiple feature points.
Optionally, it between above-mentioned steps 102 and step 103, can also comprise the following steps:
C1, iris region in the eye image is determined;
C2, determine corresponding second average brightness value of the iris region, and first average brightness value with it is described
When difference between second average brightness value is greater than the second preset threshold, execution is described to judge the people according to the target area
The step of whether eye image is from living body.
The second preset threshold may be set by the user or defaulted by the system. The iris region can be determined from the eye image; for example, it can be extracted by image segmentation. The average brightness value of the iris region can then be determined, yielding the second average brightness value. Since the brightness of the iris region and that of the target area naturally differ to some degree, it can be judged whether the difference between the first average brightness value and the second average brightness value is greater than the second preset threshold: if so, step 103 is executed; if not, the eye image is deemed to come from a non-living body.
It can be seen that, in the embodiment of the present invention, an eye image is obtained; a target area is determined from the pupil region of the eye image, the first average brightness value of the target area being greater than the first preset threshold; and whether the eye image comes from a living body is judged according to the target area. Thus, a highlight region within the pupil can be isolated from the eye image, and whether the eye image comes from a living body can be confirmed from that highlight region, realizing liveness detection. In practical applications, a living human eye exhibits specular reflection, especially in the pupil, whereas a non-living eye does not; liveness detection, and in turn iris liveness detection, can therefore be performed based on this characteristic.
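Isolating the highlight region from the pupil can follow the partition scheme the claims describe: divide the pupil region into X sub-regions, compute each sub-region's average brightness, and take the brightest one as the target area if it clears the first preset threshold. The sketch below assumes vertical strips for the partition and a threshold of 180; both are illustrative choices.

```python
import numpy as np

def select_target_area(pupil, x_splits=4, first_threshold=180):
    """Divide the pupil region into X strips, compute the average
    brightness of each, and return the strip with the maximum average
    brightness as the target area if that average exceeds the first
    preset threshold; otherwise return None (no highlight found)."""
    strips = np.array_split(pupil, x_splits, axis=1)
    averages = [float(s.mean()) for s in strips]
    best = int(np.argmax(averages))
    if averages[best] > first_threshold:
        return strips[best]
    return None

pupil = np.full((8, 8), 20, dtype=np.uint8)  # dark pupil
pupil[:, 6:] = 250                           # bright specular highlight on the right
target = select_target_area(pupil)
```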
Referring to Fig. 2, Fig. 2 is a schematic flowchart of an iris liveness detection method provided by an embodiment of the present invention, applied to a mobile terminal including a camera, a fill light, and an application processor (AP); the physical diagram of the mobile terminal can refer to Fig. 1A–Fig. 1C. The iris liveness detection method includes:
201. When the ambient brightness is lower than a preset luminance threshold, start the fill light and adjust the brightness of the fill light.
The preset luminance threshold may be set by the user or defaulted by the system. The ambient brightness can be detected with an ambient light sensor, and the fill light can be turned on; the fill light assists the iris recognition device or face recognition device in acquiring the eye image. The brightness of the fill light is then adjusted; specifically, a correspondence between ambient brightness and fill-light adjustment coefficients can be preset, so that once the ambient brightness is determined, the brightness of the fill light can be adjusted accordingly. The camera can then be controlled to take a photograph, an output image is obtained, and the eye image is obtained from the output image.
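The preset correspondence between ambient brightness and fill-light adjustment coefficients can be sketched as a band lookup. The lux bands, coefficients, and the luminance threshold below are all assumptions for illustration; the patent only states that such a mapping is preset.

```python
# Hypothetical ambient-brightness bands (lux) -> fill-light adjustment
# coefficient; values are illustrative, not from the patent.
AMBIENT_TO_COEFF = [(0, 10, 1.0), (10, 50, 0.6), (50, 200, 0.3)]
LUMINANCE_THRESHOLD = 200  # preset luminance threshold (illustrative)

def fill_light_level(ambient_lux, max_level=255):
    """Return the fill-light drive level: zero (off) when the ambient
    brightness is at or above the preset threshold, otherwise the
    maximum level scaled by the band's adjustment coefficient."""
    if ambient_lux >= LUMINANCE_THRESHOLD:
        return 0  # bright enough; the fill light is not started
    for lo, hi, coeff in AMBIENT_TO_COEFF:
        if lo <= ambient_lux < hi:
            return int(max_level * coeff)
    return 0

dark = fill_light_level(5)    # full drive in near darkness
dim = fill_light_level(30)    # partial drive in dim light
day = fill_light_level(500)   # fill light stays off in daylight
```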
202. Control the camera to take a photograph and obtain an eye image.
203. Determine, from the pupil region in the eye image, a target area whose first average brightness value is greater than the first preset threshold.
204. Judge, according to the target area, whether the eye image is from a living body.
For the specific description of steps 202–204, reference can be made to the corresponding steps of the liveness detection method described in Fig. 1, which are not repeated here.
It can be seen that, in the embodiment of the present invention, when the ambient brightness is lower than the preset luminance threshold, the fill light is started and its brightness is adjusted; the camera is controlled to take a photograph and an eye image is obtained; a target area whose first average brightness value exceeds the first preset threshold is determined from the pupil region in the eye image; and whether the eye image comes from a living body is judged according to the target area. A highlight region within the pupil can thus be isolated from the eye image and used to confirm whether the eye image comes from a living body, realizing liveness detection even in night-vision environments. In practical applications, a living human eye exhibits specular reflection, especially in the pupil, whereas a non-living eye does not; liveness detection, and in turn iris liveness detection, can therefore be performed based on this characteristic.
Referring to Fig. 3, Fig. 3 shows a mobile terminal provided by an embodiment of the present invention, comprising: an application processor (AP) and a memory; the mobile terminal may further include a camera and a fill light; and one or more programs, stored in the memory and configured to be executed by the AP, the programs including instructions for executing the following steps:
obtaining an eye image;
determining, from the pupil region in the eye image, a target area whose first average brightness value is greater than a first preset threshold;
judging, according to the target area, whether the eye image is from a living body.
In a possible example, in judging, according to the target area, whether the eye image is from a living body, the program includes instructions for executing the following steps:
performing feature extraction on the target area to obtain a target feature set;
training the target feature set with a default liveness detection classifier to obtain a training result, and judging, according to the training result, whether the eye image is from a living body.
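The classifier step above can be sketched with a minimal stand-in model. The patent does not fix the "default liveness detection classifier" (an SVM or similar would be typical), so a nearest-centroid rule over two-dimensional toy feature sets is used here purely for illustration; the features, labels, and class encoding are all assumptions.

```python
import numpy as np

class NearestCentroidLiveness:
    """Illustrative stand-in for the default liveness detection
    classifier: stores one centroid per class and labels a feature set
    by its nearest centroid."""

    def fit(self, features, labels):
        feats = np.asarray(features, dtype=float)
        labels = np.asarray(labels)
        self.centroids = {c: feats[labels == c].mean(axis=0)
                          for c in np.unique(labels)}
        return self

    def predict(self, feature_set):
        f = np.asarray(feature_set, dtype=float)
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(f - self.centroids[c]))

# Toy feature sets: live eyes show a strong pupil highlight (high values).
train_x = [[0.9, 0.8], [0.85, 0.9], [0.1, 0.2], [0.2, 0.1]]
train_y = [1, 1, 0, 0]  # 1 = living body, 0 = non-living body
clf = NearestCentroidLiveness().fit(train_x, train_y)
is_live = clf.predict([0.88, 0.82])  # feature set from a probe eye image
```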
In a possible example, in performing feature extraction on the target area to obtain the target feature set, the program includes instructions for executing the following steps:
performing image denoising processing on the target area;
determining, according to a correspondence between brightness values and smoothing coefficients, a target smoothing coefficient corresponding to the first average brightness value;
smoothing the denoised target area according to the target smoothing coefficient;
performing feature extraction on the smoothed target area to obtain the target feature set.
In a possible example, the program further includes instructions for executing the following steps:
determining an iris region in the eye image;
determining a second average brightness value corresponding to the iris region, and, when the difference between the first average brightness value and the second average brightness value is greater than a second preset threshold, executing the step of judging, according to the target area, whether the eye image is from a living body.
In a possible example, the mobile terminal is provided with a fill light, and the program further includes instructions for executing the following steps:
when the ambient brightness is lower than a preset luminance threshold, starting the fill light, adjusting the brightness of the fill light, and controlling the camera to take a photograph.
Please refer to Fig. 4A; Fig. 4A is a structural schematic diagram of a liveness detection device provided in this embodiment. The liveness detection device is applied to a mobile terminal including a camera and an application processor (AP), and includes an acquiring unit 401, a first determination unit 402, and a judging unit 403, wherein:
the acquiring unit 401 is configured to control the camera to obtain an eye image;
the first determination unit 402 is configured to determine, from the pupil region in the eye image, a target area whose first average brightness value is greater than a first preset threshold;
the judging unit 403 is configured to judge, according to the target area, whether the eye image is from a living body.
Optionally, as shown in Fig. 4B, Fig. 4B is a detailed structure of the judging unit 403 of the liveness detection device described in Fig. 4A; the judging unit 403 may include a first extraction module 4031 and a training module 4032, as follows:
the first extraction module 4031 is configured to perform feature extraction on the target area to obtain a target feature set;
the training module 4032 is configured to train the target feature set with a default liveness detection classifier to obtain a training result, and to judge, according to the training result, whether the eye image is from a living body.
Optionally, as shown in Fig. 4C, Fig. 4C is a detailed structure of the first extraction module 4031 of the judging unit 403 described in Fig. 4B; the first extraction module 4031 may include a denoising module 510, a determining module 520, a processing module 530, and a second extraction module 540, as follows:
the denoising module 510 is configured to perform image denoising processing on the target area;
the determining module 520 is configured to determine, according to a correspondence between brightness values and smoothing coefficients, a target smoothing coefficient corresponding to the first average brightness value;
the processing module 530 is configured to smooth the denoised target area according to the target smoothing coefficient;
the second extraction module 540 is configured to perform feature extraction on the smoothed target area to obtain the target feature set.
Optionally, as shown in Fig. 4D, Fig. 4D is a modified structure of the liveness detection device described in Fig. 4A; the device may further include a second determination unit 404, as follows:
the second determination unit 404 is configured to determine an iris region in the eye image;
the second determination unit 404 is further configured to determine a second average brightness value corresponding to the iris region, and, when the difference between the first average brightness value and the second average brightness value is greater than a second preset threshold, the judging unit 403 executes the step of judging, according to the target area, whether the eye image is from a living body.
Optionally, the mobile terminal is provided with a fill light; as shown in Fig. 4E, Fig. 4E is a modified structure of the liveness detection device described in Fig. 4A; the device may further include a start unit 405 and an adjusting unit 406, as follows:
the start unit 405 is configured to control the fill light to start when the ambient brightness is lower than a preset luminance threshold;
the adjusting unit 406 is configured to adjust the brightness of the fill light, after which the acquiring unit 401 controls the camera to take a photograph and obtains the eye image.
It can be seen that the liveness detection device described in the embodiment of the present invention obtains an eye image, determines from the eye image a target area in the pupil region whose first average brightness value is greater than the first preset threshold, and judges, according to the target area, whether the eye image is from a living body. A highlight region within the pupil can thus be isolated from the eye image and used to confirm whether the eye image comes from a living body, realizing liveness detection. In practical applications, a living human eye exhibits specular reflection, especially in the pupil, whereas a non-living eye does not; liveness detection, and in turn iris liveness detection, can therefore be performed based on this characteristic.
It can be understood that the functions of the program modules of the liveness detection device of this embodiment may be specifically implemented according to the methods in the foregoing method embodiments; for the specific implementation process, reference may be made to the related descriptions of the method embodiments, which are not repeated here.
An embodiment of the present invention further provides another mobile terminal. As shown in Fig. 5, for ease of description, only the parts relevant to the embodiment of the present invention are shown; for specific technical details not disclosed, please refer to the method part of the embodiments of the present invention. The mobile terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, and the like; the following takes a mobile phone as an example.
Fig. 5 shows a block diagram of a partial structure of a mobile phone related to the mobile terminal provided by an embodiment of the present invention. Referring to Fig. 5, the mobile phone includes components such as a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a Wireless Fidelity (WiFi) module 970, an application processor AP980, and a power supply 990. Those skilled in the art will understand that the mobile phone structure shown in Fig. 5 does not constitute a limitation on the mobile phone, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
Each component of the mobile phone is specifically introduced below with reference to Fig. 5:
The input unit 930 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 930 may include a touch display screen 933, an iris recognition device 931, and other input devices 932. The structure of the iris recognition device 931 can refer to Fig. 1A–Fig. 1C and is not repeated here. The other input devices 932 may include, but are not limited to, one or more of physical keys, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, a camera, a fill light, and the like.
The application processor AP980 is configured to perform the following operations:
obtaining an eye image;
determining, from the pupil region in the eye image, a target area whose first average brightness value is greater than a first preset threshold;
judging, according to the target area, whether the eye image is from a living body.
The AP980 is the control center of the mobile phone; it connects the various parts of the whole phone through various interfaces and lines, and executes the various functions of the phone and processes data by running or executing software programs and/or modules stored in the memory 920 and calling data stored in the memory 920, thereby monitoring the phone as a whole. Optionally, the AP980 may include one or more processing units; preferably, the AP980 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the AP980.
In addition, the memory 920 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other solid-state storage component.
The RF circuit 910 can be used for sending and receiving information. In general, the RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The mobile phone may also include at least one sensor 950, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the touch display screen according to the brightness of the ambient light, and the proximity sensor can turn off the touch display screen and/or the backlight when the phone is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally on three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the phone's posture (such as portrait/landscape switching, related games, and magnetometer pose calibration) and for vibration-recognition-related functions (such as a pedometer and tap detection); other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured on the phone and are not described here.
The audio circuit 960, a loudspeaker 961, and a microphone 962 can provide an audio interface between the user and the mobile phone. The audio circuit 960 can transmit the electrical signal converted from received audio data to the loudspeaker 961, which converts it into a sound signal for playback; on the other hand, the microphone 962 converts a collected sound signal into an electrical signal, which is received by the audio circuit 960 and converted into audio data; after the audio data is processed by the AP980, it is, for example, sent to another mobile phone via the RF circuit 910, or output to the memory 920 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides wireless broadband Internet access for the user. Although Fig. 5 shows the WiFi module 970, it can be understood that it is not an essential component of the mobile phone and can be omitted as needed within the scope that does not change the essence of the invention.
The mobile phone further includes a power supply 990 (such as a battery) that supplies power to the various components. Preferably, the power supply can be logically connected to the AP980 through a power management system, so that functions such as charging management, discharging management, and power consumption management are realized through the power management system.
Although not shown, the mobile phone may also include a camera, a Bluetooth module, and the like, which are not described here.
In the embodiments shown in the foregoing Fig. 1D and Fig. 2, the method flow of each step can be implemented based on the structure of this mobile phone. In the embodiments shown in the foregoing Fig. 3 and Fig. 4A–Fig. 4E, the function of each unit can be realized based on the structure of this mobile phone.
An embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any liveness detection method recorded in the foregoing method embodiments.
An embodiment of the present invention further provides a computer program product, the computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to execute some or all of the steps of any liveness detection method recorded in the foregoing method embodiments.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations; however, those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a certain embodiment, reference can be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed device may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of the units is only a logical function division, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned memory includes various media that can store program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing relevant hardware; the program can be stored in a computer-readable memory, and the memory may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core ideas. At the same time, those skilled in the art may make changes to the specific implementations and application scope according to the ideas of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (7)
1. A liveness detection method, applied to a mobile terminal, the method comprising:
when the current ambient brightness is lower than a preset luminance threshold, starting a fill light, adjusting the brightness of the fill light, taking a photograph to obtain an output image, and obtaining an eye image from the output image, wherein adjusting the brightness of the fill light specifically comprises: determining an adjustment coefficient corresponding to the current ambient brightness according to a preset correspondence between ambient brightness and fill-light adjustment coefficients, and adjusting the brightness of the fill light according to the adjustment coefficient;
determining a target area from the pupil region in the eye image, specifically: dividing the pupil region into X regions, X being an integer greater than 1; separately calculating the average brightness value of each of the X regions to obtain X average brightness values; and selecting, from the X average brightness values, the region corresponding to the maximum average brightness value as the target area, the first average brightness value of the target area being greater than a first preset threshold;
judging, according to the target area, whether the eye image is from a living body;
wherein judging, according to the target area, whether the eye image is from a living body comprises:
performing feature extraction on the target area to obtain a target feature set;
training the target feature set with a default liveness detection classifier to obtain a training result, and judging, according to the training result, whether the eye image is from a living body;
wherein performing feature extraction on the target area to obtain the target feature set comprises:
performing image denoising processing on the target area;
determining, according to a correspondence between brightness values and smoothing coefficients, a target smoothing coefficient corresponding to the first average brightness value;
smoothing the denoised target area according to the target smoothing coefficient;
performing feature extraction on the smoothed target area to obtain the target feature set.
2. The method according to claim 1, wherein the method further comprises:
determining an iris region in the eye image;
determining a second average brightness value corresponding to the iris region, and, when the difference between the first average brightness value and the second average brightness value is greater than a second preset threshold, executing the step of judging, according to the target area, whether the eye image is from a living body.
3. A mobile terminal, comprising a camera, an application processor (AP), and a fill light, wherein:
the fill light is configured to be started when the current ambient brightness is lower than a preset luminance threshold and to have its brightness adjusted, wherein adjusting the brightness of the fill light specifically comprises: determining an adjustment coefficient corresponding to the current ambient brightness according to a preset correspondence between ambient brightness and fill-light adjustment coefficients, and adjusting the brightness of the fill light according to the adjustment coefficient;
the camera is configured to take a photograph to obtain an output image, obtain an eye image from the output image, and send the eye image to the AP;
the AP is configured to determine a target area from the pupil region in the eye image, specifically: dividing the pupil region into X regions, X being an integer greater than 1; separately calculating the average brightness value of each of the X regions to obtain X average brightness values; and selecting, from the X average brightness values, the region corresponding to the maximum average brightness value as the target area, the first average brightness value of the target area being greater than a first preset threshold;
the AP is further configured to judge, according to the target area, whether the eye image is from a living body;
in judging, according to the target area, whether the eye image is from a living body, the AP is specifically configured to:
perform feature extraction on the target area to obtain a target feature set; train the target feature set with a default liveness detection classifier to obtain a training result; and judge, according to the training result, whether the eye image is from a living body;
in performing feature extraction on the target area to obtain the target feature set, the AP is specifically configured to:
perform image denoising processing on the target area; determine, according to a correspondence between brightness values and smoothing coefficients, a target smoothing coefficient corresponding to the first average brightness value; smooth the denoised target area according to the target smoothing coefficient; and perform feature extraction on the smoothed target area to obtain the target feature set.
4. The mobile terminal according to claim 3, wherein the AP is further specifically configured to:
determine an iris region in the eye image; determine a second average brightness value corresponding to the iris region; and, when the difference between the first average brightness value and the second average brightness value is greater than a second preset threshold, execute the step of judging, according to the target area, whether the eye image is from a living body.
5. a kind of living body detection device, which is characterized in that the living body detection device include acquiring unit, the first determination unit and
Judging unit, wherein
The acquiring unit, for starting light compensating lamp, adjusting the light filling when current environment brightness is lower than predetermined luminance threshold value
The brightness of lamp, and take pictures, output image is obtained, obtains eye image from the output image, wherein the adjusting institute
The brightness of light compensating lamp is stated, specifically: according to the corresponding relationship preset between ambient brightness and the adjustment factor of light compensating lamp, really
Determine the corresponding adjustment factor of the current environment brightness, the brightness of the light compensating lamp is adjusted according to the adjustment factor;
First determination unit, for from the eye image determine pupil region in target area, specifically: by institute
It states pupil region and is divided into X region, the X is the integer greater than 1;Calculate separately the flat of each region in the X region
Equal brightness value obtains the X average brightness value;It is corresponding that maximum average brightness value is chosen from the X average brightness value
Region is greater than the first preset threshold as the target area, the first average brightness value of the target area;
The judging unit is configured to judge, according to the target area, whether the eye image is from a living body, specifically: performing feature extraction on the target area to obtain a target feature set; inputting the target feature set into a preset living-body detection classifier to obtain a classification result, and judging, according to the classification result, whether the eye image is from a living body;
wherein, in performing feature extraction on the target area to obtain the target feature set, the judging unit is specifically configured to:
perform image denoising on the target area;
determine, according to a correspondence between brightness values and smoothing coefficients, the target smoothing coefficient corresponding to the first average brightness value;
smooth the denoised target area according to the target smoothing coefficient; and
perform feature extraction on the smoothed target area to obtain the target feature set.
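The denoise → brightness-driven smoothing → feature-extraction pipeline above can be sketched as follows. Everything concrete here is an assumption: the patent does not specify the denoising filter, the brightness-to-coefficient table, or the feature set, so this uses a simple box filter for both denoising and smoothing and a toy statistical feature vector:

```python
import numpy as np

# assumed correspondence between brightness bands and smoothing coefficients
# (here the coefficient is a box-filter radius)
SMOOTHING_TABLE = [(100, 1), (180, 2), (256, 3)]

def box_filter(img, radius):
    """Simple mean filter; stands in for both the denoising and smoothing steps."""
    k = 2 * radius + 1
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def extract_target_features(target_area):
    denoised = box_filter(target_area, radius=1)                # image denoising step
    avg = float(target_area.mean())                             # first average brightness value
    radius = next(r for ub, r in SMOOTHING_TABLE if avg < ub)   # target smoothing coefficient
    smoothed = box_filter(denoised, radius)                     # brightness-driven smoothing
    # toy target feature set: mean, std, and an 8-bin brightness histogram
    hist, _ = np.histogram(smoothed, bins=8, range=(0, 256))
    return np.concatenate(([smoothed.mean(), smoothed.std()], hist))
```

In a real system the resulting feature vector would then be fed to the preset living-body detection classifier; brighter target areas get stronger smoothing here, which matches the claim's idea of choosing the coefficient from the first average brightness value.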
6. A mobile terminal, characterized by comprising: a camera, an application processor (AP), and a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the AP, the programs including instructions for performing the method of any one of claims 1 and 2.
7. A computer-readable storage medium, characterized in that it stores a computer program, wherein the computer program causes a computer to execute the method of any one of claims 1 and 2.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710576784.6A CN107423699B (en) | 2017-07-14 | 2017-07-14 | Living body detection method and related product |
PCT/CN2018/094964 WO2019011206A1 (en) | 2017-07-14 | 2018-07-09 | Living body detection method and related product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710576784.6A CN107423699B (en) | 2017-07-14 | 2017-07-14 | Living body detection method and related product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107423699A CN107423699A (en) | 2017-12-01 |
CN107423699B true CN107423699B (en) | 2019-09-13 |
Family
ID=60426898
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710576784.6A Active CN107423699B (en) | 2017-07-14 | 2017-07-14 | Living body detection method and related product |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107423699B (en) |
WO (1) | WO2019011206A1 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107423699B (en) * | 2017-07-14 | 2019-09-13 | Oppo广东移动通信有限公司 | Living body detection method and related product |
CN107992866B (en) * | 2017-11-15 | 2018-06-29 | 上海聚虹光电科技有限公司 | Living body detection method based on reflective eye spots in a video stream |
CN109190522B (en) * | 2018-08-17 | 2021-05-07 | 浙江捷尚视觉科技股份有限公司 | Living body detection method based on infrared camera |
CN110570873B (en) * | 2019-09-12 | 2022-08-05 | Oppo广东移动通信有限公司 | Voiceprint wake-up method and device, computer equipment and storage medium |
CN112906440A (en) * | 2019-12-04 | 2021-06-04 | 深圳君正时代集成电路有限公司 | Anti-cracking method for living body identification |
CN111079688A (en) * | 2019-12-27 | 2020-04-28 | 中国电子科技集团公司第十五研究所 | Living body detection method based on infrared image in face recognition |
CN111507201B (en) * | 2020-03-27 | 2023-04-18 | 北京万里红科技有限公司 | Human eye image processing method, human eye recognition method, human eye image processing device and storage medium |
CN112052726A (en) * | 2020-07-28 | 2020-12-08 | 北京极豪科技有限公司 | Image processing method and device |
CN112149580B (en) * | 2020-09-25 | 2024-05-14 | 江苏邦融微电子有限公司 | Image processing method for distinguishing real face from photo |
CN112668396A (en) * | 2020-12-03 | 2021-04-16 | 浙江大华技术股份有限公司 | Two-dimensional false target identification method, device, equipment and medium |
CN112507923B (en) * | 2020-12-16 | 2023-10-31 | 平安银行股份有限公司 | Certificate copying detection method and device, electronic equipment and medium |
CN114973426B (en) * | 2021-06-03 | 2023-08-15 | 中移互联网有限公司 | Living body detection method, device and equipment |
CN116030042B (en) * | 2023-02-24 | 2023-06-16 | 智慧眼科技股份有限公司 | Diagnostic device, method, equipment and storage medium for doctor's diagnosis |
CN117952859B (en) * | 2024-03-27 | 2024-06-07 | 吉林大学 | Pressure damage image optimization method and system based on thermal imaging technology |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002312772A (en) * | 2001-04-13 | 2002-10-25 | Oki Electric Ind Co Ltd | Individual identification device and eye forgery judgment method |
CN1809316A (en) * | 2003-07-04 | 2006-07-26 | 松下电器产业株式会社 | Organism eye judgment method and organism eye judgment device |
CN1842296A (en) * | 2004-08-03 | 2006-10-04 | 松下电器产业株式会社 | Living body determination device, authentication device using the device, and living body determination method |
CN105138996A (en) * | 2015-09-01 | 2015-12-09 | 北京上古视觉科技有限公司 | Iris identification system with living body detecting function |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001034754A (en) * | 1999-07-19 | 2001-02-09 | Sony Corp | Iris authentication device |
CN107423699B (en) * | 2017-07-14 | 2019-09-13 | Oppo广东移动通信有限公司 | Living body detection method and related product |
- 2017-07-14: CN application CN201710576784.6A, patent CN107423699B/en, active (Active)
- 2018-07-09: WO application PCT/CN2018/094964, patent WO2019011206A1/en, active (Application Filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002312772A (en) * | 2001-04-13 | 2002-10-25 | Oki Electric Ind Co Ltd | Individual identification device and eye forgery judgment method |
CN1809316A (en) * | 2003-07-04 | 2006-07-26 | 松下电器产业株式会社 | Organism eye judgment method and organism eye judgment device |
CN1842296A (en) * | 2004-08-03 | 2006-10-04 | 松下电器产业株式会社 | Living body determination device, authentication device using the device, and living body determination method |
CN105138996A (en) * | 2015-09-01 | 2015-12-09 | 北京上古视觉科技有限公司 | Iris identification system with living body detecting function |
Non-Patent Citations (1)
Title |
---|
"Research on Iris Liveness Detection Algorithm Based on Convolutional Neural Network" ("基于卷积神经网络的虹膜活体检测算法研究"); Li Zhiming (李志明); Computer Engineering (《计算机工程》); May 2016 (No. 5); pp. 239-243, 248 *
Also Published As
Publication number | Publication date |
---|---|
CN107423699A (en) | 2017-12-01 |
WO2019011206A1 (en) | 2019-01-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107423699B (en) | Living body detection method and related product | |
CN107590461B (en) | Face recognition method and related product | |
US11074466B2 (en) | Anti-counterfeiting processing method and related products | |
CN107862265B (en) | Image processing method and related product | |
CN107679482B (en) | Unlocking control method and related product | |
CN107609514B (en) | Face recognition method and related product | |
CN107292285B (en) | Iris living body detection method and related product | |
CN107506687B (en) | Living body detection method and related product | |
CN107657218B (en) | Face recognition method and related product | |
US11055547B2 (en) | Unlocking control method and related products | |
CN107679481B (en) | Unlocking control method and related product | |
WO2019011098A1 (en) | Unlocking control method and relevant product | |
CN110113515B (en) | Photographing control method and related product | |
CN107633499B (en) | Image processing method and related product | |
CN107451444B (en) | Unlocking control method and related product | |
CN107451454B (en) | Unlocking control method and related product | |
EP3623973A1 (en) | Unlocking control method and related product | |
US11151398B2 (en) | Anti-counterfeiting processing method, electronic device, and non-transitory computer-readable storage medium | |
CN107392135A (en) | Living body detection method and related product | |
CN107368791A (en) | Living iris detection method and Related product | |
CN107613550A (en) | Unlocking control method and related product | |
CN110807769B (en) | Image display control method and device | |
CN111416936B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN107358183A (en) | Living iris detection method and Related product | |
CN117392071A (en) | Image frame picking processing method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong; Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.; Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong; Applicant before: Guangdong OPPO Mobile Communications Co., Ltd. ||
GR01 | Patent grant | ||