CN109165587A - intelligent image information extraction method - Google Patents
- Publication number
- CN109165587A CN109165587A CN201810912357.5A CN201810912357A CN109165587A CN 109165587 A CN109165587 A CN 109165587A CN 201810912357 A CN201810912357 A CN 201810912357A CN 109165587 A CN109165587 A CN 109165587A
- Authority
- CN
- China
- Prior art keywords
- iris
- pixel
- image
- training
- image set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
Abstract
The present invention provides an intelligent image information extraction method, comprising: a mobile-terminal camera acquires optical data; a training image set is obtained through iris-image convolution windowing and filtering; and the training image set is matched against a sample image set to achieve iris recognition. The invention reduces the dimensionality of the original iris image sequence to be recognized using quantized feature parameters, then applies symbolic processing to the reduced training image set. This simplifies the sample matching process, reduces computational complexity and the requirement on device orientation, allows the user to perform gaze actions more flexibly, and enhances the user experience.
Description
Technical field
The present invention relates to artificial intelligence, and in particular to an intelligent image information extraction method.
Background technique
Biometric recognition plays a vital role in identity verification and in smart devices. As one of its branches, iris recognition applies computer image processing and pattern recognition technology to the field of identity verification. Iris recognition offers high stability, high accuracy, strong anti-counterfeiting, uniqueness, universality and non-invasiveness, and therefore has broad application prospects and significant research value. The key to iris recognition is to accurately extract, from the captured iris image, the effective iris region lying between the pupil and the sclera, and to obtain a code that deeply reflects the texture information using a suitable texture extraction method; this code should also account for the effects of rotation and translation. However, existing iris recognition techniques impose excessive acquisition requirements: they generally require online synchronous recognition and cannot process offline iris information, and they struggle to achieve good robustness in non-cooperative settings. Only with reasonable precision, speed and robustness can user demands be met. These problems all urgently await solutions and improvements.
Summary of the invention
To solve the above problems of the prior art, the present invention proposes an intelligent image information extraction method, comprising:

a mobile-terminal camera acquiring optical data;

obtaining a training image set through iris-image convolution windowing and filtering; and

matching the training image set against a sample image set to achieve iris recognition.
Preferably, before iris recognition is performed, the method further includes:

acquiring user optical data and training on it to obtain quantized feature parameters and a sample image set.
Preferably, the method further comprises:

after an iris image is captured, taking the RGB data out of the image buffer and applying a convolution window to each channel;

sampling from the image buffer at a predetermined frequency, and convolving the sampled data with a convolution window of predetermined stride to obtain an original image set of predetermined length;

filtering the original image set of predetermined length obtained after convolution, so as to remove interference noise: for each pixel to be filtered on each component of the original image set, selecting a predetermined number of adjacent pixels on the left of that pixel and a predetermined number of adjacent pixels on its right, computing the mean of the selected pixels, and replacing the value of the filtered pixel with that mean.
Preferably, the filtering of the original image set of predetermined length obtained after convolution further comprises:

presetting a nearest-neighbour count K, and taking, for any pixel, the mean of the sequence formed by the K adjacent pixels on its left and the K adjacent pixels on its right as the value of that pixel after filtering;

for each channel image set in the RGB data, the filtering being:

a'_{xi} = (1/(2K+1)) · Σ_{j=i−K}^{i+K} a_{xj}

where N is the length of the image set, i.e. the size of the convolution window, K is the pre-selected neighbour count, i.e. the K nearest neighbours on each side of a pixel, a_{xj} is the component of image signal a_j on the current colour channel, and a'_{xi} is the corresponding filtered value of a_{xi}.
Compared with the prior art, the present invention has the following advantages:

The invention proposes an intelligent image information extraction method that applies dimensionality reduction and symbolic processing to the original iris image sequence to be recognized, reducing computational complexity and the requirement on device orientation, allowing the user to perform gaze actions more flexibly, and enhancing the user experience.
Detailed description of the invention
Fig. 1 is a flowchart of an intelligent image information extraction method according to an embodiment of the present invention.
Specific embodiment
A detailed description of one or more embodiments of the invention is provided below, together with the accompanying drawings that illustrate the principles of the invention. The invention is described in connection with such embodiments, but it is not limited to any embodiment; its scope is limited only by the claims, and it encompasses many alternatives, modifications and equivalents. Numerous specific details are set forth in the following description to provide a thorough understanding of the invention. These details are provided for exemplary purposes; the invention may be practiced according to the claims without some or all of these details.
One aspect of the present invention provides an intelligent image information extraction method. Fig. 1 is a flowchart of the intelligent image information extraction method according to an embodiment of the present invention.
The present invention acquires user optical data in advance and trains on it, obtaining quantized feature parameters and a sample image set. The quantized feature parameters are used to reduce the feature dimensionality of the training image set, thereby reducing computational complexity and the requirement on device orientation while the user gazes. By converting the reduced low-dimensional image set into symbols, noise in the image set is further removed and recognition accuracy is improved. Finally, the iris feature code of the training image set is matched with the sample image set, achieving accurate iris recognition and improving the user experience.
The iris recognition method of the present invention includes: obtaining user optical data, training on the user optical data, and obtaining quantized feature parameters and a sample image set, further comprising:

Step 1: acquiring the user optical data on which iris recognition is to be performed, obtaining an original image set. Before iris recognition is performed, a sample training process preferably takes place, during which user optical data is acquired and trained on to obtain the quantized feature parameters and the sample image set. Preferably, before any iris recognition, a single sample training process yields the quantized feature parameters and the sample image set used for all subsequent iris recognitions.
Step 2: performing feature extraction on the original image set using the quantized feature parameters, reducing its feature dimensionality and obtaining the reduced training image set;

Step 3: converting the training image set into a discrete iris feature code, obtaining the iris feature code of the training image set;

Step 4: matching the iris feature code of the training image set with the sample image set; when the match succeeds, confirming that the presented iris image is the iris image corresponding to the sample image set.
Preferably, one or more sample image sets are obtained by training in advance, each corresponding to one user's iris. The sample image sets are stored so that subsequent recognitions can reuse them without retraining.
The sample training includes the following steps: the mobile-terminal camera acquires optical data; iris-image convolution windowing and filtering are applied; and the training image set is processed. The training image set processing specifically includes dimensionality reduction of the training image set using a support vector machine; symbolic aggregate approximation; and obtaining the sample image set. The steps by which the camera acquires and processes iris data during sample training are essentially the same as during recognition; the difference is that sample training requires multiple acquisitions of the same iris, whereas during iris recognition the data of whatever iris is actually presented is acquired.
After an iris image is captured, the RGB data is taken out of the image buffer and a convolution window is applied to each channel. Sampling is performed from the image buffer at a predetermined frequency, and the sampled data is convolved with a convolution window of predetermined stride, yielding an original image set of predetermined length.
The original image set of predetermined length obtained after convolution is filtered to remove interference noise. That is, for each pixel to be filtered on each component of the original image set, a predetermined number of adjacent pixels on the left of that pixel and a predetermined number of adjacent pixels on its right are selected, the mean of the selected pixels is computed, and the value of the filtered pixel is replaced by that mean.
Preferably, the present invention filters with K-MEANS filtering: a nearest-neighbour count K is preset, and for any pixel the mean of the sequence formed by the K adjacent pixels on its left and the K adjacent pixels on its right is taken as the value of that pixel after filtering.

For the R channel image set in the RGB data, the K-MEANS filtering is:

a'_{xi} = (1/(2K+1)) · Σ_{j=i−K}^{i+K} a_{xj}

where N is the length of the image set, i.e. the size of the convolution window, K is the pre-selected neighbour count, i.e. the K nearest neighbours on each side of a pixel, a_{xj} is the component of image signal a_j on the R channel, and a'_{xi} is the corresponding filtered value of a_{xi}.
Next, the processing of the original image set specifically includes:

During sample training, the training image set is trained with a support vector machine to perform feature extraction on it. Each acquired training image set is filtered, and the filtered training image set is regularized, i.e. transformed to have mean 0 and variance 1.
Specifically, let the N × P matrix formed by the RGB training image sets obtained in the three convolution windows be A = [A_1, ..., A_P], where N is the length of the convolution window and P is the feature dimensionality; in the present invention P = 3 is preset, i.e. the original image set is three-dimensional data. The elements of matrix A are denoted a_ij, i = 1...N, j = 1...P.

All eigenvalues of the covariance matrix of the training image set, and the unit eigenvector corresponding to each eigenvalue, are computed. First, the per-component means M = {M_ar, M_ag, M_ab} and covariance vector Ω = {Ω_ar, Ω_ag, Ω_ab} of the original RGB training image set are calculated.
The covariance matrix Ω = (S_ij)_{P×P} of the matrix A formed by the training image set is computed, in which:

S_ij = (1/(N−1)) · Σ_{k=1}^{N} (a_ki − ā_i)(a_kj − ā_j)

where ā_i and ā_j are respectively the means of a_ki and a_kj (k = 1, 2, ..., N), i.e. the mean of each component of the RGB training image set, i = 1...P, j = 1...P.
The eigenvalues λ_i of the covariance matrix Ω and the corresponding orthogonal unit eigenvectors u_i are found.

Let the eigenvalues of Ω satisfy λ_1 ≥ λ_2 ≥ ... ≥ λ_P > 0, with corresponding unit eigenvectors u_1, u_2, ..., u_P. The principal components of A_1, A_2, ..., A_P are the linear combinations whose coefficients are the eigenvectors of Ω.

If the RGB training data collected at some moment is a = {a_r, a_g, a_b}, then the unit eigenvector u_i = {u_i1, u_i2, u_i3} corresponding to λ_i gives the combination coefficients of principal component F_i with respect to the training image set a, and the i-th principal component of the RGB training image set is:

F_i = a · u_i = a_r u_i1 + a_g u_i2 + a_b u_i3
The first m principal components are selected from the eigenvalues to represent the information of the training image set; m is determined through the cumulative contribution rate G(m):

G(m) = (Σ_{i=1}^{m} λ_i) / (Σ_{i=1}^{P} λ_i)

with m taken as the smallest value for which G(m) reaches a preset threshold.
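The covariance/eigenvector procedure above is principal component analysis. A sketch using NumPy's symmetric eigendecomposition, with the cumulative contribution rate G(m) used to pick m — the 0.85 threshold and the toy data are illustrative assumptions, not values from the text:

```python
import numpy as np

def pca_select(A, threshold=0.85):
    """A: N x P sample matrix. Return the leading unit eigenvectors of
    its covariance matrix, keeping the smallest m whose cumulative
    eigenvalue share G(m) reaches `threshold`, plus the G values."""
    cov = np.cov(A, rowvar=False)            # P x P covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigh returns ascending order
    order = np.argsort(vals)[::-1]           # sort descending
    vals, vecs = vals[order], vecs[:, order]
    g = np.cumsum(vals) / np.sum(vals)       # G(m) for m = 1..P
    m = int(np.searchsorted(g, threshold) + 1)
    return vecs[:, :m], g

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3))
A[:, 2] = 5.0 * A[:, 0]           # one direction dominates the variance
U, g = pca_select(A)
print(U.shape[1])                 # number of retained components
```

Because one direction carries most of the variance, a single principal component already exceeds the 85% contribution threshold here.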
Using the eigenmatrix formed by the unit eigenvectors of the best eigenvalues, the training image set is reduced in dimensionality: the mapping of the training image set onto the eigenmatrix is computed, giving the reduced training image set. The obtained per-component means M = {M_ar, M_ag, M_ab}, covariance vector Ω = {Ω_ar, Ω_ag, Ω_ab} and eigenmatrix u = {u_11, u_12, u_13} are used to process the filtered original image set as follows:
Within the three convolution windows, each component of the image data is regularized using the per-component means M and covariance vector Ω:

a'_r = (a_r − M_ar) / Ω_ar

a'_g = (a_g − M_ag) / Ω_ag

a'_b = (a_b − M_ab) / Ω_ab
Using the eigenmatrix, feature extraction is performed on the regularized original image set, reducing its feature dimensionality and yielding the reduced training image set. The regularized original image set is multiplied by the eigenmatrix u, giving the reduced one-dimensional sequence:

D = a' · u = a'_r u_11 + a'_g u_12 + a'_b u_13

The one-dimensional feature code combination corresponding to the original image set is thus obtained, and the reduced one-dimensional data serves as a training image set. Alternatively, the one-dimensional sequence is further divided into frames, the average of each frame is computed, and the image set formed by the frame averages serves as the training image set, further removing noise.
After the one-dimensional feature code combination is obtained, symbolic aggregate approximation converts the training image set into a discrete iris feature code. Specifically, let the one-dimensional original image set be A = a_1, a_2, ..., a_N, where N is the sequence length. Piecewise aggregate approximation produces a symbol sequence of length W, reducing the length of the training image set from N to W; W represents the length of the reduced one-dimensional feature code combination.

The entire value range of the image set is divided into r equiprobable intervals, i.e. the area under the Gaussian probability density curve is divided into r equal parts, and values falling in the same interval are represented by the same letter symbol, yielding the symbolic representation of the numerical values.
The iris feature code representing the iris is then traversed to determine the direction of each neighbouring pixel; each direction value is mapped to the most similar value in a preset group of direction values and saved as a direction sequence. Consecutive pixels in the same direction are merged, and vectors whose distance between consecutive same-direction pixels is below a threshold are removed as noise points. The remaining consecutive same-direction points are merged again; the vectors extracted at this point connect end to end and reflect the features of the iris. The vector distances are then regularized and saved as a sampling sequence.
Local optimization is used to find a path that minimizes the distortion between two feature vectors. The two sequences corresponding to the sample data and the iris image to be matched are denoted r[i] and t[j], and their distance is denoted D(r[i], t[j]); a path starting point is selected, and dynamic programming proceeds in the prescribed direction under a local path constraint.
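The local-optimum path search described here is dynamic time warping. A textbook sketch with absolute difference as the local distance D(r[i], t[j]) — the distance choice and the (match, insert, delete) local constraint are assumptions about unspecified details:

```python
import numpy as np

def dtw(r, t):
    """Dynamic-programming alignment: minimize the accumulated
    distortion between sequences r and t under the usual local
    path constraint (match, insert, delete)."""
    n, m = len(r), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(r[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = [0.0, 1.0, 2.0, 1.0, 0.0]
b = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]   # same shape, stretched start
print(dtw(a, b))                     # → 0.0: the warp absorbs the shift
```

Two sequences of different lengths but the same shape score zero, which is why the warp is useful for sampling sequences of unequal pace.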
Let N be the number of iris sampling pixels. According to the length W of the reduced training image set, the N points are evenly distributed along the iris track at the spacing obtained from W/N, and the coordinates of the N distributed points serve as the sampling points.

The iris image is then rendered as an image of size N×N: the image is first scaled to a uniform size, then the weight of each coordinate point is judged from the fractional part of its coordinates to fill the sequence of the N×N-sized image, and this sequence is finally returned as the sampling result.
After sample regularization, sample sequences of uniform length are obtained, and the similarity of two points a = [a_1, a_2, ..., a_d] and b = [b_1, b_2, ..., b_d] in d-dimensional space is computed as:

d(a, b) = sqrt( Σ_{i=1}^{d} (a_i − b_i)² )

The iris sample image computed to be most similar to the iris image to be matched is taken as the best-matching image.
In a further embodiment of the present invention, an iris recognition step processes an offline user based on the user's eye video. The eye video data is first segmented in time, N_F key frames are extracted in sequence, and iris feature maps within a preset time window centred on each key frame are extracted to construct training state sets; a training vector group is then constructed for each training state set:

O = {o_{i,j,k} | i ∈ [1, N_F], j ∈ [1, N_C], k ∈ [1, N_V]}, where N_C is the number of segments after iris video segmentation and N_V is the number of sample sequences of the presented iris. The vector group is divided into a test set and a training set, used respectively for parameter estimation and training of the recognition model.
Given the training vector group O = {o_{i,j,k} | i ∈ [1, N_F], j ∈ [1, N_C], k ∈ [1, N_V]} of iris m as training data, the three parameters A, B and ω of the iris recognition model λ_m based on conditional random fields are solved.

A is the state-transition matrix: A = {a_ij = P(S_j | S_i)}, 1 ≤ i ≤ N_F, 1 ≤ j ≤ N_F, expressing the probability of being in state S_j at time t+1 given state S_i at time t.

B is the error (emission) matrix: B = {b_ij = P(O_j | S_i)}, 1 ≤ i ≤ N_F, 1 ≤ j ≤ N_F, expressing the probability of training state O_j at time t given hidden state S_i.
In iris recognition based on sample sequences, the reliability of the initial parameters is assessed using the given training data, and these parameters are adjusted by reducing the error. Given the training vector group S_m = {s_k | k ∈ [1, N_v]} of some iris m, the iris recognition model λ_m = (A, B, ω) corresponding to iris m is established. Given the iris test sequence O_m and the initial parameters of the corresponding conditional random field model λ_m, define γ_t(i) as the local probability of being in hidden state S_i at time t, and define ρ_t(i, j) as the local probability of being in hidden state S_i at time t and transitioning to hidden state S_j at time t+1:

ρ_t(i, j) = P(q_t = S_i, q_{t+1} = S_j | V_m, λ_m)

Starting from the initial parameter values of λ_m, a_ij is used to iteratively refine parameter A of λ_m, finally obtaining a set of locally optimal parameter values (A, B, ω), in which:

a_ij = (Σ_{t=1}^{T−1} ρ_t(i, j)) / (Σ_{t=1}^{T−1} γ_t(i))
In practical iris recognition applications, a conditional-random-field parameter self-modification method is used: for iris data under different illumination conditions, adjusting the model parameters with data consistent with the training environment can greatly improve the recognition rate.
Combining prior knowledge with knowledge obtained from the self-correction data, linear interpolation is performed between the initial conditional-random-field parameter values and the mean of the self-correction data, yielding the self-corrected mean vector. When the self-correction data volume is sufficiently large, the model converges to the model retrained on the actual training data, giving good consistency and asymptotic behaviour. Denote the conditional random field model before self-correction as λ_m = (μ_ij, Ω_ij) and the corresponding self-corrected model as λ̃_m = (μ̃_ij, Ω̃_ij), where μ_ij and μ̃_ij are respectively the mean of the j-th normal distribution of state i before and after self-correction, and Ω_ij and Ω̃_ij are respectively the covariance matrices before and after self-correction. Given the self-correction iris test sequence O_Am = {v_i | i ∈ [1, N_v]}, set μ̃_ij = K μ_ij + ε_ij, where ε_ij is the residual and K is the regression matrix.
The iris recognition problem is thus converted into an evaluation problem over a set of conditional random fields, in which each vector v_k in the iris training vector set corresponds to a one-dimensional feature code O_k = o_k1 o_k2 ... o_kT of length T. The probability expectation with which each user's conditional random field iris recognition model generates all test sequences in the given iris training vector group is computed in turn and sorted; the iris image corresponding to the conditional random field with the maximum probability is the most likely recognition target. The probability that each conditional random field generates the given iris test sequence V is computed, specifically:

Step 1: successively compute the probability expectation P_m with which each iris recognition model generates the iris training vector group V;

Step 2: sort and take the iris m corresponding to the model with the maximum probability expectation, m* = argmax_m P_m.
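The (A, B, ω) parameterization and the per-model probability scoring above match hidden-Markov-style sequence evaluation. A sketch of the forward algorithm over discrete observation symbols, scoring a test sequence under each enrolled model and taking the argmax — the two toy 2-state models, their numbers, and the reading of ω as the initial state distribution are all illustrative assumptions:

```python
import numpy as np

def sequence_likelihood(A, B, omega, obs):
    """Forward algorithm: P(obs | model) for a discrete-observation
    state model with transitions A, emissions B, initial dist omega."""
    alpha = omega * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def recognize(models, obs):
    """Score the test sequence under every enrolled iris model and
    return the identity with the largest probability."""
    scores = {m: sequence_likelihood(*params, obs)
              for m, params in models.items()}
    return max(scores, key=scores.get)

# Two toy 2-state models preferring different observation symbols:
A = np.array([[0.9, 0.1], [0.1, 0.9]])
omega = np.array([0.5, 0.5])
B1 = np.array([[0.9, 0.1], [0.8, 0.2]])   # "iris-1" mostly emits symbol 0
B2 = np.array([[0.1, 0.9], [0.2, 0.8]])   # "iris-2" mostly emits symbol 1
models = {"iris-1": (A, B1, omega), "iris-2": (A, B2, omega)}
print(recognize(models, [0, 0, 1, 0]))    # → iris-1
```

A sequence dominated by symbol 0 scores highest under the model whose emission matrix favours symbol 0, which is the argmax decision described in Step 2.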
The above regularization of the filtered training image set preferably further comprises:

1. The image signal of the same iris sample is denoted X(i, j), where i is the index of the sampling channel of the image signal sampling device, i ∈ [1, F], and j is the time series index. The maximum absolute value |X|_m over the F channel image signals is used as the regularization standard, and the discrete time series of the regularized image signal is:

X'(i, j) = X(i, j) / |X|_m

2. At least one feature is selected from the multiple features of the F sampling channels as the primary feature code combination of the corresponding iris, and the unit eigenvectors of this feature code combination form the corresponding eigenmatrix.
After the eigenmatrix is constructed, the eigenmatrix with the highest discrimination and lowest error rate is determined from multiple samples, and the determined eigenmatrix is used for model training with a CNN to form the CNN model that defines the iris. Specifically, the weight matrix is first randomly initialized; the eigenmatrix is regularized to the maximum difference of the same feature over the F channels of the multiple samples; and the number of nodes k of the single hidden layer is determined:

k = sqrt(a + b) + c

where a is the number of input-layer nodes, b is the number of output-layer nodes, and c is a constant.
The P learning samples are input in sequence; let the current input be the p-th sample.

The output of each layer is computed in turn: the input of hidden-layer neuron j is net_pj = Σ_i w_ji o_pi, and o_pj is the output of neuron j, where w_ji is the weight from the i-th neuron to the j-th neuron. The output of output-layer neuron l is o_pl = Σ_j w_lj o_pj.

The error performance index of the p-th sample is E_p = (1/2) Σ_l (t_pl − o_pl)², where t_pl is the target output of neuron l.

If p = P, the weights of each layer are corrected. The correction of the connection weights w_lj between output layer and hidden layer is Δw_lj = η δ_pl o_pj with δ_pl = (t_pl − o_pl) o_pl (1 − o_pl); the learning algorithm for the connection weights w_ji between hidden layer and input layer is Δw_ji = η δ_pj o_pi with δ_pj = o_pj (1 − o_pj) Σ_l δ_pl w_lj, where n is the iteration number and η ∈ [0, 1] is the learning rate.

A momentum factor α is then added to each layer's weight update, the weights becoming:

w_lj(n+1) = w_lj(n) + Δw_lj + α(w_lj(n) − w_lj(n−1));

w_ji(n+1) = w_ji(n) + Δw_ji + α(w_ji(n) − w_ji(n−1));

where the momentum factor takes a value α ∈ [0, 1].

The outputs of each layer are recalculated with the new weights; if every sample satisfies that the difference between its output and its target output is below a predefined threshold, or the preset number of learning iterations is reached, the process stops.
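The backpropagation-with-momentum update above can be sketched on the classic XOR toy problem. The network size, learning rate η, momentum α, iteration count and random seed are illustrative choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR training data: 4 patterns, 1 output neuron.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 4))       # input -> hidden weights w_ji
W2 = rng.normal(size=(4, 1))       # hidden -> output weights w_lj
dW1_prev = np.zeros_like(W1)
dW2_prev = np.zeros_like(W2)
eta, alpha = 0.5, 0.5              # learning rate and momentum factor

errs = []
for _ in range(8000):
    H = sigmoid(X @ W1)                       # hidden outputs o_pj
    O = sigmoid(H @ W2)                       # output-layer outputs o_pl
    errs.append(float(np.mean((T - O) ** 2)))
    delta_o = (T - O) * O * (1 - O)           # output error term
    delta_h = (delta_o @ W2.T) * H * (1 - H)  # back-propagated hidden term
    dW2 = eta * (H.T @ delta_o) + alpha * dW2_prev   # Delta-w plus momentum
    dW1 = eta * (X.T @ delta_h) + alpha * dW1_prev
    W2 += dW2
    W1 += dW1
    dW2_prev, dW1_prev = dW2, dW1

print(f"error: {errs[0]:.3f} -> {errs[-1]:.3f}")
```

The momentum term `alpha * dW_prev` is the α(w(n) − w(n−1)) correction of the update rule: it reuses the previous weight change to smooth and accelerate descent.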
For the above offline iris video scenario, in a further embodiment, a self-similarity A_S and a mutual similarity B_S are defined and computed, and the final similarity distance of the iris video is computed from A_S and B_S. Iris verification has two stages: an acquisition stage and a recognition stage. In the acquisition stage, an iris video is collected and saved as sample frames; in the recognition stage, a video is collected and matched against the sample frames to determine whether they come from the same client's iris.

First, the iris video to be matched is registered with the iris sample frames. The registered sample frames may be denoted E = {F^E_1, F^E_2, ..., F^E_k}, and the iris video to be recognized C = {F^C_1, F^C_2, ..., F^C_k}, where k is the number of iris images contained in a video and F^E_i, F^C_i denote the i-th iris image.
In the acquisition stage, the A_S of the sample frames is computed as follows: the similarity distance of every pair of iris images among the sample frames is computed, yielding k(k−1)/2 similarity distances, and their mean is taken as the A_S of the video. That is:

A_S = (2 / (k(k−1))) · Σ_{i<j} S(F^E_i, F^E_j)

where S(F^E_i, F^E_j) denotes the similarity distance between F^E_i and F^E_j.
In the recognition stage, the B_S of the iris video to be matched is computed:

B_S = (1/k) · Σ_{j=1}^{k} S(F^E_max, F^C_j)

where F^E_max denotes the largest-area iris image in the template and S(F^E_max, F^C_j) denotes its similarity distance to frame F^C_j of the video to be matched.
The final similarity distance fuses A_S and B_S; the formula for the final similarity distance of the two iris videos is:

S = B_S + w(B_S − A_S), where w is an adjustment weight. Taking A_S and B_S as two features of a sample, each sample can be represented by a two-dimensional feature vector (A_S, B_S); the matching decision is thus converted into a sample classification problem.
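The fusion of A_S and B_S can be sketched as below. The frame "images" are toy 2-D vectors and plain Euclidean distance stands in for the unspecified similarity distance S(·,·); the reading of B_S as the mean distance between the largest-area template image and the probe frames is an assumption:

```python
import numpy as np
from itertools import combinations

def self_similarity(frames, dist):
    """A_S: mean distance over all k(k-1)/2 pairs of enrolled frames."""
    pairs = list(combinations(range(len(frames)), 2))
    return sum(dist(frames[i], frames[j]) for i, j in pairs) / len(pairs)

def mutual_similarity(template_frame, probe_frames, dist):
    """B_S: mean distance between the largest-area template image and
    each frame of the video to be matched (assumed reading)."""
    return sum(dist(template_frame, f) for f in probe_frames) / len(probe_frames)

def fused_score(a_s, b_s, w=0.5):
    """S = B_S + w * (B_S - A_S)."""
    return b_s + w * (b_s - a_s)

dist = lambda x, y: float(np.linalg.norm(np.asarray(x) - np.asarray(y)))
enrolled = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]]
probe = [[0.1, 0.0], [0.0, 0.9]]
a_s = self_similarity(enrolled, dist)
b_s = mutual_similarity(enrolled[0], probe, dist)
print(fused_score(a_s, b_s, w=0.5))
```

A probe whose mutual distance B_S falls below the template's own spread A_S drives the fused score down, indicating a likely genuine match.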
In the classification computation, an arbitrary sample x is represented by the feature vector <a_1(x), a_2(x), ..., a_n(x)>, where a_k(x) denotes the k-th attribute value of sample x. The distance between two samples x_i and x_j is defined as:

d(x_i, x_j) = sqrt( Σ_{k=1}^{n} (a_k(x_i) − a_k(x_j))² )

For a discrete target function f: R^n → V, where R^n is the space of n-dimensional points and V is the finite set {v_1, v_2, ..., v_s}, the return value f(x_q) is computed as the most common f value among the k training samples nearest to x_q:

f̂(x_q) = argmax_{v ∈ V} Σ_{i=1}^{k} Λ(v, f(x_i))

where the function Λ(a, b) is defined as Λ(a, b) = 1 if a = b, and Λ(a, b) = 0 otherwise.
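The k-nearest-neighbour vote with the Λ (delta) function can be sketched as follows; the sample coordinates and the "genuine"/"impostor" labels are illustrative, not from the text:

```python
import math
from collections import Counter

def knn_classify(train, query, k):
    """train: list of (feature_vector, label) pairs. Return the most
    common label among the k training samples nearest to `query`
    under the Euclidean metric over attribute values a_k(x)."""
    neighbours = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbours)  # sum of delta(v, f(x_i))
    return votes.most_common(1)[0][0]

samples = [((0.1, 0.2), "genuine"), ((0.2, 0.1), "genuine"),
           ((0.9, 0.8), "impostor"), ((0.8, 0.9), "impostor"),
           ((0.15, 0.15), "genuine")]
print(knn_classify(samples, (0.2, 0.2), k=3))   # → genuine
```

Applied to the (A_S, B_S) feature vectors above, the same vote decides whether a probe video matches the enrolled iris.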
In conclusion the invention proposes a kind of intelligent image information extraction method, by quantization characteristic parameter to needs
The iris image original series of identification carry out dimensionality reduction, then carry out Symbol processing to the training image collection obtained after dimensionality reduction, simplify sample
This matching process, reduces the complexity of calculating and the requirement to device orientation, allow user execute more flexiblely watch attentively it is dynamic
Make, enhances user experience.
Obviously, those skilled in the art should understand that the modules or steps of the invention described above can be implemented with a general-purpose computing system: they can be concentrated in a single computing system or distributed across a network formed by multiple computing systems; optionally, they can be implemented with program code executable by a computing system, and thus stored in a storage system and executed by a computing system. The invention is therefore not limited to any specific combination of hardware and software.
It should be understood that the above specific embodiments of the invention are used only for exemplary illustration or explanation of the principles of the invention, and do not limit the invention. Any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the invention shall be included within its protection scope. Furthermore, the appended claims are intended to cover all variations and modifications falling within the scope and boundary of the claims, or equivalents of such scope and boundary.
Claims (4)
1. An intelligent image information extraction method, characterized by comprising:

a mobile-terminal camera acquiring optical data;

obtaining a training image set through iris-image convolution windowing and filtering; and

matching the training image set against a sample image set to achieve iris recognition.
2. The method according to claim 1, characterized in that, before iris recognition is performed, the method further includes:

acquiring user optical data and training on it to obtain quantized feature parameters and a sample image set.
3. The method according to claim 1, characterized by further comprising:

after an iris image is captured, taking the RGB data out of the image buffer and applying a convolution window to each channel;

sampling from the image buffer at a predetermined frequency, and convolving the sampled data with a convolution window of predetermined stride to obtain an original image set of predetermined length;

filtering the original image set of predetermined length obtained after convolution so as to remove interference noise: for each pixel to be filtered on each component of the original image set, selecting a predetermined number of adjacent pixels on the left of that pixel and a predetermined number of adjacent pixels on its right, computing the mean of the selected pixels, and replacing the value of the filtered pixel with that mean.
4. The method according to claim 3, characterized in that filtering the original image set of the predetermined length obtained after convolution further comprises:
taking, for any pixel, the mean of the sequence formed by that pixel's K most adjacent pixels on the left and K most adjacent pixels on the right, where K is a preset neighbor count, as the value of the pixel after filtering;
for each channel image set in the RGB data, the filtering is:
a'_xi = (1/2K) · Σ_{j=1..K} ( a_x(i−j) + a_x(i+j) )
where N is the length of the image set, i.e. the size of the convolution window; K is the number of neighbors chosen in advance, i.e. the K most adjacent neighbors on each side of the pixel; a_xj is the component of the pixel signal a_j on the current color channel; and a'_xi is the filtered data corresponding to a_xj.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810912357.5A CN109165587B (en) | 2018-08-11 | 2018-08-11 | Intelligent image information extraction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109165587A true CN109165587A (en) | 2019-01-08 |
CN109165587B CN109165587B (en) | 2022-12-09 |
Family
ID=64895508
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810912357.5A Active CN109165587B (en) | 2018-08-11 | 2018-08-11 | Intelligent image information extraction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109165587B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101894256A (en) * | 2010-07-02 | 2010-11-24 | 西安理工大学 | Iris identification method based on odd-symmetric 2D Log-Gabor filter |
CN101916363A (en) * | 2010-05-28 | 2010-12-15 | 深圳大学 | Iris characteristic designing and coding method and iris identifying system |
US20140044320A1 (en) * | 2012-08-10 | 2014-02-13 | EyeVerify LLC | Texture features for biometric authentication |
CN104267835A (en) * | 2014-09-12 | 2015-01-07 | 西安闻泰电子科技有限公司 | Self-adaption gesture recognition method |
CN105184325A (en) * | 2015-09-23 | 2015-12-23 | 歌尔声学股份有限公司 | Human body action recognition method and mobile intelligent terminal |
CN105242779A (en) * | 2015-09-23 | 2016-01-13 | 歌尔声学股份有限公司 | Method for identifying user action and intelligent mobile terminal |
CN105975960A (en) * | 2016-06-16 | 2016-09-28 | 湖北润宏科技有限公司 | Iris identification method based on texture-direction energy characteristic |
CN106293057A (en) * | 2016-07-20 | 2017-01-04 | 西安中科比奇创新科技有限责任公司 | Gesture identification method based on BP neural network
CN107273834A (en) * | 2017-06-06 | 2017-10-20 | 宋友澂 | A kind of iris identification method and identifier |
Non-Patent Citations (1)
Title |
---|
Yang Hengfu et al.: "Adaptive K-Nearest-Neighbor Mean Filtering Algorithm Based on HVS Characteristics", Computer Engineering and Applications * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111008347A (en) * | 2019-11-25 | 2020-04-14 | 杭州安恒信息技术股份有限公司 | Website identification method, device and system and computer readable storage medium |
CN111738194A (en) * | 2020-06-29 | 2020-10-02 | 深圳力维智联技术有限公司 | Evaluation method and device for similarity of face images |
CN111738194B (en) * | 2020-06-29 | 2024-02-02 | 深圳力维智联技术有限公司 | Method and device for evaluating similarity of face images |
Also Published As
Publication number | Publication date |
---|---|
CN109165587B (en) | 2022-12-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111476302B (en) | fast-RCNN target object detection method based on deep reinforcement learning | |
CN109697434B (en) | Behavior recognition method and device and storage medium | |
CN110210513B (en) | Data classification method and device and terminal equipment | |
CN111161311A (en) | Visual multi-target tracking method and device based on deep learning | |
CN107633226B (en) | Human body motion tracking feature processing method | |
Esmaeili et al. | Fast-at: Fast automatic thumbnail generation using deep neural networks | |
WO2015180042A1 (en) | Learning deep face representation | |
CN107169117B (en) | Hand-drawn human motion retrieval method based on automatic encoder and DTW | |
Yang et al. | Geodesic clustering in deep generative models | |
CN113592911B (en) | Apparent enhanced depth target tracking method | |
CN110458235B (en) | Motion posture similarity comparison method in video | |
CN109190505A (en) | The image-recognizing method that view-based access control model understands | |
CN110570443A (en) | Image linear target extraction method based on structural constraint condition generation model | |
CN111126155B (en) | Pedestrian re-identification method for generating countermeasure network based on semantic constraint | |
CN109165587A (en) | intelligent image information extraction method | |
CN111462184A (en) | Online sparse prototype tracking method based on twin neural network linear representation model | |
CN109165586A (en) | intelligent image processing method for AI chip | |
Nair et al. | T2V-DDPM: Thermal to visible face translation using denoising diffusion probabilistic models | |
CN113191361B (en) | Shape recognition method | |
CN108875445B (en) | Pedestrian re-identification method and device | |
Kirstein et al. | Rapid online learning of objects in a biologically motivated recognition architecture | |
CN116129417A (en) | Digital instrument reading detection method based on low-quality image | |
CN109784291A (en) | Pedestrian detection method based on multiple dimensioned convolution feature | |
CN111681748B (en) | Medical behavior action normalization evaluation method based on intelligent visual perception | |
Rai et al. | Improved attribute manipulation in the latent space of stylegan for semantic face editing |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
20221123 | TA01 | Transfer of patent application right | Address after: No. 21 Hubin South Road, Siming District, Xiamen City, Fujian Province. Applicant after: XIAMEN ELECTRIC POWER SUPPLY COMPANY OF STATE GRID FUJIAN ELECTRIC POWER Co.,Ltd.; Xiamen Lide Group Co.,Ltd. Address before: No. 23, Group 5, Yangjiachong Village, Shuangqiao Township, Dongxing District, Neijiang City, Sichuan Province. Applicant before: Shi Xiuying
| GR01 | Patent grant | |