CN109165586A - intelligent image processing method for AI chip - Google Patents

Intelligent image processing method for AI chip

Info

Publication number
CN109165586A
CN109165586A (application CN201810910512.XA; granted as CN109165586B)
Authority
CN
China
Prior art keywords
iris
sample
layer
training
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810910512.XA
Other languages
Chinese (zh)
Other versions
CN109165586B (en)
Inventor
石修英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Technology Technology Co.,Ltd.
Original Assignee
石修英
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 石修英
Priority to CN201810910512.XA
Publication of CN109165586A
Application granted
Publication of CN109165586B
Current legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06V 40/197 - Matching; Classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/061 - Physical realisation using biological neurons, e.g. biological neurons connected to an integrated circuit

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Neurology (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an intelligent image processing method for an AI chip. The method comprises: collecting the user's optical data for training, and performing symbol conversion on the training image set to obtain an iris feature code; and matching the iris feature code of the training image set against a sample image set to achieve iris recognition. By reducing the dimensionality of the original iris image series to be identified using quantized feature parameters, and then applying symbol processing to the dimension-reduced training image set, the invention simplifies the sample matching process, lowers the computational complexity and the requirements on device orientation, allows the user to perform gaze actions more flexibly, and improves the user experience.

Description

Intelligent image processing method for AI chip
Technical field
The present invention relates to artificial intelligence, and in particular to an intelligent image processing method for an AI chip.
Background technique
Biometric recognition plays an important role in identity verification and smart devices. As one of its branches, iris recognition applies computer image processing and pattern recognition technology to the field of identity verification. Iris recognition offers high stability, high accuracy, strong anti-counterfeiting capability, uniqueness, universality and non-intrusiveness, and therefore has broad application prospects and significant research value. The key to iris recognition is to accurately extract the effective iris region between the pupil and the sclera from the captured iris image, and to use a suitable texture extraction method to obtain a code that reflects deep texture information while accounting for the influence of rotation and translation. However, existing iris recognition technology imposes excessive acquisition requirements: it generally requires online synchronous recognition, cannot process offline iris information, and is difficult to make robust in non-cooperative settings. Only with reasonable precision, speed and robustness can user demands be met. These problems remain to be solved and improved.
Summary of the invention
To solve the above problems of the prior art, the invention proposes an intelligent image processing method for an AI chip, comprising:
collecting the user's optical data for training, and performing symbol conversion on the training image set to obtain an iris feature code;
matching the iris feature code of the training image set against a sample image set to achieve iris recognition.
Preferably, performing symbol conversion on the training image set to obtain the iris feature code further comprises:
selecting at least one feature from the multiple features of the multiple sampling channels as a feature code combination of the corresponding iris, and forming the corresponding feature matrix from the unit feature vectors of this feature code combination;
determining, from multiple samples, the feature matrix with the highest recognition rate and the lowest error rate, and performing model training on the determined feature matrix with a CNN procedure to form a CNN model defining the iris;
first, randomly initializing the weight matrix; regularizing the feature matrix, where regularization normalizes to the maximum difference of the same feature across the F channels of the multiple samples; determining the number of nodes k of the single hidden layer as
k = sqrt(a + b) + c,
where a is the number of input-layer nodes, b is the number of output-layer nodes, and c is a constant;
sequentially inputting the P learning samples, and recording that the current input is the p-th sample;
computing the output of each layer in turn:
the input of hidden-layer neuron j is net_pj = Σ_i w_ji·o_pi, and its output is o_pj = f(net_pj), where o_pj is the output of neuron j and w_ji is the weight from the i-th neuron to the j-th neuron;
the output of output-layer neuron l is: o_pl = Σ_j w_lj·o_pj;
the error performance index of the p-th sample is E_p = (1/2)·Σ_l (t_pl - o_pl)²,
where t_pl is the target output of neuron l;
if p = P, correcting the weights of each layer; the correction of the connection weight w_lj between the output layer and the hidden layer is Δw_lj = η·δ_pl·o_pj, with δ_pl = t_pl - o_pl;
the learning process of the connection weight w_ji between the hidden layer and the input layer is Δw_ji = η·δ_pj·o_pi, with δ_pj = f′(net_pj)·Σ_l δ_pl·w_lj;
n is the number of iterations, and η is the learning rate, η ∈ [0,1];
then adding a momentum factor α to the weights of each layer, the weights at this time being:
w_lj(n+1) = w_lj(n) + Δw_lj + α·(w_lj(n) - w_lj(n-1));
w_ji(n+1) = w_ji(n) + Δw_ji + α·(w_ji(n) - w_ji(n-1));
where the value of the momentum factor is α ∈ [0,1];
recomputing the output of each layer with the new weights; if every sample satisfies the condition that the difference between the output and the target output is smaller than a predefined threshold, or a preset number of learning iterations has been reached, the process stops.
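The training loop above can be sketched in Python with NumPy as follows. The sigmoid activation, the specific learning rate and momentum values, and the toy AND-gate data are illustrative assumptions, not part of the patent; this is a minimal single-hidden-layer backpropagation with the delta rule and a momentum term, not a definitive implementation of the claimed method.

```python
import numpy as np

def train_bp(X, T, k, eta=0.3, alpha=0.7, epochs=30000, tol=1e-3, seed=0):
    """Single-hidden-layer BP training with momentum factor alpha,
    following the update rules described above. X: (P, a), T: (P, b)."""
    rng = np.random.default_rng(seed)
    P, a = X.shape
    b = T.shape[1]
    W_ji = rng.uniform(-0.5, 0.5, (a, k))   # input -> hidden weights
    W_lj = rng.uniform(-0.5, 0.5, (k, b))   # hidden -> output weights
    dW_ji = np.zeros_like(W_ji)             # previous changes (for momentum)
    dW_lj = np.zeros_like(W_lj)
    f = lambda z: 1.0 / (1.0 + np.exp(-z))  # sigmoid activation (assumed)

    for _ in range(epochs):
        H = f(X @ W_ji)                     # o_pj = f(net_pj)
        O = H @ W_lj                        # o_pl = sum_j w_lj * o_pj
        err = T - O                         # t_pl - o_pl
        if np.max(np.abs(err)) < tol:       # every sample within threshold
            break
        delta_l = err                                  # output-layer delta
        delta_j = (delta_l @ W_lj.T) * H * (1.0 - H)   # hidden-layer delta
        dW_lj = eta * (H.T @ delta_l) / P + alpha * dW_lj
        dW_ji = eta * (X.T @ delta_j) / P + alpha * dW_ji
        W_lj += dW_lj
        W_ji += dW_ji
    return W_ji, W_lj

# toy usage: an AND gate with a = 2 inputs, b = 1 output, k = 4 hidden nodes
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [0], [0], [1]], dtype=float)
W1, W2 = train_bp(X, T, k=4)
pred = (1.0 / (1.0 + np.exp(-(X @ W1)))) @ W2
```

The momentum term reuses the previous weight change, which damps oscillation of plain gradient descent, matching the role of the factor α in the formulas above.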
Compared with the prior art, the present invention has the following advantages:
The invention proposes an intelligent image processing method for an AI chip. By applying dimensionality reduction and symbol processing to the original iris image series to be identified, the method reduces the computational complexity and the requirements on device orientation, allows the user to perform gaze actions more flexibly, and improves the user experience.
Brief description of the drawings
Fig. 1 is a flowchart of the intelligent image processing method for an AI chip according to an embodiment of the present invention.
Specific embodiment
A detailed description of one or more embodiments of the invention is provided below, together with the accompanying drawings that illustrate the principles of the invention. The invention is described in conjunction with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims, and the invention covers many alternatives, modifications and equivalents. Many specific details are set forth in the following description to provide a thorough understanding of the invention. These details are provided for illustrative purposes, and the invention may be practiced according to the claims without some or all of these details.
One aspect of the present invention provides an intelligent image processing method for an AI chip. Fig. 1 is a flowchart of the intelligent image processing method for an AI chip according to an embodiment of the present invention.
The present invention collects the user's optical data in advance for training, obtains quantized feature parameters and a sample image set, and uses the quantized feature parameters to reduce the feature dimensionality of the training image set, thereby reducing the computational complexity and the requirements on the device's orientation when the user gazes at it. By performing symbol conversion on the dimension-reduced low-dimensional image set, noise in the image set is further removed and the recognition accuracy is improved. Finally, the iris feature code of the training image set is matched against the sample image set, so that accurate iris recognition can be achieved and the user experience improved.
The iris identification method of the present invention comprises: obtaining the user's optical data, and training on the user's optical data to obtain quantized feature parameters and a sample image set, further comprising:
Step 1: collecting the optical data of the user for whom iris recognition is to be performed, to obtain an original image set. Before performing iris recognition, a sample training process is preferably also included, during which the user's optical data is collected and trained to obtain the quantized feature parameters and the sample image set. Preferably, before any iris recognition is performed, the quantized feature parameters and the sample image set are obtained through one sample training process and used for all subsequent iris recognition.
Step 2: performing feature extraction on the original image set with the quantized feature parameters to reduce its feature dimensionality, obtaining the dimension-reduced training image set.
Step 3: converting the training image set into a discrete iris feature code, obtaining the iris feature code of the training image set.
Step 4: matching the iris feature code of the training image set against the sample image set; when the match succeeds, confirming that the presented iris image is the iris image corresponding to the sample image set.
Preferably, one or more sample image sets are obtained by training in advance, each sample image set corresponding to one user's iris. The sample image sets are stored, so that subsequent training can reuse them without retraining.
The sample training comprises the following steps: collecting optical data with the mobile terminal's camera; applying a convolution window to and filtering the iris image; and processing the training image set. Processing the training image set specifically comprises performing dimensionality reduction on the training image set using a support vector machine, applying symbolic aggregate approximation, and obtaining the sample image set. The processing steps by which the camera collects iris data during sample training are essentially the same as those during recognition; the difference is that sample training requires collecting data from the same iris multiple times, whereas during iris recognition the data of whatever iris is actually presented is collected.
After the iris image is collected, the RGB data is taken from the image buffer and a convolution window is applied to each channel. Samples are simultaneously taken from the image buffer at a predetermined frequency, and the sampled data is convolved with a convolution window of predetermined step size to obtain an original image set of predetermined length.
The original image set of predetermined length obtained after convolution is filtered to remove interference noise. That is, for each pixel to be filtered on each component of the original image set, a predetermined number of adjacent pixels on its left and a predetermined number of adjacent pixels on its right are selected, the mean of the selected pixels is computed, and the value of the filtered pixel is replaced by that mean.
Preferably, the present invention uses K-MEANS filtering: a number K of nearest neighbours is preset, and the value of any pixel after filtering is the mean of the sequence formed by that pixel together with its K adjacent pixels on the left and K adjacent pixels on the right.
For the R-channel image set in the RGB data, the K-MEANS filtering is:
a′_xi = (1/(2K+1)) · Σ_{j=i-K}^{i+K} a_xj,
where N is the length of the image set, i.e. the size of the convolution window; K is the number of neighbours chosen in advance, i.e. the K nearest neighbours on each side of a pixel are selected; a_xj is the component of the image signal a_j on the R channel; and a′_xi is the filtered data corresponding to a_xi.
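A minimal sketch of this neighbour-mean filter follows. Note that despite the name it is a moving average over 2K+1 points, not k-means clustering. The border handling, which the text does not specify, is an assumption here: the window is simply clamped at the sequence ends.

```python
def kmeans_style_filter(seq, K):
    """Replace each point by the mean of itself and its K left / K right
    neighbours, as in the 'K-MEANS filtering' described above."""
    n = len(seq)
    out = []
    for i in range(n):
        lo = max(0, i - K)          # clamp the window at the borders (assumed)
        hi = min(n, i + K + 1)
        window = seq[lo:hi]
        out.append(sum(window) / len(window))
    return out

# usage: a lone spike is pulled toward its neighbours
filtered = kmeans_style_filter([1.0, 1.0, 10.0, 1.0, 1.0], K=1)
```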
Next, the process of handling the original image set specifically comprises:
During sample training, the training image set is trained with a support vector machine, thereby performing feature extraction on it. Each collected training image set is filtered, and the filtered training image set is regularized, i.e. transformed into an image set with mean 0 and variance 1.
Specifically, let the N × P matrix formed by the RGB training image sets obtained in the three convolution windows be A = [A1, ..., AP], where N is the length of the convolution window and P is the feature dimensionality; in the present invention P = 3, i.e. the original image set is three-dimensional data. The elements of the matrix A are denoted a_ij, i = 1...N, j = 1...P.
All eigenvalues of the covariance matrix of the training image set, and the unit eigenvector corresponding to each eigenvalue, are computed. First the component means M = {M_ar, M_ag, M_ab} and the covariance vector Ω = {Ω_ar, Ω_ag, Ω_ab} of the original RGB training image set are computed.
The covariance matrix Ω = (S_ij)_{P×P} of the matrix A formed by the training image set is computed, where:
S_ij = (1/N) · Σ_{k=1}^{N} (a_ki - ā_i)(a_kj - ā_j),
and ā_i, ā_j are respectively the means of a_ki and a_kj (k = 1, 2, ..., N), i.e. the mean of each component of the RGB training image set, i = 1...P, j = 1...P.
The eigenvalues λ_i of the covariance matrix Ω and the corresponding orthogonal unit eigenvectors u_i are then found.
Let the eigenvalues of the covariance matrix Ω satisfy λ1 ≥ λ2 ≥ ... ≥ λP > 0, with corresponding unit eigenvectors u1, u2, ..., uP. The principal components of A1, A2, ..., AP are the linear combinations whose coefficients are the eigenvectors of the covariance matrix Ω.
If the RGB training data collected at some moment is a = {a_r, a_g, a_b}, then the unit eigenvector u_i = {u_i1, u_i2, u_i3} corresponding to λ_i gives the combination coefficients of the principal component F_i with respect to the training image set a, and the i-th principal component of the RGB training image set is:
F_i = a·u_i = a_r·u_i1 + a_g·u_i2 + a_b·u_i3.
The first m principal components are selected to represent the information of the training image set; m is determined by the cumulative contribution ratio G(m) = (Σ_{i=1}^{m} λ_i)/(Σ_{i=1}^{P} λ_i), taking the smallest m for which G(m) exceeds a preset threshold.
Using the feature matrix formed by the unit eigenvectors corresponding to the best eigenvalues, the training image set is dimension-reduced by computing its mapping onto the feature matrix, obtaining the dimension-reduced training image set. The component means M = {M_ar, M_ag, M_ab}, the covariance vector Ω = {Ω_ar, Ω_ag, Ω_ab} and the feature matrix u = {u11, u12, u13} of the obtained image data are used to process the filtered original image set as follows:
In the three convolution windows, the image data of each component is regularized with the component means M and the covariance vector Ω:
a′_r = (a_r - M_ar)/Ω_ar;
a′_g = (a_g - M_ag)/Ω_ag;
a′_b = (a_b - M_ab)/Ω_ab.
Using the feature matrix, feature extraction is performed on the regularized original image set to reduce its feature dimensionality, obtaining the dimension-reduced training image set. The regularized original image set is multiplied by the feature matrix u to obtain the dimension-reduced one-dimensional sequence:
D = a′·u = a′_r·u11 + a′_g·u12 + a′_b·u13.
The one-dimensional feature code combination corresponding to the original image set is thus obtained, and the dimension-reduced one-dimensional data is taken as one training image set. Alternatively, the one-dimensional sequence is further divided into frames, the average of each frame is computed, and the image set formed by the frame averages is taken as a training image set, further removing noise.
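The regularize-and-project pipeline above (component means, covariance matrix, leading unit eigenvectors, projection D = a′·u) can be sketched as follows. The synthetic three-channel data and the use of NumPy's symmetric eigendecomposition are illustrative assumptions; this is a plain PCA reduction, stated here under those assumptions rather than as the patent's exact procedure.

```python
import numpy as np

def pca_reduce(A, m):
    """Regularize each column of A (N x P) to zero mean / unit variance,
    then project onto the m leading eigenvectors of the covariance matrix,
    mirroring the dimensionality-reduction step described above."""
    mean = A.mean(axis=0)                       # component means M
    std = A.std(axis=0)                         # spread Omega per component
    Z = (A - mean) / std                        # a' = (a - M) / Omega
    cov = np.cov(Z, rowvar=False)               # covariance matrix of Z
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    order = np.argsort(vals)[::-1]              # sort descending
    U = vecs[:, order[:m]]                      # leading unit eigenvectors u
    ratio = vals[order[:m]].sum() / vals.sum()  # G(m): retained variance
    return Z @ U, ratio

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
A[:, 2] = A[:, 0] + 0.01 * rng.normal(size=100)  # third channel ~ first
D, G = pca_reduce(A, m=1)
```

With one channel nearly duplicating another, a single principal component already carries most of the variance, which is exactly the situation the G(m) criterion exploits.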
After the one-dimensional feature code combination is obtained, the training image set is converted into a discrete iris feature code using symbolic aggregate approximation. Specifically, let the one-dimensional original image set be A = a1, a2, ..., aN, where N is the sequence length. Through piecewise aggregate approximation, a symbol sequence of length W is obtained, reducing the length of the training image set from N to W; W represents the length of the dimension-reduced one-dimensional feature code combination.
The entire value range of the image set is divided into r equiprobable intervals, i.e. the area under the Gaussian probability density curve is divided into r parts of equal area, and the sequence values falling into the same interval are represented by the same letter symbol, yielding the symbolic representation of the values.
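The piecewise aggregation and equiprobable-interval symbolization described above correspond to the standard SAX discretization. A minimal sketch for r = 4 letters follows; the hard-coded N(0,1) quartile breakpoints are an assumption (the original does not list its breakpoints), and the series is assumed already regularized to zero mean and unit variance.

```python
import numpy as np

GAUSS_QUARTILES = (-0.6745, 0.0, 0.6745)  # N(0,1) split into r=4 equal-probability bins

def paa(seq, W):
    """Piecewise aggregate approximation: length-N series -> W segment means."""
    seq = np.asarray(seq, dtype=float)
    return np.array([c.mean() for c in np.array_split(seq, W)])

def sax(seq, W):
    """Map the W segment means to letters a-d via the equiprobable Gaussian
    intervals described above; values in the same interval share a letter."""
    letters = "abcd"
    return "".join(letters[sum(v > bp for bp in GAUSS_QUARTILES)]
                   for v in paa(seq, W))

# usage: an 8-point series collapses to a 4-letter feature code
code = sax([-2.0, -2.0, -0.1, -0.1, 0.1, 0.1, 2.0, 2.0], W=4)
```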
The iris feature code representing the iris is then traversed and the direction of each neighbouring pixel is found; each direction value is converted to the most similar direction value in a preset group of direction values and saved as a direction sequence. Pixels that are consecutive in the same direction in the direction sequence are merged, and vectors whose length between consecutive same-direction pixels is less than a threshold are removed as noise points. The consecutive same-direction points are merged again; the vectors extracted at this point are connected end to end and reflect the features of the iris. The vector lengths are then regularized and saved as the sampled sequence.
A path is found by local-optimum processing so that the distortion between the two feature vectors is minimized: the two sequences corresponding to the sample data and the iris image to be matched are denoted r[i] and t[j], their distance value is denoted D(r[i], t[j]), a path starting point is selected, and dynamic programming proceeds in the prescribed direction under a local path constraint.
Let N be the number of iris sampling pixels; according to the length W of the dimension-reduced training image set, the N points are evenly distributed along the iris track at a spacing of W/N, and the coordinates of the N distributed points are taken as the sampling points.
The iris image is then rendered as an image of size N×N: the image is first scaled to a uniform size, then the weight of each coordinate point is judged from the fractional part of its coordinates to fill the sequence of the N×N-sized image, and this sequence is finally returned as the sampling result.
After transformation and sample regularization, sample sequences of uniform length are obtained, and the similarity of two points a = [a1, a2, ..., ad] and b = [b1, b2, ..., bd] in d-dimensional space is computed as the distance d(a, b) = sqrt(Σ_{i=1}^{d} (a_i - b_i)²).
The iris sample image with the highest computed similarity to the iris image to be matched is taken as the best-matching image.
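The local-optimum path search over D(r[i], t[j]) described above is dynamic time warping. A minimal sketch follows, using absolute difference as the pointwise distance; the text's exact local path constraint is not specified, so the usual three-step constraint (left, down, diagonal) is assumed here.

```python
import numpy as np

def dtw_distance(r, t):
    """Dynamic-programming alignment of two sequences: D(i, j) accumulates
    the pointwise distance along the best monotone path, as a sketch of
    the local-optimum path search described above."""
    r, t = np.asarray(r, float), np.asarray(t, float)
    n, m = len(r), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(r[i - 1] - t[j - 1])
            # local path constraint: step from left, below, or diagonal
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# identical sequences align at zero cost; a time-stretched copy stays at zero too
d_same = dtw_distance([1, 2, 3, 2, 1], [1, 2, 3, 2, 1])
d_shift = dtw_distance([1, 2, 3, 2, 1], [1, 1, 2, 3, 2, 1])
```

This tolerance to local stretching is why such an alignment is less sensitive to sampling-rate and timing differences than a rigid point-by-point distance.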
In a further embodiment of the present invention, the iris recognition step processes an offline user based on the user's eye video. The user's eye video data is first segmented in time, N_F key frames are extracted in turn, and iris feature maps within a preset time window centred on each key frame are extracted to construct the training state sets. A training vector group corresponding to each training state set is further constructed:
O = {o_{i,j,k} | i ∈ [1, N_F], j ∈ [1, N_C], k ∈ [1, N_V]}, where N_C is the number of segments after the iris video is split and N_V is the number of sample sequences of the presented iris. The vector group is divided into a test set and a training set, used respectively for parameter estimation and training of the recognition model.
Given the training vector group O = {o_{i,j,k} | i ∈ [1, N_F], j ∈ [1, N_C], k ∈ [1, N_V]} of iris m as training data, the three parameters A, B and ω of the conditional-random-field-based iris recognition model λ_m are solved.
A is the state-transition matrix: A = {a_ij = P(S_j | S_i)}, 1 ≤ i ≤ N_F, 1 ≤ j ≤ N_F, expressing the probability that the state at time t+1 is S_j given that the state at time t is S_i.
B is the observation probability matrix: B = {b_ij = P(O_j | S_i)}, 1 ≤ i ≤ N_F, 1 ≤ j ≤ N_F, expressing the probability of training state O_j at time t given that the hidden state is S_i.
In sample-sequence-based iris recognition, the reliability of the initialization parameters is assessed using the given training data, and the parameters are adjusted by reducing the error. Given the training vector group S_m = {s_k | k ∈ [1, N_v]} of some iris m, the iris recognition model λ_m = (A, B, ω) corresponding to iris m is established. Given an iris test sequence O_m and the initialization parameters of the corresponding conditional random field model λ_m, define γ_t(i) as the local probability that the hidden state at time t is S_i:
γ_t(i) = P(q_t = S_i | O_m, λ_m);
and define ρ_t(i, j) as the local probability that the hidden state at time t is S_i and transitions to hidden state S_j at time t+1:
ρ_t(i, j) = P(q_t = S_i, q_{t+1} = S_j | O_m, λ_m).
On the basis of the initial parameter values of λ_m, the parameter A of λ_m is iteratively refined using a_ij = (Σ_t ρ_t(i, j))/(Σ_t γ_t(i)), finally obtaining a group of locally optimal parameter values (A, B, ω).
In practical iris recognition applications, a conditional random field parameter self-correction method is used: for iris data under different illumination conditions, the conditional random field model parameters are adjusted with data consistent with the training environment, which can greatly improve the recognition rate.
Combining prior knowledge with the knowledge obtained from the self-correction data, linear interpolation is carried out between the initial conditional random field parameter values and the mean of the self-correction data to obtain the self-corrected mean vector. When the amount of self-correction data is sufficiently large, the model converges to the model that would be retrained on the actual training data, giving good consistency and asymptotic behaviour. Denote the conditional random field model distribution before self-correction by λ_m = (μ_ij, Ω_ij) and the corresponding distribution after self-correction by λ̃_m = (μ̃_ij, Ω̃_ij), where μ_ij and μ̃_ij are respectively the mean of the j-th normal distribution of state i before and after self-correction, and Ω_ij and Ω̃_ij are respectively the covariance matrices before and after self-correction. Given a self-correction iris test sequence O_Am = {v_i | i ∈ [1, N_v]}, set μ̃_ij = K·μ_ij + ε_ij, where ε_ij is the residual and K is the regression matrix.
The iris recognition problem is therefore converted into an evaluation problem over a group of conditional random fields, in which each vector v_k in the iris training vector set corresponds to a one-dimensional feature code O_k = o_k1 o_k2 ... o_kT of length T. The probability expectation with which each user's conditional random field iris recognition model generates all test sequences in the given iris training vector group is computed in turn,
and the results are sorted; the iris image corresponding to the conditional random field with the largest probability can be decided to be the most likely recognition target. The probability with which each conditional random field generates the given iris test sequence V is computed, specifically:
Step 1: sequentially computing the probability expectation P_m with which each iris recognition model generates the iris training vector group V, P_m = (1/N_v)·Σ_{k=1}^{N_v} P(O_k | λ_m);
Step 2: sorting and taking the iris m* corresponding to the model with the largest probability expectation, m* = argmax_m P_m.
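The evaluation-and-argmax procedure of steps 1 and 2 can be sketched with a forward-style recursion for P(O | λ), treating λ = (A, B, ω) as a transition matrix, an observation matrix, and an initial distribution. The two-state toy models below are illustrative assumptions; this is a generic sequence-model evaluation, not the patent's exact conditional random field formulation.

```python
import numpy as np

def sequence_likelihood(O, A, B, omega):
    """Forward-style evaluation of P(O | lambda) for lambda = (A, B, omega):
    A is the state-transition matrix, B the observation matrix (rows: states,
    columns: symbols), omega the initial state distribution."""
    alpha = omega * B[:, O[0]]          # initial step
    for o in O[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate then weight by observation
    return alpha.sum()

def recognize(O, models):
    """Pick the enrolled iris whose model assigns the test sequence the
    highest probability (the argmax step described above)."""
    scores = {m: sequence_likelihood(O, *lam) for m, lam in models.items()}
    return max(scores, key=scores.get)

# toy models: model 0 mostly emits symbol 0, model 1 mostly emits symbol 1
A = np.array([[0.9, 0.1], [0.1, 0.9]])
omega = np.array([0.5, 0.5])
B0 = np.array([[0.9, 0.1], [0.8, 0.2]])
B1 = np.array([[0.1, 0.9], [0.2, 0.8]])
models = {0: (A, B0, omega), 1: (A, B1, omega)}
best = recognize([0, 0, 1, 0], models)
```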
Preferably, the above regularization of the filtered training image set further comprises:
1. The image signal of the same iris sample is expressed as X(i, j), where i denotes the serial number of the sampling channel of the image signal sampling device, i ∈ [1, F], and j denotes the time series number. The maximum absolute value |X|_m of the F-channel image signal is used as the regularization standard, and the discrete time series of the regularized image signal is expressed as x(i, j) = X(i, j)/|X|_m.
2. At least one feature is selected from the multiple features of the F sampling channels as the primary feature code combination of the corresponding iris, and the corresponding feature matrix is formed from the unit feature vectors of this feature code combination.
After the feature matrix is formed, the method further comprises determining, from multiple samples, the feature matrix with the highest recognition rate and the lowest error rate, and performing model training on the determined feature matrix using a CNN to form the CNN model defining the iris. Specifically: first, the weight matrix is randomly initialized; the feature matrix is regularized, normalizing to the maximum difference of the same feature across the F channels of the multiple samples; and the number of nodes k of the single hidden layer is determined as k = sqrt(a + b) + c,
where a is the number of input-layer nodes, b is the number of output-layer nodes, and c is a constant.
The P learning samples are input sequentially, recording that the current input is the p-th sample.
The output of each layer is computed in turn: the input of hidden-layer neuron j is net_pj = Σ_i w_ji·o_pi, and its output is o_pj = f(net_pj), where o_pj is the output of neuron j and w_ji is the weight from the i-th neuron to the j-th neuron.
The output of output-layer neuron l is: o_pl = Σ_j w_lj·o_pj.
The error performance index of the p-th sample is E_p = (1/2)·Σ_l (t_pl - o_pl)², where t_pl is the target output of neuron l.
If p = P, the weights of each layer are corrected; the correction of the connection weight w_lj between the output layer and the hidden layer is Δw_lj = η·δ_pl·o_pj, with δ_pl = t_pl - o_pl.
The learning rule for the connection weight w_ji between the hidden layer and the input layer is Δw_ji = η·δ_pj·o_pi, with δ_pj = f′(net_pj)·Σ_l δ_pl·w_lj; n is the number of iterations, and η is the learning rate, η ∈ [0,1].
A momentum factor α is then added to the weights of each layer, the weights at this time being:
w_lj(n+1) = w_lj(n) + Δw_lj + α·(w_lj(n) - w_lj(n-1));
w_ji(n+1) = w_ji(n) + Δw_ji + α·(w_ji(n) - w_ji(n-1)); where the value of the momentum factor is α ∈ [0,1].
The output of each layer is recomputed with the new weights; if every sample satisfies the condition that the difference between the output and the target output is smaller than a predefined threshold, or the preset number of learning iterations has been reached, the process stops.
For the above offline iris video images, a further embodiment defines a self-similarity A_S and a cross-similarity B_S, computes the two similarity values, and computes the final similarity distance of the iris videos from A_S and B_S. Iris verification has two stages: an acquisition stage and a recognition stage. In the acquisition stage, an iris video is collected and saved as sample frames; in the recognition stage, a video is collected and matched against the sample frames to determine whether they are the iris of the same client.
First, the iris video to be matched is registered with the iris sample frames. The registered sample frames can be expressed as E = {F^E_1, F^E_2, ..., F^E_k}, and the iris video to be identified as C = {F^C_1, F^C_2, ..., F^C_k}, where k is the number of iris images contained in the video and F^E_i, F^C_i denote the i-th iris images.
In the acquisition stage, the A_S of the sample frames is computed as follows:
the similarity distance between every two iris images of the sample frames is computed, giving k(k-1)/2 similarity distances, and their mean is taken as the A_S of the video:
A_S = (2/(k(k-1))) · Σ_{i<j} d(F^E_i, F^E_j),
where d(F^E_i, F^E_j) denotes the similarity distance between F^E_i and F^E_j.
In the recognition stage, the B_S of the iris video to be matched is computed:
B_S = (1/k) · Σ_j d(F^E_max, F^C_j),
where F^E_max denotes the iris image with the largest area in the template, and d(F^E_max, F^C_j) denotes the similarity distance between that image and F^C_j.
The final similarity distance fuses A_S and B_S; the formula for computing the final similarity distance of the two iris videos is:
S = B_S + w·(B_S - A_S), where w is an adjustment weight. A_S and B_S are taken as the two features of one sample, so each sample can be represented by a two-dimensional feature vector (A_S, B_S). The match decision problem is thereby converted into a sample classification problem.
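A sketch of the fused score S = B_S + w·(B_S - A_S) follows. The Euclidean frame distance and the choice of E[0] as the reference sample frame stand in for the text's similarity distance and largest-area iris image; both are assumptions made here for illustration.

```python
import numpy as np

def video_similarity(E, C, dist, w=0.5):
    """Fused similarity distance for two iris videos: A_S is the mean
    pairwise distance within the sample frames E, B_S the mean distance
    between a reference sample frame and each frame of C, and
    S = B_S + w * (B_S - A_S)."""
    k = len(E)
    pairs = [dist(E[i], E[j]) for i in range(k) for j in range(i + 1, k)]
    A_S = sum(pairs) / len(pairs)            # k(k-1)/2 pairwise distances
    ref = E[0]                               # reference frame (assumption)
    B_S = sum(dist(ref, c) for c in C) / len(C)
    return A_S, B_S, B_S + w * (B_S - A_S)

euclid = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
E = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]     # tight cluster of sample frames
C_same = [[0.05, 0.05], [0.0, 0.1]]          # probe frames near the cluster
A_S, B_S, S = video_similarity(E, C_same, euclid)
```

Subtracting A_S normalizes the cross-video distance against the video's own internal variability, so a genuine match (B_S below A_S) is pulled down further.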
In the classification computation, an arbitrary sample x is expressed as the feature vector <a1(x), a2(x), ..., an(x)>, where a_k(x) denotes the k-th attribute value of sample x. The distance between two samples x_i and x_j is defined as:
d(x_i, x_j) = sqrt(Σ_{k=1}^{n} (a_k(x_i) - a_k(x_j))²).
For the discrete target function f: R^n → V, where R^n is the set of points of n-dimensional space and V is the finite set {v1, v2, ..., vs},
the return value f(x_q) is computed as the most common f value among the k training samples nearest to x_q:
f(x_q) = argmax_{v ∈ V} Σ_{i=1}^{k} Λ(v, f(x_i)),
where the function Λ(a, b) is defined as:
Λ(a, b) = 1 if a = b, otherwise Λ(a, b) = 0.
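The distance-and-vote rule above is k-nearest-neighbour classification. A minimal sketch over (A_S, B_S)-style two-dimensional feature vectors follows; the match/non-match labels and the toy training points are hypothetical.

```python
import math
from collections import Counter

def knn_classify(x_q, train, k):
    """k-nearest-neighbour vote as in the classification step above:
    f(x_q) is the most common label among the k training samples closest
    to x_q, i.e. the argmax over the sum of Lambda(v, f(x_i))."""
    dist = lambda a, b: math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    nearest = sorted(train, key=lambda item: dist(x_q, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# two-feature samples (A_S, B_S) with hypothetical match / non-match labels
train = [((0.1, 0.1), "match"), ((0.2, 0.1), "match"),
         ((0.9, 0.8), "nonmatch"), ((1.0, 0.9), "nonmatch")]
label = knn_classify((0.15, 0.12), train, k=3)
```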
In summary, the invention proposes an intelligent image processing method for an AI chip. The original iris image series to be identified is dimension-reduced using quantized feature parameters, symbol processing is then applied to the dimension-reduced training image set, and the sample matching process is simplified, reducing the computational complexity and the requirements on device orientation, allowing the user to perform gaze actions more flexibly, and improving the user experience.
Obviously, those skilled in the art should appreciate that each module or step of the invention described above can be implemented with a general-purpose computing system; they may be concentrated in a single computing system or distributed over a network formed by multiple computing systems. Optionally, they may be implemented with program code executable by a computing system, so that they can be stored in a storage system and executed by the computing system. Thus, the present invention is not limited to any specific combination of hardware and software.
It should be understood that the above specific embodiments of the invention are used only to exemplify or explain the principle of the invention, not to limit it. Therefore, any modification, equivalent replacement, improvement, and the like made without departing from the spirit and scope of the invention shall be included in the protection scope of the invention. In addition, the appended claims of the invention are intended to cover all variations and modifications falling within the scope and boundary of the appended claims, or the equivalent forms of that scope and boundary.

Claims (2)

1. An intelligent image processing method for an AI chip, characterized by comprising:
collecting user optical data for training, and symbol-encoding the training image set into an iris feature code;
matching the iris feature code of the training image set against a sample image set to realize iris recognition.
2. The method according to claim 1, wherein said symbol-encoding the training image set into an iris feature code further comprises:
selecting at least one feature from the multiple features of the multiple sampling channels as the feature code combination of the corresponding iris, and forming the corresponding feature matrix from the unit feature vectors of the feature code combination;
determining, from multiple samples, the feature matrix with the highest discrimination and the lowest error rate; performing model training on the determined feature matrix through a CNN process to form a CNN model defining the iris;
first randomly initializing the weight matrix; regularizing the feature matrix, the regularization normalizing to the maximum difference of the same feature across the channels of the multiple samples F; and determining the node number k of the single hidden layer as:
k = sqrt(a + b) + c
where a is the number of input-layer nodes, b is the number of output-layer nodes, and c is a constant;
sequentially inputting P learning samples, and recording the currently input sample as the p-th sample;
calculating the output of each layer in turn,
where the input of hidden-layer neuron j is net_pj = Σ_i w_ji o_pi, and o_pj = f(net_pj) is the output of neuron j, w_ji being the weight from the i-th neuron to the j-th neuron;
the output of output-layer neuron l being: o_pl = Σ_j w_lj o_pj;
the error performance index of the p-th sample being E_p = (1/2) Σ_l (t_pl - o_pl)^2,
where t_pl is the target output of neuron l;
if p = P, correcting the weights of each layer; the correction of the connection weight w_lj between the output layer and the hidden layer being:
Δw_lj = η (t_pl - o_pl) o_pj
and the learning process of the connection weight w_ji between the hidden layer and the input layer being:
Δw_ji = η δ_pj o_pi, with δ_pj = f'(net_pj) Σ_l (t_pl - o_pl) w_lj
where n is the iteration number and η is the learning rate, η ∈ [0, 1];
then adding a momentum factor α to the weight corrections, the weights at this time being:
w_lj(n+1) = w_lj(n) + Δw_lj + α(w_lj(n) - w_lj(n-1));
w_ji(n+1) = w_ji(n) + Δw_ji + α(w_ji(n) - w_ji(n-1));
where the value of the momentum factor is α ∈ [0, 1];
recalculating the output of each layer according to the new weights; if for every sample the difference between the output and the target output is less than a predefined threshold, or the preset number of learning iterations has been reached, the process stops.
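The training procedure of claim 2 can be sketched as follows. This is a minimal illustration, not the claimed implementation: a sigmoid hidden activation is assumed for f (the claim does not name it), the output layer is linear (consistent with o_pl = Σ_j w_lj o_pj), and the data, constant c, learning rate η, momentum α, and stopping threshold are illustrative values within the stated ranges:

```python
import math
import random

def sigmoid(x):
    # Assumed hidden activation f; input clamped to avoid overflow in exp.
    x = max(-60.0, min(60.0, x))
    return 1.0 / (1.0 + math.exp(-x))

def train_bp(samples, targets, eta=0.2, alpha=0.5, c=3, epochs=200, tol=0.05):
    a = len(samples[0])                    # input-layer node count
    b = len(targets[0])                    # output-layer node count
    k = int(round(math.sqrt(a + b))) + c   # hidden nodes: k = sqrt(a+b) + c
    rng = random.Random(0)                 # random weight initialization
    w_ji = [[rng.uniform(-1, 1) for _ in range(a)] for _ in range(k)]
    w_lj = [[rng.uniform(-1, 1) for _ in range(k)] for _ in range(b)]
    prev_dji = [[0.0] * a for _ in range(k)]   # previous changes, for the
    prev_dlj = [[0.0] * k for _ in range(b)]   # momentum term alpha*(...)
    worst = float("inf")
    for _ in range(epochs):
        worst = 0.0
        for x, t in zip(samples, targets):
            # Forward pass: o_pj = f(net_pj), o_pl = sum_j w_lj * o_pj.
            o_h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_ji]
            o_o = [sum(w * oj for w, oj in zip(row, o_h)) for row in w_lj]
            # Backward pass: linear output gives delta_pl = t_pl - o_pl;
            # hidden delta uses the sigmoid derivative o(1-o).
            d_o = [tl - ol for tl, ol in zip(t, o_o)]
            d_h = [o_h[j] * (1 - o_h[j]) * sum(d_o[l] * w_lj[l][j] for l in range(b))
                   for j in range(k)]
            # Weight corrections with momentum: w(n+1) = w(n) + dw + alpha*prev.
            for l in range(b):
                for j in range(k):
                    dw = eta * d_o[l] * o_h[j] + alpha * prev_dlj[l][j]
                    w_lj[l][j] += dw
                    prev_dlj[l][j] = dw
            for j in range(k):
                for i in range(a):
                    dw = eta * d_h[j] * x[i] + alpha * prev_dji[j][i]
                    w_ji[j][i] += dw
                    prev_dji[j][i] = dw
            worst = max(worst, max(abs(e) for e in d_o))
        if worst < tol:   # every sample within the predefined threshold
            break
    return w_ji, w_lj, worst
```

The sketch updates weights per sample and stops either when every sample's output error falls below the threshold or when the preset iteration count is exhausted, mirroring the two stopping conditions stated in the claim.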
CN201810910512.XA 2018-08-11 2018-08-11 Intelligent image processing method for AI chip Active CN109165586B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810910512.XA CN109165586B (en) 2018-08-11 2018-08-11 Intelligent image processing method for AI chip


Publications (2)

Publication Number Publication Date
CN109165586A true CN109165586A (en) 2019-01-08
CN109165586B CN109165586B (en) 2021-09-03

Family

ID=64895515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810910512.XA Active CN109165586B (en) 2018-08-11 2018-08-11 Intelligent image processing method for AI chip

Country Status (1)

Country Link
CN (1) CN109165586B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6944318B1 (en) * 1999-01-15 2005-09-13 Citicorp Development Center, Inc. Fast matching systems and methods for personal identification
CN102194114A (en) * 2011-06-25 2011-09-21 电子科技大学 Method for recognizing iris based on edge gradient direction pyramid histogram
CN106326841A (en) * 2016-08-12 2017-01-11 合肥虹视信息工程有限公司 Quick iris recognition algorithm
CN106326874A (en) * 2016-08-30 2017-01-11 天津中科智能识别产业技术研究院有限公司 Method and device for recognizing iris in human eye images


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KIEN NGUYEN et al.: "Iris Recognition With Off-the-Shelf CNN Features: A Deep Learning Perspective", IEEE ACCESS *
LI ZHIMING: "Research on Iris Liveness Detection Algorithm Based on Convolutional Neural Network", Graphics and Image Processing *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738194A (en) * 2020-06-29 2020-10-02 深圳力维智联技术有限公司 Evaluation method and device for similarity of face images
CN111738194B (en) * 2020-06-29 2024-02-02 深圳力维智联技术有限公司 Method and device for evaluating similarity of face images
CN113688266A (en) * 2021-09-13 2021-11-23 上海联影医疗科技股份有限公司 Image priority determining method, image processing device, image processing apparatus, and recording medium

Also Published As

Publication number Publication date
CN109165586B (en) 2021-09-03

Similar Documents

Publication Publication Date Title
CN111476302B (en) fast-RCNN target object detection method based on deep reinforcement learning
Yang et al. Infrared and visible image fusion via texture conditional generative adversarial network
CN108681752B (en) Image scene labeling method based on deep learning
CN110148104B (en) Infrared and visible light image fusion method based on significance analysis and low-rank representation
Esmaeili et al. Fast-at: Fast automatic thumbnail generation using deep neural networks
CN109671102B (en) Comprehensive target tracking method based on depth feature fusion convolutional neural network
CN111161311A (en) Visual multi-target tracking method and device based on deep learning
CN110210513B (en) Data classification method and device and terminal equipment
CN107633226B (en) Human body motion tracking feature processing method
CN107169117B (en) Hand-drawn human motion retrieval method based on automatic encoder and DTW
CN110263666B (en) Action detection method based on asymmetric multi-stream
EP3149611A1 (en) Learning deep face representation
CN109002755B (en) Age estimation model construction method and estimation method based on face image
CN111783532B (en) Cross-age face recognition method based on online learning
CN109284779A (en) Object detecting method based on the full convolutional network of depth
Yang et al. Geodesic clustering in deep generative models
CN110458235B (en) Motion posture similarity comparison method in video
CN109190505A Image recognition method based on visual understanding
CN111126155B (en) Pedestrian re-identification method for generating countermeasure network based on semantic constraint
Wang et al. Small vehicle classification in the wild using generative adversarial network
CN109165586A (en) intelligent image processing method for AI chip
CN109165587A (en) intelligent image information extraction method
CN113191361B (en) Shape recognition method
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
Kirstein et al. Rapid online learning of objects in a biologically motivated recognition architecture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210806

Address after: 410000 second floor, building 2, new Changhai Lugu center, No. 627 Lugu Avenue, Changsha high tech Development Zone, Changsha City, Hunan Province

Applicant after: Hunan Technology Technology Co.,Ltd.

Address before: 641103 No. 23, group 5, yangjiachong village, Shuangqiao Township, Dongxing District, Neijiang City, Sichuan Province

Applicant before: Shi Xiuying

GR01 Patent grant