CN112257688A - GWO-OSELM-based non-contact palm in-vivo detection method and device - Google Patents


Info

Publication number
CN112257688A
CN112257688A (application CN202011496830.XA)
Authority
CN
China
Prior art keywords
oselm
image
gwo
palm
lpq
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011496830.XA
Other languages
Chinese (zh)
Inventor
赵国栋
高旭
张烜
李学双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Shengdian Century Technology Co ltd
Original Assignee
Sichuan Shengdian Century Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Shengdian Century Technology Co ltd filed Critical Sichuan Shengdian Century Technology Co ltd
Priority to CN202011496830.XA priority Critical patent/CN112257688A/en
Publication of CN112257688A publication Critical patent/CN112257688A/en
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 — Fingerprints or palmprints
    • G06V40/1382 — Detecting the live character of the finger, i.e. distinguishing from a fake or cadaver finger
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/25 — Fusion techniques
    • G06F18/253 — Fusion techniques of extracted features
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 — Fingerprints or palmprints
    • G06V40/13 — Sensors therefor
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 — Fingerprints or palmprints
    • G06V40/1347 — Preprocessing; Feature extraction
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 — Fingerprints or palmprints
    • G06V40/1365 — Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to a GWO-OSELM-based non-contact palm liveness detection method and device. The method comprises the following steps: 1) collecting palm print images of a plurality of living and non-living palms as positive and negative training samples; 2) applying Gaussian filtering to the positive and negative training sample images; 3) fusing the LPQ features and the BSIF features into a total feature vector; 4) forming a classification model; 5) inputting the total feature vector into the classification model for training; 6) extracting the total feature vector of the image to be detected and inputting it into the trained classification model for liveness detection, determining whether the image to be detected is a palm print image of a living body. The invention uses the grey wolf optimization algorithm to select the input weights and biases, has good generalization performance and strong stability, and helps improve the efficiency of liveness detection.

Description

GWO-OSELM-based non-contact palm in-vivo detection method and device
Technical Field
The invention belongs to the technical field of biometric recognition and information security, and particularly relates to a non-contact palm liveness detection method and device based on GWO-OSELM.
Background
Non-contact palm print recognition, a newer biometric recognition technology, has good market prospects. However, to prevent a malicious attacker from stealing or forging another person's biometric features for identity authentication, a non-contact palm print recognition system needs a liveness detection function, i.e., it must determine whether the extracted palm print features come from a live, real individual. Palm liveness detection distinguishes, on the basis of palm detection, whether the palm in the currently acquired image is a live palm (a real person's palm) or a fake palm (a prosthesis imitating a real person's identity), so as to prevent criminals from fraudulently using a legitimate user's palm print information.
At present, most palm liveness detection algorithms focus on external-device analysis, motion-information analysis, image-texture analysis and the like. External-device analysis requires judgment by means of extra equipment, which increases cost. Motion-information analysis mainly detects and judges the motion information of the living body, which increases resource consumption and has certain limitations. Image-texture analysis discriminates mainly on the surface texture information of the biometric trait; for example, the palm print extraction and identification method disclosed in patent CN103198304B comprises the following steps: collecting a palm print image, preprocessing, extracting the palm contour, analyzing feature points on the contour, extracting the three main lines of the palm print, obtaining the feature points of the main lines, segmenting the palm print into regions, and searching each segmented small region for abnormal prints. Because traditional image-texture analysis does not exploit the noise-level difference in imaging quality between genuine and fake palm print images, its detection rate is low. In addition, differences in imaging environments and the diversity of attack modes pose huge challenges to traditional liveness detection methods.
The Grey Wolf Optimizer (GWO) is a swarm intelligence optimization algorithm proposed in 2014 by Mirjalili et al. of Griffith University, Australia. The algorithm is inspired by the hunting behavior of grey wolves and is characterized by strong convergence, few parameters and ease of implementation.
An Extreme Learning Machine (ELM) is a fast neural network learning algorithm proposed by Huang G.B. et al in 2004. Compared with traditional feedforward neural network training algorithms, it completes training quickly without repeated iterative adjustment. However, because ELM randomly initializes the input weights and hidden-layer biases, and the number of hidden nodes is difficult to determine, its performance can become extremely unstable and its prediction accuracy low. To address this, Liang et al proposed the Online Sequential Extreme Learning Machine (OSELM), which mitigates these problems to a certain extent, improves the prediction accuracy of the model, and is more stable.
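As context for the ELM/OSELM discussion above, here is a minimal sketch of the batch ELM idea OSELM extends (an illustrative NumPy implementation, not the patent's code): the input weights W and biases b are random, and only the output weights β are solved in closed form via the pseudo-inverse, which is why training needs no iterative tuning.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """X: (n_samples, n_features); T: (n_samples, n_outputs) targets."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden-layer output
    beta = np.linalg.pinv(H) @ T                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

OSELM keeps this structure but updates β incrementally as data arrives in chunks; the instability noted above comes from the randomly drawn W and b, which is exactly what the GWO step of this invention optimizes.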
The application of the grey wolf optimization algorithm and the online sequential extreme learning machine to palm print liveness detection has not yet been reported.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a GWO-OSELM-based non-contact palm liveness detection method, addressing the low detection efficiency caused by the poor generalization and stability of traditional palm liveness detection.
To solve this technical problem, the technical scheme provided by the invention is as follows:
The invention relates to a GWO-OSELM-based non-contact palm liveness detection method, comprising the following steps:
1) collecting palm print images of palms of a plurality of living bodies and non-living bodies as positive and negative training samples, extracting ROI from the images of the positive and negative training samples, and then preprocessing the images;
2) respectively carrying out Gaussian filtering processing on the positive and negative training sample images after the preprocessing;
3) extracting LPQ histogram features and BSIF features of positive and negative training sample images, and performing feature fusion on the LPQ histogram features and the BSIF features to form a total feature vector;
4) initializing the parameters of an OSELM model, and determining the input-layer weights and biases of the OSELM model by using the grey wolf optimization algorithm to form a GWO-OSELM classification model;
5) inputting the total feature vector into the GWO-OSELM classification model for training;
6) extracting the total feature vector of the image to be detected by the methods of steps 1) to 3), inputting it into the trained GWO-OSELM classification model for liveness detection and identification, and determining whether the image to be detected is a palm print image of a living body.
The method can quickly distinguish a real palm print image from a pseudo palm print image replayed on a mobile phone, greatly enhances the description of the texture detail features of the whole palm print image, and helps improve the security of palm print recognition.
Preferably, the specific steps of step 3) include:
3.1) calculating LPQ histogram characteristics: dividing the image into a plurality of rectangular blocks, calculating the quantization coefficient of each pixel on each rectangular block, generating a histogram of the quantization coefficient, and connecting the histogram vectors of all the rectangular blocks in series to obtain the LPQ histogram characteristics of the image;
3.2) calculating BSIF features: binarized statistical image features are used, whereby a group of filters of chosen size and number is obtained, and the BSIF features of the image are extracted with these filters;
3.3) fusing the LPQ histogram features and the BSIF features into an overall feature vector.
Preferably, the specific steps of calculating the LPQ histogram features in step 3.1) include:
3.1.1) performing a discrete Fourier transform over the M × M rectangular neighborhood N_x of each pixel point x on the grayscale image f(x) to extract the phase information F(u, x), i.e. formula (1):

F(u, x) = Σ_{y ∈ N_x} f(x − y) e^{−j2π u^T y} = w_u^T f_x        (1)

wherein w_u is the basis vector of the 2-dimensional discrete Fourier transform at frequency u, f_x is the vector composed of the gray values of the M² pixels in N_x, x denotes a pixel point on the grayscale image, y a pixel point in the rectangular neighborhood, T the transpose, j the imaginary unit, and e is the mathematical constant forming the base of the natural logarithm; f(x) denotes the grayscale image;
3.1.2) the LPQ histogram feature considers the Fourier coefficients at only the four frequency points u_1 = [a, 0]^T, u_2 = [0, a]^T, u_3 = [a, a]^T and u_4 = [a, −a]^T, where a is a frequency not exceeding the first zero crossing, a = 1/winSize, and winSize is an input parameter, i.e. formula (2):

F_x = [F(u_1, x), F(u_2, x), F(u_3, x), F(u_4, x)]        (2)

3.1.3) the phase information in the Fourier coefficients F_x is quantized by the scalar quantization of formula (3) to obtain a binary string q_j(x):

q_j(x) = 1 if g_j(x) ≥ 0, and q_j(x) = 0 otherwise        (3)

wherein g_j(x) is the j-th component of [Re{F_x}, Im{F_x}], j is an integer greater than 0 and less than or equal to 8, Re{·} denotes the real part, and Im{·} denotes the imaginary part;
3.1.4) the obtained binary string is composed into a feature value, namely the LPQ quantization coefficient F_LPQ(x) of the image, expressed in binary coding; the quantization coefficient is an integer in [0, 255], calculated by formula (4):

F_LPQ(x) = Σ_{j=1}^{8} q_j(x) · 2^{j−1}        (4)
preferably, the step 3.2) of calculating BSIF characteristics specifically includes:
3.2.1) setting an image block of size n × n pixelsXAnd a linear filter of the same size
Figure 881166DEST_PATH_IMAGE013
Filter response
Figure 516547DEST_PATH_IMAGE014
Calculated by equation (5):
Figure 595360DEST_PATH_IMAGE015
whereini is an integer greater than 0 and less than or equal to n; n is an integer greater than 0 and less than or equal to the image width, vector wiAnd x respectively represent the filtering WiAnd pixels of image block X, giIs a filter response, and obtains the corresponding binary characteristic b through the calculation of formula (6)iI.e. by
Figure 923573DEST_PATH_IMAGE016
3.2.2) setting m Linear filters WiThey are connected in series to form a series of m × n2The size of the matrix W is determined by the filter size n and the filteringDetermining the number m of the devices;
3.2.3) traversing the image, calculating the filter response of the image to all filters through a formula (7), and obtaining a corresponding binary sequence, wherein the image is represented by a decimal statistical histogram generated by the binary sequence, namely the BSIF coded image:
Figure 204513DEST_PATH_IMAGE017
where g represents the filter response of all filters.
Preferably, the specific steps of step 4) include:
4.1) setting the relevant parameters of the grey wolf optimization algorithm, including the wolf pack size N, the maximum number of iterations Max_iter, the search boundary, the search dimension and the fitness function;
4.2) initializing the grey wolf population and the OSELM model: the OSELM model randomly generates a group of hidden-layer input weights and biases, which serve as population members forming the initial population of the grey wolf algorithm;
4.3) calculating the fitness value of each wolf in the population: establishing an original OSELM model, performing prediction training with every individual of the initial population on the training data set, and using the root mean square error (RMSE) as the fitness function to calculate the fitness value fit, as shown in formula (8):

fit = RMSE = sqrt( (1/N) Σ_{j=1}^{N} (ŷ_j − y_j)² )        (8)

wherein ŷ_j is the training output, y_j is the measured value, j is an integer with 0 < j ≤ N, and N is the total number of samples;
4.4) using the individual with the optimal fitness value fit as the optimal solution to replace the input weights and biases randomly generated by the OSELM model, and performing initialization training of the OSELM model to obtain the initial output weight matrix β^(0);
4.5) performing the sequential training of the OSELM model to obtain the final output weight matrix β.
Preferably, the image preprocessing in step 1) includes size and gray scale normalization processing.
Preferably, in step 1), a mobile phone is used to collect real palm print images, and pseudo palm print images replayed on a mobile phone, respectively, as the positive and negative training samples.
The invention also relates to a GWO-OSELM-based non-contact palm liveness detection device, comprising:
1) the image preprocessing module is used for respectively acquiring a real palm print image and a pseudo palm print image copied by the mobile phone by using the mobile phone as a positive training sample and a negative training sample, extracting an ROI from the images of the positive training sample and the negative training sample, and preprocessing the images;
2) the Gaussian filtering module is used for respectively carrying out Gaussian filtering processing on the preprocessed positive and negative training sample images;
3) the feature vector extraction module is used for extracting LPQ histogram features and BSIF features of the positive and negative training sample images and performing feature fusion on the LPQ histogram features and the BSIF features to form a total feature vector;
4) the GWO-OSELM classification model generation module, used for initializing the parameters of an OSELM model and determining the input-layer weights and biases of the OSELM model by using the grey wolf optimization algorithm to form a GWO-OSELM classification model;
5) the training module is used for inputting the total feature vector into the GWO-OSELM classification model for training;
6) the judging module, used for extracting the feature vector of the test image, inputting it into the trained GWO-OSELM classification model for liveness detection and identification, and determining whether the test image is a palm print image of a living palm.
Compared with the prior art, the technical scheme provided by the invention has the following beneficial effects:
1. The invention uses the grey wolf optimization algorithm to optimize and select the hidden-layer input weights and biases of the OSELM classifier, obtaining a GWO-OSELM classifier, and then uses the constructed GWO-OSELM classifier to classify the collected live and non-live palm print image libraries for liveness detection. The method has good generalization performance and strong stability, and helps improve the efficiency of liveness detection.
2. The method concatenates the LPQ histogram feature and the BSIF feature, exploiting the strong descriptive power of the LPQ histogram feature; the BSIF feature can then accurately describe the local information of the image and capture its overall feature information. Because the LPQ histogram feature alone may extract incomplete palm print detail, the BSIF feature expresses the texture of the whole palm print image in finer detail on that basis, and concatenating the two feature vectors as the final feature greatly enhances the description of the texture detail features of the whole palm print image.
Drawings
FIG. 1 is a flow chart of the GWO-OSELM-based non-contact palm liveness detection method;
FIG. 2 is a collected palm print image of a living body;
FIG. 3 is a captured non-live palm print image;
FIG. 4 is an image of a real palm print image after ROI extraction, normalization processing and Gaussian filtering;
FIG. 5 is an image of a pseudo-palmprint image after ROI extraction, normalization processing and Gaussian filtering;
FIG. 6 is an image of a real palm print image after LPQ histogram feature extraction;
FIG. 7 is an image of a pseudo-palm print image after LPQ histogram feature extraction;
FIG. 8 is an image of a sample image after BSIF feature extraction;
FIG. 9 is a flow chart of selecting input weights and offsets for OSELM hidden layers using the GWO algorithm;
FIG. 10 is a schematic block diagram of GWO-OSELM-based non-contact palm liveness detection.
Detailed Description
For further understanding of the present invention, the present invention will be described in detail with reference to examples, which are provided for illustration of the present invention but are not intended to limit the scope of the present invention.
Example 1
Referring to FIG. 1, the GWO-OSELM-based non-contact palm liveness detection method of the invention comprises the following steps:
1) A mobile phone is used to collect real palm print images (live palm print images, the positive samples) and pseudo palm print images replayed on a mobile phone (non-live palm print images, the negative samples) as the positive and negative training samples; the ROI (region of interest) is extracted from the positive and negative training sample images, and the images are then preprocessed, including size and gray-level normalization.
2) Gaussian filtering is applied to the preprocessed positive and negative training sample images for denoising. The palm print images of the living body and non-living body before processing are shown in FIGS. 2 and 3, and the images after processing in FIGS. 4 and 5.
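The Gaussian denoising step can be sketched as a separable 2-D Gaussian blur; this is a plain-NumPy illustration, and the kernel radius and sigma are assumed values, since the patent does not specify them.

```python
import numpy as np

def gaussian_kernel1d(sigma=1.0, radius=2):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma=1.0, radius=2):
    """Blur a 2-D image by filtering rows then columns (separability)."""
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(img, radius, mode="edge")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)
    return out
```

Because the kernel sums to 1, flat regions keep their gray level while high-frequency noise is attenuated, which is the effect visible when comparing FIGS. 2–3 with FIGS. 4–5.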
3) Extracting LPQ histogram features and BSIF features of positive and negative training sample images, and performing feature fusion on the LPQ histogram features and the BSIF features to form a total feature vector, wherein the method specifically comprises the following steps:
3.1) calculating Local Phase Quantization (LPQ) histogram features: dividing the image into a plurality of rectangular blocks, calculating the quantization coefficient of each pixel on each rectangular block to generate a histogram of the quantization coefficient, and connecting the histogram vectors of all the rectangular blocks in series to obtain the LPQ histogram feature of the image, wherein the specific steps of the LPQ histogram feature calculation process are as follows:
3.1.1) a discrete Fourier transform is performed over the M × M rectangular neighborhood N_x of each pixel point x on the grayscale image f(x) to extract the phase information F(u, x), i.e. formula (1):

F(u, x) = Σ_{y ∈ N_x} f(x − y) e^{−j2π u^T y} = w_u^T f_x        (1)

wherein w_u is the basis vector of the 2-dimensional discrete Fourier transform at frequency u, f_x is the vector composed of the gray values of the M² pixels in N_x, x denotes a pixel point on the grayscale image, y a pixel point in the rectangular neighborhood, T the transpose, j the imaginary unit, and e is the mathematical constant forming the base of the natural logarithm; f(x) denotes the grayscale image;
3.1.2) the LPQ histogram feature considers the Fourier coefficients at only the four frequency points u_1 = [a, 0]^T, u_2 = [0, a]^T, u_3 = [a, a]^T and u_4 = [a, −a]^T, where a is a frequency not exceeding the first zero crossing, a = 1/winSize, and winSize is an input parameter, i.e. formula (2):

F_x = [F(u_1, x), F(u_2, x), F(u_3, x), F(u_4, x)]        (2)

3.1.3) the phase information in the Fourier coefficients F_x is quantized by the scalar quantization of formula (3) to obtain a binary string q_j(x):

q_j(x) = 1 if g_j(x) ≥ 0, and q_j(x) = 0 otherwise        (3)

wherein g_j(x) is the j-th component of [Re{F_x}, Im{F_x}], j is an integer greater than 0 and less than or equal to 8, Re{·} denotes the real part, and Im{·} denotes the imaginary part;
3.1.4) the obtained binary string is composed into a feature value, namely the LPQ quantization coefficient F_LPQ(x) of the image, expressed in binary coding; the quantization coefficient is an integer in [0, 255], calculated by formula (4):

F_LPQ(x) = Σ_{j=1}^{8} q_j(x) · 2^{j−1}        (4)
Since four fixed frequency points are used, yielding eight binary values per pixel, the feature vector has 256 dimensions; the obtained feature values form a histogram. FIGS. 6 and 7 show the results of applying this local feature extraction to the real and pseudo palm print images, respectively, from which the details of each texture can be clearly seen.
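The LPQ coding of steps 3.1.1)–3.1.4) can be sketched as follows. This is a didactic NumPy re-implementation, under the assumption winSize = M (so a = 1/M) and without the decorrelation step some LPQ variants add; it is not the patent's actual code.

```python
import numpy as np

def lpq_codes(img, M=3):
    """Per-pixel 8-bit LPQ codes over M x M neighborhoods (valid region only)."""
    a = 1.0 / M
    r = M // 2
    freqs = [(a, 0), (0, a), (a, a), (a, -a)]        # u1..u4 from formula (2)
    H, W = img.shape
    codes = np.zeros((H - 2 * r, W - 2 * r), dtype=np.uint8)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    for k, (u, v) in enumerate(freqs):
        w = np.exp(-2j * np.pi * (u * xs + v * ys))  # DFT basis at frequency (u, v)
        resp = np.zeros_like(codes, dtype=complex)
        for dy in range(M):                           # correlate window with image
            for dx in range(M):
                resp += w[dy, dx] * img[dy:dy + H - 2 * r, dx:dx + W - 2 * r]
        # formula (3): sign-quantize real and imaginary parts into two bits
        codes |= (resp.real >= 0).astype(np.uint8) << (2 * k)
        codes |= (resp.imag >= 0).astype(np.uint8) << (2 * k + 1)
    return codes                                      # values in [0, 255], formula (4)

def lpq_histogram(img, M=3):
    """256-bin histogram of the LPQ codes (one block; concatenate per block)."""
    return np.bincount(lpq_codes(img, M).ravel(), minlength=256)
```

In the method above this histogram is computed per rectangular block and the block histograms are concatenated.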
3.2) calculating Binarized Statistical Image Features (BSIF): a group of filters of chosen size and number is obtained from binarized statistical image features, and the BSIF features of the image are extracted with these filters, i.e.:
3.2.1) setting an image block X of size n × n pixels and a linear filter W_i of the same size; the filter response g_i is calculated by formula (5):

g_i = Σ_{u,v} W_i(u, v) X(u, v) = w_i^T x        (5)

wherein i is an integer greater than 0 and less than or equal to the number of filters m; the vectors w_i and x contain, respectively, the coefficients of the filter W_i and the pixels of the image block X; the corresponding binary feature b_i is obtained by formula (6):

b_i = 1 if g_i > 0, and b_i = 0 otherwise        (6)

3.2.2) setting m linear filters W_i and stacking them row by row into a matrix W of size m × n²; the size of W is thus determined by the filter size n and the number of filters m;
3.2.3) traversing the image and calculating its response to all filters by formula (7) to obtain the corresponding binary string; the image is then represented by the decimal statistical histogram generated from the binary strings, i.e. the BSIF-coded image:

g = W x        (7)

where g contains the filter responses of all filters. The image of the sample image after extracting the BSIF features is shown in FIG. 8.
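The BSIF coding of steps 3.2.1)–3.2.3) can be sketched as follows. Note the assumption: random filters stand in here for the pre-learned (ICA-trained) filter bank that BSIF normally uses, purely so the sketch is self-contained.

```python
import numpy as np

def bsif_codes(img, filters):
    """filters: (m, n, n) bank; returns per-pixel integer codes in [0, 2^m)."""
    m, n, _ = filters.shape
    r = n // 2
    H, W = img.shape
    codes = np.zeros((H - 2 * r, W - 2 * r), dtype=np.int64)
    for i in range(m):
        resp = np.zeros((H - 2 * r, W - 2 * r))      # g_i of formula (5)
        for dy in range(n):
            for dx in range(n):
                resp += filters[i, dy, dx] * img[dy:dy + H - 2 * r,
                                                 dx:dx + W - 2 * r]
        codes |= (resp > 0).astype(np.int64) << i    # b_i = 1 iff g_i > 0, formula (6)
    return codes

def bsif_histogram(img, filters):
    """Decimal statistical histogram of the BSIF codes (step 3.2.3)."""
    m = filters.shape[0]
    return np.bincount(bsif_codes(img, filters).ravel(), minlength=2 ** m)
```

With m = 8 filters the code space is 256 values, so the BSIF histogram has the same length as the LPQ histogram above.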
3.3) fusing the LPQ histogram features and the BSIF features into an overall feature vector.
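The fusion in step 3.3) is a concatenation of the two histograms into one total feature vector. Normalizing each histogram first is an added assumption (common practice, not stated in the patent) so that neither descriptor dominates by scale.

```python
import numpy as np

def fuse_features(lpq_hist, bsif_hist):
    """Concatenate L1-normalized LPQ and BSIF histograms into one vector."""
    lpq = lpq_hist / max(lpq_hist.sum(), 1)
    bsif = bsif_hist / max(bsif_hist.sum(), 1)
    return np.concatenate([lpq, bsif])
```

The fused vector is what is fed to the GWO-OSELM classifier in steps 5) and 6).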
4) The parameters of an OSELM model are initialized, and the input-layer weights and biases of the OSELM model are determined by the grey wolf optimization algorithm to form a GWO-OSELM classification model. In this step, based on the Online Sequential Extreme Learning Machine (OSELM) algorithm, the activation function of the hidden-layer neurons is set and the input weights and biases of the hidden layer are determined by the grey wolf optimization algorithm, thereby constructing the classification model.
Referring to FIG. 9, the specific steps of initializing the parameters of the OSELM model, determining its input-layer weights and biases by the grey wolf optimization algorithm, and forming the GWO-OSELM classification model are:
4.1) setting the relevant parameters of the grey wolf optimization algorithm, including the wolf pack size N = 40, the maximum number of iterations Max_iter = 50, the search boundary [0.01, 100], the search dimension [1, 3] and the fitness function;
4.2) initializing a gray wolf population, initializing an OSELM model, and randomly generating a group of hidden layer input weights and biases by the OSELM model to serve as population members to form an initial population of the gray wolf algorithm;
4.3) calculating the fitness value of each wolf in the population: establishing an original OSELM model, performing prediction training with every individual of the initial population on the training data set, and using the root mean square error (RMSE) as the fitness function to calculate the fitness value fit, as shown in formula (8):

fit = RMSE = sqrt( (1/N) Σ_{j=1}^{N} (ŷ_j − y_j)² )        (8)

wherein ŷ_j is the training output, y_j is the measured value, j is an integer with 0 < j ≤ N, and N is the total number of samples;
As can be seen from the above formula, a small fitness value means that an individual is highly competitive within the population and likely to be preserved into the next generation; the RMSE value is therefore used to search for the individual with the optimal fitness, which is recorded as the global optimal solution;
4.4) using the individual with the optimal fitness value fit as the optimal solution to replace the input weights and biases randomly generated by the OSELM model, and performing initialization training of the OSELM model to obtain the initial output weight matrix β^(0);
4.5) performing the sequential training of the OSELM model to obtain the final output weight matrix β.
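The GWO search of steps 4.1)–4.4) can be sketched as follows. This is a simplified illustration: the toy fitness function stands in for the RMSE of an OSELM trained with the candidate hidden-layer weights and biases, and the population size, iteration count and bounds here are small illustrative values rather than the patent's settings (N = 40, Max_iter = 50, boundary [0.01, 100]).

```python
import numpy as np

def gwo_minimize(fitness, dim, n_wolves=10, max_iter=30,
                 lb=-1.0, ub=1.0, seed=0):
    """Grey Wolf Optimizer: minimize `fitness` over a box [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(max_iter):
        scores = np.array([fitness(w) for w in wolves])
        order = np.argsort(scores)
        alpha, beta, delta = wolves[order[:3]]       # three best wolves lead
        a = 2 - 2 * t / max_iter                     # linearly decreasing coefficient
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = a * (2 * rng.random(dim) - 1)
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - wolves[i])   # distance to the leader
                new += leader - A * D                # step toward the leader
            wolves[i] = np.clip(new / 3, lb, ub)     # average of the three moves
    scores = np.array([fitness(w) for w in wolves])
    return wolves[np.argmin(scores)], float(scores.min())

# Toy stand-in for the OSELM-RMSE fitness: distance to a known optimum at 0.3.
best, val = gwo_minimize(lambda w: float(np.sum((w - 0.3) ** 2)), dim=4)
```

In the method itself, each wolf encodes a candidate set of hidden-layer input weights and biases, and the best wolf found replaces the random initialization of step 4.4).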
5) Inputting the total feature vector into an GWO-OSELM classification model for training;
6) The total feature vector of the image to be detected is extracted by the methods of steps 1) to 3) and input into the trained GWO-OSELM classification model for liveness detection and identification, determining whether the image to be detected is a palm print image of a living body.
Test examples
The following are the experimental results and analysis of several image databases using the palm biopsy method of the present invention.
For this test, three positive/negative palm print image databases collected by mobile phone were used: the first group consists of 1000 positive and 1000 negative samples, the second of 10000 positive and 8000 negative samples, and the third of 3000 positive and 1500 negative samples. In each group, 70% of the images were selected as the training set and 30% as the test set. The mobile phone camera has 48 megapixels; Visual Studio Community 2019 was used as the build environment, on a computer running 64-bit Windows 10 with 8 GB of memory and a 2.30 GHz clock. For each image library, following the non-contact palm liveness detection method of Embodiment 1, the images were first preprocessed after ROI extraction, then Gaussian-filtered to reduce the influence of noise, and the LPQ histogram and BSIF features were extracted to obtain the palm print texture information. The features extracted from the training-set images were collected as the training feature vector set and fed to the GWO-OSELM classifier for training; the fused features extracted from the test-set images were then fed, as the test feature vector set, to the trained GWO-OSELM classifier for classification and identification. In the results, the value to the left of "|" is the live detection rate and the value to the right is the non-live detection rate; the recognition results are shown in Table 1.
Table 1. Comparison of the detection rates of the various methods (algorithms)
(Table 1 is rendered as an image in the original publication.)
As can be seen from Table 1, the live classification accuracy of the proposed method reaches 100% on the different image libraries, and the non-live detection rate is above 99.09%. The live classification accuracy is about 5% higher than that of the original ELM and about 2% higher than that of the original OSELM. This shows that the GWO-OSELM-based non-contact palm liveness detection method proposed by the invention can effectively extract the key information of live and non-live palm print images and achieves a better liveness detection effect.
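As a rough illustration of the experimental pipeline described above, the preprocessing and feature-fusion stages might be sketched as follows in Python. This is a minimal sketch, not the patent's implementation: the `sigma` value and all function names are illustrative assumptions, and ROI extraction is assumed to have already been done.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(img, sigma=1.0):
    """Steps 1)-2): gray-level normalization to [0, 1] followed by
    Gaussian filtering to suppress sensor noise. (sigma is an
    illustrative choice; the patent does not specify it.)"""
    g = np.asarray(img, dtype=float)
    g = (g - g.min()) / max(float(np.ptp(g)), 1e-9)
    return gaussian_filter(g, sigma=sigma)

def total_feature_vector(lpq_hist, bsif_hist):
    """Step 3.3): serial fusion of the two descriptors into the single
    total vector that is fed to the GWO-OSELM classifier."""
    return np.concatenate([np.asarray(lpq_hist), np.asarray(bsif_hist)])
```

The fused vector would then be split 70/30 into training and test sets exactly as in the experiment above.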
Example 2
Referring to fig. 10, the GWO-OSELM-based non-contact palm liveness detection device of this embodiment includes:
1) an image preprocessing module, which uses a mobile phone to collect a real palm print image and a fake palm print image re-photographed from a phone, as positive and negative training samples respectively, extracts the ROI from the positive and negative training sample images, and preprocesses the images; the image preprocessing module implements the function of step 1) of Embodiment 1;
2) a Gaussian filtering module, which performs Gaussian filtering on the preprocessed positive and negative training sample images respectively; the Gaussian filtering module implements the function of step 2) of Embodiment 1;
3) a feature vector extraction module, which extracts the LPQ histogram features and BSIF features of the positive and negative training sample images and fuses them into a total feature vector; the feature vector extraction module implements the function of step 3) of Embodiment 1;
4) a GWO-OSELM classification model generation module, which initializes the parameters of the OSELM model and determines the input-layer weights and biases of the OSELM model with the grey wolf optimization algorithm, forming the GWO-OSELM classification model; the GWO-OSELM classification model generation module implements the function of step 4) of Embodiment 1;
5) a training module, which inputs the total feature vector into the GWO-OSELM classification model for training; the training module implements the function of step 5) of Embodiment 1;
6) a discrimination module, which extracts the feature vector of the image to be detected, inputs it into the trained GWO-OSELM classification model for liveness detection and identification, and determines whether the image to be detected is a palm print image of a living body; the discrimination module implements the function of step 6) of Embodiment 1.
Obviously, the non-contact palm liveness detection device of this embodiment can serve as the executing body of the non-contact palm liveness detection method of Embodiment 1, and can therefore realize the functions of that method. Since the principle is the same, it is not described in detail here.
The present invention has been described in detail with reference to the embodiments, but the description covers only preferred embodiments and should not be construed as limiting the scope of the invention. All equivalent changes and modifications made within the spirit of the present invention shall fall within its protection scope.

Claims (8)

1. A GWO-OSELM-based non-contact palm liveness detection method, characterized in that it comprises the following steps:
1) collecting palm print images of a plurality of living and non-living palms as positive and negative training samples, extracting the ROI from the positive and negative training sample images, and then preprocessing the images;
2) performing Gaussian filtering on the preprocessed positive and negative training sample images respectively;
3) extracting LPQ histogram features and BSIF features of the positive and negative training sample images, and fusing the LPQ histogram features and the BSIF features into a total feature vector;
4) initializing the parameters of the grey wolf algorithm and of the OSELM model, and determining the input-layer weights and biases of the OSELM model with the grey wolf optimization algorithm, forming the GWO-OSELM classification model;
5) inputting the total feature vector into the GWO-OSELM classification model for training;
6) extracting the total feature vector of the image to be detected with the methods of steps 1) to 3), inputting it into the trained GWO-OSELM classification model for liveness detection and identification, and determining whether the image to be detected is a palm print image of a living body.
2. The GWO-OSELM-based non-contact palm liveness detection method of claim 1, wherein the specific steps of step 3) comprise:
3.1) calculating the LPQ histogram features: the image is divided into several rectangular blocks, the quantization coefficient of each pixel in each rectangular block is calculated, the histogram of the quantization coefficients is generated, and the histogram vectors of all rectangular blocks are concatenated to obtain the LPQ histogram features of the image;
3.2) calculating the BSIF features: image statistics are encoded in binarized form; a set of filters of given size and number is obtained from these image statistics, and the filters are used to extract the BSIF features of the image;
3.3) fusing the LPQ histogram features and the BSIF features into a total feature vector.
3. The GWO-OSELM-based non-contact palm liveness detection method of claim 2, wherein the specific steps of calculating the LPQ histogram features in step 3.1) comprise:
3.1.1) for each pixel point x on the grayscale image f(x), a discrete Fourier transform is computed over its rectangular M × M neighborhood $N_x$ to extract the phase information F(u, x), i.e., formula (1):

$$F(u,x)=\sum_{y\in N_x} f(x-y)\,e^{-j2\pi u^{T}y}=w_u^{T}f_x \qquad (1)$$

where $w_u$ is the basis vector of the 2-dimensional discrete Fourier transform at frequency u, $f_x$ is the vector formed by the gray values of the $M^2$ pixels in $N_x$, x denotes a pixel point on the grayscale image, y denotes a pixel point in the rectangular neighborhood,

$$w_u = e^{-j2\pi u^{T}y},$$

T denotes the transpose, $j=\sqrt{-1}$, e is the mathematical constant that is the base of the natural logarithm function, and f(x) denotes the grayscale image;
3.1.2) the LPQ histogram features consider the Fourier coefficients only at the four frequency points $u_1=[a,0]^T$, $u_2=[0,a]^T$, $u_3=[a,a]^T$ and $u_4=[a,-a]^T$, where a is a scalar frequency below the first zero crossing of the transform, with a = 1/winSize as the input parameter, i.e., formula (2):

$$F_x=\big[F(u_1,x),\;F(u_2,x),\;F(u_3,x),\;F(u_4,x)\big] \qquad (2)$$
3.1.3) the phase information in the Fourier coefficients $F_x$ is quantized with the sign quantization of formula (3) to obtain the binary string $q_j(x)$:

$$q_j(x)=\begin{cases}1,&g_j(x)\ge 0\\0,&g_j(x)<0\end{cases} \qquad (3)$$

where $g_j(x)$ is the j-th component of $\big[\mathrm{Re}\{F_x\},\,\mathrm{Im}\{F_x\}\big]$, j is an integer greater than 0 and less than or equal to 8, Re{ } denotes the real part, and Im{ } denotes the imaginary part;

3.1.4) the obtained binary string is assembled into a feature value, the LPQ histogram feature of the image; expressed in binary coding, the quantization coefficient $F_{LPQ}(x)$ is an integer in [0, 255], computed by formula (4):

$$F_{LPQ}(x)=\sum_{j=1}^{8}q_j(x)\,2^{j-1} \qquad (4)$$
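Steps 3.1.1) to 3.1.4) can be sketched in Python roughly as follows. This is a minimal version without the decorrelation step used in some LPQ variants; the window size, block layout and all function names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def _freq_response(img, u, r):
    """Correlate img with the complex kernel exp(-j*2*pi*u.y) over a
    (2r+1)x(2r+1) window (edge padding), cf. formula (1)."""
    ys = np.arange(-r, r + 1)
    Y, X = np.meshgrid(ys, ys, indexing="ij")
    kern = np.exp(-2j * np.pi * (u[0] * Y + u[1] * X))
    H, W = img.shape
    pad = np.pad(img, r, mode="edge").astype(complex)
    out = np.zeros(img.shape, dtype=complex)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += kern[dy, dx] * pad[dy:dy + H, dx:dx + W]
    return out

def lpq_codes(img, win_size=7):
    """8-bit LPQ code per pixel: signs of Re/Im of the local transform at
    u1=[a,0], u2=[0,a], u3=[a,a], u4=[a,-a] with a = 1/win_size."""
    a = 1.0 / win_size
    r = win_size // 2
    comps = []
    for u in [(a, 0.0), (0.0, a), (a, a), (a, -a)]:
        F = _freq_response(np.asarray(img, dtype=float), u, r)
        comps.append(np.real(F) >= 0)   # sign quantization, formula (3)
        comps.append(np.imag(F) >= 0)
    codes = np.zeros(img.shape, dtype=np.uint8)
    for j, q in enumerate(comps):
        codes |= q.astype(np.uint8) << j   # formula (4): sum q_j * 2^(j-1)
    return codes

def lpq_histogram(img, win_size=7, blocks=(2, 2)):
    """Blockwise 256-bin histograms, concatenated (step 3.1 of claim 2)."""
    codes = lpq_codes(img, win_size)
    h, w = codes.shape
    feats = []
    for bi in range(blocks[0]):
        for bj in range(blocks[1]):
            blk = codes[bi * h // blocks[0]:(bi + 1) * h // blocks[0],
                        bj * w // blocks[1]:(bj + 1) * w // blocks[1]]
            feats.append(np.histogram(blk, bins=256, range=(0, 256))[0])
    return np.concatenate(feats)
```

With the default 2 × 2 block layout the descriptor has 4 × 256 = 1024 dimensions; the patent leaves the block count open, so this is only one possible configuration.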
4. The GWO-OSELM-based non-contact palm liveness detection method of claim 2, wherein the specific steps of calculating the BSIF features in step 3.2) comprise:
3.2.1) for an image block X of size n × n pixels and a linear filter $W_i$ of the same size, the filter response $g_i$ is calculated by formula (5):

$$g_i=\sum_{u,v}W_i(u,v)\,X(u,v)=w_i^{T}x \qquad (5)$$

where i is an integer greater than 0 and less than or equal to the number of filters m, the vectors $w_i$ and x contain, respectively, the coefficients of the filter $W_i$ and the pixels of the image block X, and $g_i$ is the filter response; the corresponding binarized feature $b_i$ is obtained by formula (6):

$$b_i=\begin{cases}1,&g_i>0\\0,&\text{otherwise}\end{cases} \qquad (6)$$
3.2.2) m linear filters $W_i$ are set and concatenated into a matrix W of size m × n²; the size of W is determined by the filter size n and the number of filters m;
3.2.3) the image is traversed and its responses to all filters are computed by formula (7), giving the corresponding binary string; the image is then represented by the decimal statistical histogram generated from the binary strings, i.e., the BSIF-coded image:

$$g=Wx \qquad (7)$$

where g denotes the vector of filter responses of all filters.
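Steps 3.2.1) to 3.2.3) can be sketched as follows. Note this is a minimal sketch: real BSIF filters are learned by ICA from natural image patches, so the zero-mean random filters used in the test are only a stand-in, and the function name is an illustrative assumption.

```python
import numpy as np

def bsif_histogram(img, filters):
    """BSIF code histogram per claim 4. `filters` has shape (m, n, n);
    in real BSIF these are ICA-learned from natural images."""
    m, n, _ = filters.shape
    r = n // 2
    H, W = img.shape
    pad = np.pad(np.asarray(img, dtype=float), r, mode="edge")
    codes = np.zeros((H, W), dtype=np.int64)
    for i in range(m):
        resp = np.zeros((H, W))           # g_i = w_i^T x, formula (5)
        for dy in range(n):
            for dx in range(n):
                resp += filters[i, dy, dx] * pad[dy:dy + H, dx:dx + W]
        codes |= (resp > 0).astype(np.int64) << i   # b_i, formula (6)
    # decimal statistical histogram of the m-bit codes (step 3.2.3)
    return np.histogram(codes, bins=2 ** m, range=(0, 2 ** m))[0]
```

With m = 8 filters the histogram has 256 bins, matching the 8-bit LPQ code length, which keeps the two fused descriptors comparable in dimension.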
5. The GWO-OSELM-based non-contact palm liveness detection method of claim 1, wherein the specific steps of step 4) comprise:
4.1) setting the relevant parameters of the grey wolf optimization algorithm, including the wolf pack size N, the maximum number of iterations Max_iter, the search boundary, the search dimension and the fitness function;
4.2) initializing the grey wolf population: the OSELM model is initialized and randomly generates a set of hidden-layer input weights and biases, which serve as population members forming the initial population of the grey wolf algorithm;
4.3) calculating the fitness value of each wolf in the population: an original OSELM model is built, prediction training is performed with each individual of the initial population on the training data set, and the fitness value fit is calculated with the root mean square error (RMSE) as the fitness function, as shown in formula (8):

$$fit=RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\big(\hat{y}_i-y_i\big)^2} \qquad (8)$$

where $\hat{y}_i$ is the training output, $y_i$ is the measured value, i is an integer with 0 < i ≤ N, and N is the total number of samples;
4.4) the individual with the optimal fitness value fit is taken as the optimal solution and replaces the randomly generated input weights and biases of the OSELM model; initialization training of the OSELM model then yields the initial output weight matrix $\beta^{(0)}$;
4.5) the sequential (online) training of the OSELM model is carried out to obtain the final output weight matrix $\beta$.
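Steps 4.1) to 4.4) can be illustrated with a toy grey wolf search over ELM input weights and biases, using the RMSE of formula (8) as the fitness function. A simplified batch ELM stands in for the OSELM model here (the sequential update of step 4.5 is omitted), and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def elm_rmse_fitness(pos, X, T, L):
    """Fitness of one wolf (formula (8)): decode input weights W (d x L)
    and biases b (L,) from the position vector, solve the ELM output
    weights by least squares, and return the training RMSE."""
    d = X.shape[1]
    W = pos[:d * L].reshape(d, L)
    b = pos[d * L:d * L + L]
    Hm = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden layer
    beta = np.linalg.pinv(Hm) @ T             # least-squares output weights
    return float(np.sqrt(np.mean((Hm @ beta - T) ** 2)))

def gwo_minimize(fitness, dim, n_wolves=10, max_iter=30,
                 lb=-1.0, ub=1.0, seed=0):
    """Plain grey wolf optimizer: wolves move toward the alpha, beta and
    delta leaders while the coefficient a decays linearly from 2 to 0."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lb, ub, (n_wolves, dim))
    fit = np.array([fitness(p) for p in pos])
    i = int(np.argmin(fit))
    best_pos, best_fit = pos[i].copy(), float(fit[i])
    for t in range(max_iter):
        leaders = pos[np.argsort(fit)[:3]].copy()   # alpha, beta, delta
        a = 2.0 - 2.0 * t / max_iter
        for k in range(n_wolves):
            new = np.zeros(dim)
            for lead in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += lead - A * np.abs(C * lead - pos[k])
            pos[k] = np.clip(new / 3.0, lb, ub)
        fit = np.array([fitness(p) for p in pos])
        i = int(np.argmin(fit))
        if fit[i] < best_fit:                       # keep the best wolf seen
            best_pos, best_fit = pos[i].copy(), float(fit[i])
    return best_pos, best_fit
```

The returned `best_pos` plays the role of the optimal input weights and biases that replace the OSELM model's random initialization in step 4.4).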
6. The GWO-OSELM-based non-contact palm liveness detection method of claim 1, wherein the image preprocessing in step 1) comprises size and gray-level normalization.
7. The GWO-OSELM-based non-contact palm liveness detection method of claim 1, wherein in step 1) a mobile phone is used to collect a real palm print image and a fake palm print image re-photographed from a mobile phone, as the positive and negative training samples respectively.
8. A GWO-OSELM-based non-contact palm liveness detection device, characterized in that it comprises:
1) an image preprocessing module for collecting palm print images of a plurality of living and non-living palms as positive and negative training samples, extracting the ROI (region of interest) from the positive and negative training sample images, and preprocessing the images;
2) a Gaussian filtering module for performing Gaussian filtering on the preprocessed positive and negative training sample images respectively;
3) a feature vector extraction module for extracting the LPQ histogram features and BSIF features of the positive and negative training sample images and fusing them into a total feature vector;
4) a GWO-OSELM classification model generation module for initializing the parameters of the OSELM model and determining its input-layer weights and biases with the grey wolf optimization algorithm, forming the GWO-OSELM classification model;
5) a training module for inputting the total feature vector into the GWO-OSELM classification model for training;
6) a discrimination module for extracting the feature vector of the image to be detected, inputting it into the trained GWO-OSELM classification model for liveness detection and identification, and determining whether the test image is a palm print image of a living palm.
CN202011496830.XA 2020-12-17 2020-12-17 GWO-OSELM-based non-contact palm in-vivo detection method and device Pending CN112257688A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011496830.XA CN112257688A (en) 2020-12-17 2020-12-17 GWO-OSELM-based non-contact palm in-vivo detection method and device


Publications (1)

Publication Number Publication Date
CN112257688A true CN112257688A (en) 2021-01-22

Family

ID=74224950



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114124517A (en) * 2021-11-22 2022-03-01 码客工场工业科技(北京)有限公司 Industrial Internet intrusion detection method based on Gaussian process
CN114124517B (en) * 2021-11-22 2024-05-28 码客工场工业科技(北京)有限公司 Industrial Internet intrusion detection method based on Gaussian process

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060078176A1 (en) * 2004-10-08 2006-04-13 Fujitsu Limited Biometric information input device, biometric authentication device, biometric information processing method, and computer-readable recording medium recording biometric information processing program
US20160328622A1 (en) * 2012-08-17 2016-11-10 Flashscan3D, Llc System and method for a biometric image sensor with spoofing detection
CN107358144A (en) * 2017-05-20 2017-11-17 深圳信炜科技有限公司 Image identification system and electronic installation
CN107918769A (en) * 2017-11-29 2018-04-17 深圳市奈士迪技术研发有限公司 It is a kind of that there is the safe and reliable fingerprint identification device of living body authentication
US20190050618A1 (en) * 2017-08-09 2019-02-14 The Board Of Trustees Of The Leland Stanford Junior University Interactive biometric touch scanner
CN109948566A (en) * 2019-03-26 2019-06-28 江南大学 A kind of anti-fraud detection method of double-current face based on weight fusion and feature selecting
CN110689353A (en) * 2019-09-24 2020-01-14 青岛网信信息科技有限公司 Identity recognition method and device based on palm print and finger pressure and application


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHANG RONGFEI: "Research on Fingerprint Liveness Detection Technology Based on Extreme Learning Machine", China Master's Theses Full-text Database, Information Science and Technology *
LI XIAOMING: "Research on Image-based Palm Print Liveness Detection Methods", China Master's Theses Full-text Database, Information Science and Technology *
LIN SEN et al.: "Application of sub-region local phase quantization in palm vein recognition", Chinese Journal of Scientific Instrument *
WANG JUNJIE: "Research on an Intelligent Control Algorithm for a Manipulator Based on Visual Servoing", China Master's Theses Full-text Database, Information Science and Technology *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210122