CN110866507A - Method for protecting mobile phone chatting content based on iris recognition - Google Patents
Info
- Publication number
- CN110866507A CN110866507A CN201911141090.5A CN201911141090A CN110866507A CN 110866507 A CN110866507 A CN 110866507A CN 201911141090 A CN201911141090 A CN 201911141090A CN 110866507 A CN110866507 A CN 110866507A
- Authority
- CN
- China
- Prior art keywords
- layer
- iris
- mobile phone
- image
- chat
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/478—Contour-based spectral representations or scale-space representations, e.g. by Fourier analysis, wavelet analysis or curvature scale-space [CSS]
Abstract
A method for protecting mobile phone chat content based on iris recognition belongs to the technical field of network privacy protection. It runs on a mobile phone equipped with an infrared sensor and protects chat content by acquiring and transmitting the chat partner's face image in real time and recognizing the partner's iris. The method comprises the following steps. Step 1: select a face image of the chat partner, pass it in sequence through an eye-region extraction module, an iris segmentation module, a normalization module and an encoding module, and encode the original iris image as a 0/1 string. Step 2: the infrared sensor of the partner's mobile phone captures the partner's face image every 3 seconds and passes it through the same modules to encode the iris image at each moment as a 0/1 string. Step 3: compare the 0/1 strings obtained in the two steps by Hamming distance and control the chat accordingly. The arrival of 5G provides sufficient bandwidth and speed for real-time face transmission. Using iris recognition for identity verification in mobile phone chat is convenient, easy to implement, reliable and secure; combined with a convolutional neural network, it monitors the chat partner's identity in real time, prevents chatting from continuing when the other end has been taken over by someone else, and protects personal privacy.
Description
Technical Field
The invention relates to the technical field of network privacy protection, and in particular to a method for protecting mobile phone chat content based on iris recognition.
Background
Social software has become the most frequently used kind of software on mobile phones: rather than making phone calls, more and more people communicate directly through social apps. Controlling who may view chat messages has therefore become an important problem in protecting personal privacy from intrusion.
The iris is invariant throughout life and always carried with its owner. Used for identity authentication, it avoids the problems of externally supplied keys that are too long and hard to remember, and offers higher security and reliability. The arrival of 5G provides sufficient bandwidth and speed for real-time face transmission, and the iris can be extracted from the transmitted face for recognition. In recent years, convolutional neural networks have also shown excellent performance in iris segmentation, making real-time identification of the chat partner feasible.
Disclosure of Invention
To address the uncertainty of the partner's identity and the risk of privacy disclosure in network chat, the invention provides a method for protecting mobile phone chat content based on iris recognition.
To this end, the invention provides a method for protecting mobile phone chat content based on iris recognition, which comprises the following modules:
the eye-region extraction module is used for extracting the eye region from an input face image;
the iris segmentation module is used for segmenting, with a convolutional neural network, the iris region between the pupil boundary and the outer iris boundary in the eye image produced by the eye-region extraction module;
the normalization module is used for normalizing the iris region obtained by the iris segmentation module to obtain a normalized image;
and the encoding module is used for encoding the normalized image obtained by the normalization module into a 0/1 string via wavelet transform.
The method comprises the following steps:
step 1, selecting a face image of the chat partner, passing it through the above modules in sequence, and encoding the original iris image as a 0/1 string;
step 2, acquiring the partner's face image in real time with the infrared sensor of the partner's mobile phone, and passing it through the modules again to encode the iris image at each moment as a 0/1 string;
step 3, comparing the 0/1 strings obtained in the two steps and controlling the chat accordingly;
The iris segmentation module finally outputs a binary feature vector, which can be displayed as a picture containing only black and white, the white part marking the iris region.
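The binary feature vector described above can be sketched as follows, assuming (as the patent does not specify) that the segmentation network emits a per-pixel probability map that is thresholded into a black-and-white mask; the threshold value 0.5 is illustrative:

```python
import numpy as np

def mask_to_image(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert a per-pixel iris probability map into a black-and-white
    uint8 image; white (255) pixels mark the iris region."""
    return np.where(prob_map > threshold, 255, 0).astype(np.uint8)

probs = np.array([[0.1, 0.9],
                  [0.8, 0.2]])
img = mask_to_image(probs)  # [[0, 255], [255, 0]]
```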
In step 1, encoding the original iris image as a 0/1 string comprises the following steps:
step 1.1, selecting a face image of the chat partner from the photo album of a mobile phone equipped with an infrared sensor and inputting it into the eye-region extraction module to extract the eye region;
step 1.2, inputting the eye region into the iris segmentation module and segmenting the iris region in the image with the pre-trained convolutional neural network;
step 1.3, passing the obtained iris region to the normalization module to obtain a normalized image;
step 1.4, inputting the obtained normalized image into the encoding module, converting it into a 0/1 string by wavelet transform, and storing the string in the phone as the original iris feature.
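The wavelet-based 0/1 encoding of step 1.4 can be sketched as below. The patent does not fix the wavelet family or decomposition depth, so a one-level row-wise Haar transform with sign binarization is an assumption chosen for illustration:

```python
import numpy as np

def haar_encode(normalized: np.ndarray) -> str:
    """Binarize the detail coefficients of a one-level Haar transform
    applied row-wise to the normalized iris image (even width assumed)."""
    even = normalized[:, 0::2].astype(float)   # pairs of horizontal neighbours
    odd = normalized[:, 1::2].astype(float)
    detail = (even - odd) / np.sqrt(2.0)       # Haar detail coefficients
    bits = (detail > 0).astype(np.uint8)       # keep only the sign -> 0/1
    return "".join(map(str, bits.ravel()))

# two detail coefficients per row of a 2x4 image -> a 4-bit string
code = haar_encode(np.array([[10, 2, 3, 7],
                             [5, 5, 1, 9]]))
```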
in the step 2, encoding 01 the original iris image at each moment, including the following steps:
step 2.1, the infrared sensor of the mobile phone of the opposite side continuously acquires a plurality of face images of the opposite side every 3 seconds, judges the definition of the face images, and inputs the face images into the eye region extraction module to extract the eye region when the definition is greater than a set threshold value α;
step 2.2, inputting the eye region into an iris segmentation module, and segmenting the iris region in the image by the convolutional neural network;
step 2.3, transmitting the obtained iris area to a normalization module to obtain a normalized image;
step 2.4, inputting the obtained normalized image into a coding module, and converting the normalized image into 01 strings by using wavelet transform;
in step 3, controlling chat, comprising the following steps:
step 3.1, setting a Hamming distance threshold β for judging the similarity of iris features;
step 3.2, comparing the original iris characteristics with the Hamming distance between the iris characteristics acquired by the mobile phone in real time;
step 3.3, when the Hamming distance is smaller than the threshold β, matching is passed, chatting can be continued, otherwise, the chatting is stopped;
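Steps 3.1 to 3.3 amount to a normalized Hamming distance test between the two 0/1 strings. A minimal sketch follows; the threshold value β = 0.32 is purely illustrative, since the patent leaves the threshold unspecified:

```python
def hamming_distance(a: str, b: str) -> float:
    """Fraction of positions at which two equal-length 0/1 strings differ."""
    if len(a) != len(b):
        raise ValueError("iris codes must have equal length")
    return sum(x != y for x, y in zip(a, b)) / len(a)

def chat_allowed(stored_code: str, live_code: str, beta: float = 0.32) -> bool:
    # beta is an illustrative threshold; the patent leaves its value open
    return hamming_distance(stored_code, live_code) < beta
```

For example, `chat_allowed("1010", "1000")` passes (distance 0.25 < 0.32), while a completely mismatched code such as `"0101"` terminates the chat.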
the pre-trained convolutional neural network in the step comprises a convolutional layer, a pooling layer and a transposition convolutional layer, and a binary feature vector is obtained by inputting the eye gray level image of the chat object into the network, wherein the specific network structure is as follows:
an input layer: inputting a chatting object eye gray level image;
layer C1: and (4) rolling up the layers. The layer is provided with 8 filters with the size of 3 multiplied by 3, the sliding step length is 1, the filling mode is SAME, batch standardization is carried out, the learning process is accelerated, and the later network layer is more robust. The activation function is the Relu function.
Layer S1: a maximum pooling layer. The maximum value in 2 multiplied by 2 neighborhood points of each group of feature mapping in the C1 layer is reserved, and the sliding step length is 2;
layer C2: and (4) rolling up the layers. The layer has 16 filters with the size of 3 multiplied by 3, the sliding step length is 1, the filling mode is SAME, batch standardization is carried out in the SAME way, and the activation function is a Relu function;
layer S2: a maximum pooling layer. The maximum value in 2 multiplied by 2 neighborhood points of each group of feature mapping in the C2 layer is reserved, and the sliding step length is 2;
layer C3: and (4) rolling up the layers. The layer is provided with 32 filters with the size of 3 multiplied by 3, the sliding step length is 1, the filling mode is SAME, batch standardization is carried out, and the activation function is a Relu function;
layer DP 3: dropout layer. This layer randomly sets the features obtained for a portion of the C3 layer to 0, preventing overfitting, with a parameter of 0.5;
layer C4: and (4) rolling up the layers. The layer is provided with 32 filters with the size of 1 multiplied by 1, the sliding step length is 1, the filling mode is SAME, batch standardization is carried out, and the activation function is a Relu function;
layer DP 4: dropout layer. This layer randomly sets the features obtained for a portion of the C4 layer to 0, preventing overfitting, with a parameter of 0.5;
CT1 layer: transposing the convolution layer. The layer is provided with 16 filters with the size of 3 multiplied by 3, the sliding step length is 2, the filling mode is SAME, batch standardization is carried out, and the activation function is a Relu function;
CT2 layer: transposing the convolution layer. The layer is provided with 8 filters with the size of 3 multiplied by 3, the sliding step length is 2, the filling mode is SAME, batch standardization is carried out, and the activation function is a Relu function;
CT3 layer: transposing the convolution layer. The layer has 1 filter with the size of 3 multiplied by 3, the sliding step length is 2, the filling mode is SAME, batch standardization is carried out, the activation function is Relu function, and the feature vector with the SAME size as the input is obtained.
Compared with the prior art, the invention applies iris recognition to the field of network information security: the input image passes through the eye-region extraction, iris segmentation and normalization modules, and the iris is finally encoded. Unlike traditional methods, the invention segments the iris region with a convolutional neural network and transmits it to obtain the iris 0/1 string code, so that the identity of the network chat partner can be monitored in real time. During a network chat, face information is easy to collect, the iris is invariant for life, and verifying the chat partner's identity through the iris is therefore safer and more reliable.
Drawings
FIG. 1 is the overall flow chart of the method for protecting mobile phone chat content by iris recognition according to the present invention
FIG. 2 is the flow chart of original iris feature acquisition according to the present invention
FIG. 3 is the flow chart of real-time iris feature acquisition according to the present invention
FIG. 4 is the flow chart of chat control according to the present invention
Detailed description of the invention
The present invention will be described in further detail below with reference to the accompanying drawings.
The invention comprises the following modules:
the eye-region extraction module is used for extracting the eye region from an input face image;
the iris segmentation module is used for segmenting, with a convolutional neural network, the iris region between the pupil boundary and the outer iris boundary in the eye image produced by the eye-region extraction module;
the normalization module is used for normalizing the iris region obtained by the iris segmentation module to obtain a normalized image;
and the encoding module is used for encoding the normalized image obtained by the normalization module into a 0/1 string via wavelet transform.
The iris segmentation module finally outputs a binary feature vector, which can be displayed as a picture containing only black and white, the white part marking the iris region.
As shown in fig. 1, the method of the present invention comprises the steps of:
Step 1: select a face image of the chat partner, pass it through the modules in sequence and encode the original iris image as a 0/1 string. The specific process, shown in fig. 2, is as follows. A face image of the chat partner is selected from the photo album of a mobile phone equipped with an infrared sensor and input into the eye-region extraction module, which extracts the eye region. The eye region is input into the iris segmentation module, where the iris region is segmented by the pre-trained convolutional neural network and normalized; the resulting normalized image is input into the encoding module, converted into a 0/1 string by wavelet transform and stored in the phone as the original iris feature. Extracting the eye region removes everything except the eyes, avoiding interference from non-eye regions and improving network training accuracy; normalization maps the segmented iris region onto a rectangle, which compresses the image and simplifies subsequent feature extraction and recognition. This step yields the 0/1 string of the original iris image as the reference for identity verification.
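The normalization of the segmented iris region into a rectangle can be sketched with Daugman's rubber-sheet model. This is an assumption for illustration — the patent only states that the iris region is normalized into a rectangle — and the concentric circular boundaries, nearest-neighbour sampling and 32×128 output size are all illustrative choices:

```python
import numpy as np

def rubber_sheet(gray, center, pupil_r, iris_r, out_h=32, out_w=128):
    """Unwrap the annulus between the pupil and iris boundaries into a
    fixed-size rectangle by sampling along radial rays (concentric
    circular boundaries assumed)."""
    cx, cy = center
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(pupil_r, iris_r, out_h)
    out = np.zeros((out_h, out_w), dtype=gray.dtype)
    for i, r in enumerate(radii):
        xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int), 0, gray.shape[1] - 1)
        ys = np.clip(np.round(cy + r * np.sin(thetas)).astype(int), 0, gray.shape[0] - 1)
        out[i] = gray[ys, xs]  # nearest-neighbour sampling
    return out

# synthetic grayscale stand-in for a segmented eye image
eye = np.fromfunction(lambda y, x: (x + y) % 256, (100, 100)).astype(np.uint8)
rect = rubber_sheet(eye, center=(50, 50), pupil_r=10, iris_r=30)
```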
Step 2: the infrared sensor of the partner's mobile phone captures the partner's face image in real time and passes it through the modules again to encode the iris image at each moment as a 0/1 string. As shown in fig. 3, the sensor continuously captures several face images of the partner every 3 seconds and evaluates their sharpness; when the sharpness exceeds a set threshold α, the face image is input into the eye-region extraction module to extract the eye region, the eye region is input into the iris segmentation module, the convolutional neural network segments the iris region and passes it to the normalization module to obtain a normalized image, and the normalized image is input into the encoding module and converted into a 0/1 string by wavelet transform. The sharpness threshold prevents the sensor from submitting overly blurred images, which would make recognition fail and waste time and storage.
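The sharpness check against the threshold α can be sketched as below. The patent does not name a sharpness measure, so the variance-of-Laplacian score and the value of α here are assumptions for illustration:

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian; blurred frames score low."""
    g = gray.astype(float)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return float(lap.var())

def frame_usable(gray: np.ndarray, alpha: float = 100.0) -> bool:
    # alpha is illustrative; the patent leaves the threshold value open
    return sharpness(gray) > alpha

flat = np.full((8, 8), 128, dtype=np.uint8)           # featureless (blurred) frame
checker = np.indices((8, 8)).sum(axis=0) % 2 * 255    # high-contrast frame
```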
Step 3: compare the 0/1 strings obtained in the two steps and control the chat. As shown in fig. 4, a Hamming distance threshold β is set for judging the similarity of iris features, and the Hamming distance between the original iris feature and the iris feature acquired by the phone in real time is computed. When the Hamming distance is smaller than β, the other party is considered to be the expected person, the match passes and the chat may continue; otherwise the chat is terminated.
The pre-trained convolutional neural network used above comprises convolutional layers, pooling layers and transposed convolutional layers. Feeding the grayscale eye image of the chat partner into the network yields a binary feature vector. The specific network structure is as follows:
Input layer: receives the grayscale eye image of the chat partner.
Layer C1: convolutional layer with 8 filters of size 3×3, stride 1 and SAME padding, followed by batch normalization, which speeds up learning and makes later layers more robust. The activation function is ReLU.
Layer S1: max pooling layer that keeps the maximum value in each 2×2 neighbourhood of every feature map of layer C1, with stride 2.
Layer C2: convolutional layer with 16 filters of size 3×3, stride 1, SAME padding and batch normalization; the activation function is ReLU.
Layer S2: max pooling layer that keeps the maximum value in each 2×2 neighbourhood of every feature map of layer C2, with stride 2.
Layer C3: convolutional layer with 32 filters of size 3×3, stride 1, SAME padding and batch normalization; the activation function is ReLU.
Layer DP3: dropout layer that randomly sets part of the C3 features to 0 to prevent overfitting; the dropout rate is 0.5.
Layer C4: convolutional layer with 32 filters of size 1×1, stride 1, SAME padding and batch normalization; the activation function is ReLU.
Layer DP4: dropout layer that randomly sets part of the C4 features to 0 to prevent overfitting; the dropout rate is 0.5.
Layer CT1: transposed convolutional layer with 16 filters of size 3×3, stride 2, SAME padding and batch normalization; the activation function is ReLU.
Layer CT2: transposed convolutional layer with 8 filters of size 3×3, stride 2, SAME padding and batch normalization; the activation function is ReLU.
Layer CT3: transposed convolutional layer with 1 filter of size 3×3, stride 2, SAME padding and batch normalization; the activation function is ReLU, yielding a feature vector of the same size as the input.
Claims (6)
1. A method for protecting mobile phone chat content based on iris recognition, characterized by comprising the following modules:
an eye-region extraction module for extracting the eye region from an input face image;
an iris segmentation module for segmenting, with a convolutional neural network, the iris region between the pupil boundary and the outer iris boundary in the eye image produced by the eye-region extraction module;
a normalization module for normalizing the iris region obtained by the iris segmentation module to obtain a normalized image;
an encoding module for encoding the normalized image obtained by the normalization module into a 0/1 string via wavelet transform;
the method comprises the following specific operation steps:
step 1, selecting a face image of the chat partner, passing it through the above modules in sequence, and encoding the original iris image as a 0/1 string;
step 2, acquiring the partner's face image in real time with the infrared sensor of the partner's mobile phone, and passing it through the modules again to encode the iris image at each moment as a 0/1 string;
step 3, comparing the 0/1 strings obtained in the two steps and controlling the chat accordingly.
2. The method for protecting mobile phone chat content based on iris recognition according to claim 1, wherein in step 1 encoding the original iris image as a 0/1 string comprises the following steps:
step 1.1, selecting a face image of the chat partner from the photo album of a mobile phone equipped with an infrared sensor and inputting it into the eye-region extraction module to extract the eye region;
step 1.2, inputting the eye region into the iris segmentation module and segmenting the iris region in the image with the pre-trained convolutional neural network;
step 1.3, passing the obtained iris region to the normalization module to obtain a normalized image;
step 1.4, inputting the obtained normalized image into the encoding module, converting it into a 0/1 string by wavelet transform, and storing the string in the phone as the original iris feature.
3. The method for protecting mobile phone chat content based on iris recognition according to claim 1, wherein in step 2 encoding the iris image acquired by the phone at each moment as a 0/1 string comprises the following steps:
step 2.1, the infrared sensor of the partner's mobile phone continuously captures several face images of the partner every 3 seconds and evaluates their sharpness; when the sharpness exceeds a set threshold α, the face image is input into the eye-region extraction module to extract the eye region;
step 2.2, inputting the eye region into the iris segmentation module, where the convolutional neural network segments the iris region in the image;
step 2.3, passing the obtained iris region to the normalization module to obtain a normalized image;
step 2.4, inputting the obtained normalized image into the encoding module and converting it into a 0/1 string by wavelet transform.
4. The method for protecting mobile phone chat content based on iris recognition according to claim 1, wherein in step 3 controlling the chat comprises the following steps:
step 3.1, setting a Hamming distance threshold β for judging the similarity of iris features;
step 3.2, computing the Hamming distance between the original iris feature and the iris feature acquired by the phone in real time;
step 3.3, when the Hamming distance is smaller than the threshold β, the match passes and the chat may continue; otherwise the chat is terminated.
5. The method for protecting mobile phone chat content based on iris recognition according to claim 3 or 4, wherein the pre-trained convolutional neural network of step 1.2 or 2.2 comprises convolutional layers, pooling layers and transposed convolutional layers; inputting the grayscale eye image of the chat partner into the network yields a binary feature vector, and the specific network structure is as follows:
an input layer: receiving the grayscale eye image of the chat partner;
layer C1: a convolutional layer with 8 filters of size 3×3, stride 1, SAME padding and batch normalization, which speeds up learning and makes later layers more robust; the activation function is ReLU;
layer S1: a max pooling layer keeping the maximum value in each 2×2 neighbourhood of every feature map of layer C1, with stride 2;
layer C2: a convolutional layer with 16 filters of size 3×3, stride 1, SAME padding and batch normalization; the activation function is ReLU;
layer S2: a max pooling layer keeping the maximum value in each 2×2 neighbourhood of every feature map of layer C2, with stride 2;
layer C3: a convolutional layer with 32 filters of size 3×3, stride 1, SAME padding and batch normalization; the activation function is ReLU;
layer DP3: a dropout layer randomly setting part of the C3 features to 0 to prevent overfitting, with dropout rate 0.5;
layer C4: a convolutional layer with 32 filters of size 1×1, stride 1, SAME padding and batch normalization; the activation function is ReLU;
layer DP4: a dropout layer randomly setting part of the C4 features to 0 to prevent overfitting, with dropout rate 0.5;
layer CT1: a transposed convolutional layer with 16 filters of size 3×3, stride 2, SAME padding and batch normalization; the activation function is ReLU;
layer CT2: a transposed convolutional layer with 8 filters of size 3×3, stride 2, SAME padding and batch normalization; the activation function is ReLU;
layer CT3: a transposed convolutional layer with 1 filter of size 3×3, stride 2, SAME padding and batch normalization; the activation function is ReLU, yielding a feature vector of the same size as the input.
6. The method for protecting mobile phone chat content based on iris recognition according to claim 1, wherein the iris segmentation module finally outputs a binary feature vector, which can be displayed as a picture containing only black and white, the white part marking the iris region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911141090.5A CN110866507A (en) | 2019-11-20 | 2019-11-20 | Method for protecting mobile phone chatting content based on iris recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911141090.5A CN110866507A (en) | 2019-11-20 | 2019-11-20 | Method for protecting mobile phone chatting content based on iris recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110866507A true CN110866507A (en) | 2020-03-06 |
Family
ID=69654967
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911141090.5A Pending CN110866507A (en) | 2019-11-20 | 2019-11-20 | Method for protecting mobile phone chatting content based on iris recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110866507A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1760887A (en) * | 2004-10-11 | 2006-04-19 | 中国科学院自动化研究所 | Robust feature extraction and recognition method for iris images
CN101325491A (en) * | 2008-07-28 | 2008-12-17 | 北京中星微电子有限公司 | Method and system for controlling user interface of instant communication software |
CN106709431A (en) * | 2016-12-02 | 2017-05-24 | 厦门中控生物识别信息技术有限公司 | Iris recognition method and device |
CN106778664A (en) * | 2016-12-29 | 2017-05-31 | 天津中科智能识别产业技术研究院有限公司 | Method and device for segmenting the iris region in an iris image
CN107506754A (en) * | 2017-09-19 | 2017-12-22 | 厦门中控智慧信息技术有限公司 | Iris identification method, device and terminal device |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111507195A (en) * | 2020-03-20 | 2020-08-07 | 北京万里红科技股份有限公司 | Iris segmentation neural network model training method, iris segmentation method and device |
CN111507195B (en) * | 2020-03-20 | 2023-10-03 | 北京万里红科技有限公司 | Iris segmentation neural network model training method, iris segmentation method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Daouk et al. | Iris recognition | |
CN113591747B (en) | Multi-scene iris recognition method based on deep learning | |
Franzgrote et al. | Palmprint verification on mobile phones using accelerated competitive code | |
RU2628201C1 (en) | Method of adaptive quantization for encoding iris image | |
CN101425134A (en) | On-line hand back vein identification method | |
WO2023142453A1 (en) | Biometric identification method, server, and client | |
CN105740675A (en) | Method and system for identifying and triggering authorization management on the basis of dynamic figure | |
Ayoup et al. | Cancellable Multi-Biometric Template Generation Based on Arnold Cat Map and Aliasing. | |
CN110866507A (en) | Method for protecting mobile phone chatting content based on iris recognition | |
Pal et al. | Implementation of hand vein structure authentication based system | |
CN114596639A (en) | Biological feature recognition method and device, electronic equipment and storage medium | |
CN104615992A (en) | Long-distance fingerprint dynamic authentication method | |
CN212160688U (en) | Face recognition integrated front-end device with biological feature privacy protection function | |
Choudhary et al. | Multimodal biometric-based authentication with secured templates | |
CN110852239B (en) | Face recognition system | |
CN110781795A (en) | Method for protecting network chat content based on face recognition | |
KR100596498B1 (en) | Online face recognition system based on multi frame | |
Al-Saidi et al. | Iris features via fractal functions for authentication protocols | |
CN113190858B (en) | Image processing method, system, medium and device based on privacy protection | |
JP5279007B2 (en) | Verification system, verification method, program, and recording medium | |
CN114882582A (en) | Gait recognition model training method and system based on federal learning mode | |
Yang | Fingerprint recognition system based on big data and multi-feature fusion | |
Javadtalab et al. | Transparent non-intrusive multimodal biometric system for video conference using the fusion of face and ear recognition | |
Baek et al. | Fake Fingerprint Detection Biometric System Using Neural Network Algorithm | |
Patel | A study on fingerprint (biometrics) recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||