CN115512470A - Security protection equipment based on thing networking - Google Patents

Security protection equipment based on thing networking

Info

Publication number
CN115512470A
Authority
CN
China
Prior art keywords
training
feature map
fingerprint image
fingerprint
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202211226949.4A
Other languages
Chinese (zh)
Inventor
马荣飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taizhou Vocational College of Science and Technology
Original Assignee
Taizhou Vocational College of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taizhou Vocational College of Science and Technology filed Critical Taizhou Vocational College of Science and Technology
Priority to CN202211226949.4A
Publication of CN115512470A
Legal status: Withdrawn (current)

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit
    • G07C 9/00174 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C 9/00563 Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. finger prints, retinal images, voice patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12 Fingerprints or palmprints
    • G06V 40/1347 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12 Fingerprints or palmprints
    • G06V 40/1365 Matching; Classification
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit
    • G07C 9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32 Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/37 Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y 20/00 Information sensed or collected by the things
    • G16Y 20/40 Information sensed or collected by the things relating to personal data, e.g. biometric data, records or preferences
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y 30/00 IoT infrastructure
    • G16Y 30/10 Security thereof
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y 40/00 IoT characterised by the purpose of the information processing
    • G16Y 40/30 Control
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y 40/00 IoT characterised by the purpose of the information processing
    • G16Y 40/50 Safety; Security of things, users, data or systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application relates to the field of intelligent control, and specifically discloses a security device based on the Internet of Things. The device defogs the acquired fingerprint image, uses a feature extractor based on a deep convolutional neural network to extract high-dimensional image features of the acquired fingerprint image and the enrolled fingerprint image, and compares their similarity in a high-dimensional feature space, thereby improving matching accuracy while maintaining security.

Description

Security protection equipment based on thing networking
Technical Field
The application relates to the field of intelligent control, and more specifically to a security device based on the Internet of Things.
Background
The fingerprint lock is a common security device that integrates Internet of Things technology, computer information technology, electronic technology, mechanical technology, hardware technology and the like. Today, fingerprint locks are widely used in various fields: in households they avoid the trouble caused by lost keys, and in companies they control the opening and closing of shared doors.
However, in practical use of a fingerprint lock, the enrolled fingerprint is captured under standard conditions, whereas when the user actually swipes a finger, water stains may be present on the finger surface and the finger may not be pressed in a standard posture. The fingerprint data acquired by the lock can then differ too much from the enrolled fingerprint, the door cannot be opened, and the user is inconvenienced.
Therefore, an optimized security device based on the internet of things is expected.
Disclosure of Invention
The present application is proposed to solve the above technical problems. The embodiments of the application provide a security device based on the Internet of Things that defogs the acquired fingerprint image, uses a feature extractor based on a deep convolutional neural network to extract high-dimensional image features of the acquired fingerprint image and the enrolled fingerprint image, and compares their similarity in a high-dimensional feature space, thereby improving matching accuracy while taking security into account.
According to an aspect of the application, a security protection device based on the internet of things is provided, which comprises:
the fingerprint acquisition unit is used for acquiring a reference fingerprint image input by a user from a database and acquiring a user pressing fingerprint image acquired by a camera arranged in the fingerprint lock;
the defogging unit is used for enabling the user pressing fingerprint image to pass through a defogging generator based on a countermeasure generation network so as to obtain a generated user pressing fingerprint image;
a fingerprint feature extraction unit, configured to pass the reference fingerprint image and the generated user press fingerprint image through a twin network model including a first convolutional neural network and a second convolutional neural network to obtain a reference feature map and a verification feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure;
the difference unit is used for calculating a difference characteristic diagram between the reference characteristic diagram and the verification characteristic diagram;
the judging unit is used for enabling the differential feature map to pass through a classifier to obtain a classification result, and the classification result is used for indicating whether the fingerprint image pressed by the user is matched with the reference fingerprint or not; and
and the control result generation unit is used for responding to the classification result that the fingerprint image pressed by the user is matched with the reference fingerprint and generating an unlocking control instruction.
In the above security protection equipment based on the internet of things, the fingerprint feature extraction unit includes: a detection fingerprint feature extraction subunit, configured to perform depth convolutional encoding on the generated user pressed fingerprint image using the multiple convolutional layers of the first convolutional neural network to output a depth verification feature map from the last layer of the multiple convolutional layers; a first spatial attention subunit, configured to input the depth verification feature map into a first spatial attention module of the first convolutional neural network to obtain a first spatial attention map; and an attention applying subunit, configured to calculate the position-wise multiplication of the depth verification feature map and the first spatial attention map to obtain the verification feature map.
In the above security protection equipment based on the internet of things, the fingerprint feature extraction unit includes: a reference fingerprint feature extraction subunit, configured to perform depth convolutional encoding on the reference fingerprint image using the multiple convolutional layers of the second convolutional neural network to output a depth reference feature map from the last layer of the multiple convolutional layers; a second spatial attention subunit, configured to input the depth reference feature map into a second spatial attention module of the second convolutional neural network to obtain a second spatial attention map; and an attention applying subunit, configured to calculate the position-wise multiplication of the depth reference feature map and the second spatial attention map to obtain the reference feature map.
In the above security protection equipment based on the internet of things, the difference unit is further configured to calculate the difference feature map between the reference feature map and the verification feature map according to the following formula:

$F_c = F_1 \ominus F_2$

where $F_1$ denotes the reference feature map, $F_2$ denotes the verification feature map, $F_c$ denotes the difference feature map, and $\ominus$ denotes position-wise subtraction.
In the above security protection equipment based on the internet of things, the judging unit is configured to process the difference feature map to generate the classification result according to the following formula:

$\mathrm{softmax}\{(W_n, B_n) : \cdots : (W_1, B_1) \mid \mathrm{Project}(F)\}$

where $\mathrm{Project}(F)$ denotes projecting the difference feature map into a vector, $W_1$ to $W_n$ are the weight matrices of the fully connected layers of each layer, and $B_1$ to $B_n$ are the bias matrices of the fully connected layers of each layer.
The security protection equipment based on the internet of things further comprises a training module for training the defogging generator based on the countermeasure generation network, the twin network model and the classifier.
In the above security protection equipment based on the internet of things, the training module includes: the training data acquisition unit is used for acquiring training data, wherein the training data comprises a training reference fingerprint image, a training user pressing fingerprint image acquired by a camera deployed in the fingerprint lock and a true value of whether the training user pressing fingerprint image is matched with the training reference fingerprint; the training defogging unit is used for enabling the training user pressing fingerprint image to pass through the defogging generator based on the countermeasure generation network so as to obtain a training generation user pressing fingerprint image; the training fingerprint feature extraction unit is used for enabling the training reference fingerprint image and the training generation user pressing fingerprint image to pass through the twin network model comprising the first convolutional neural network and the second convolutional neural network so as to obtain a training reference feature map and a training verification feature map; the training difference unit is used for calculating a training difference feature map between the training reference feature map and the training verification feature map; the training judgment unit is used for enabling the training difference characteristic diagram to pass through a classifier to obtain a classification loss function value; a feature extraction pattern solution loss unit, configured to calculate an inhibition loss function value of a feature extraction pattern solution of the training reference feature map and the training verification feature map, where the inhibition loss function value of the feature extraction pattern solution is related to a square of a two-norm of a difference feature vector between a first feature vector expanded from the training reference feature map and a second feature vector expanded from the training verification feature map; and a training unit for training the countermeasure generation network-based defogger, the twin network model, and the classifier with a weighted sum between the suppression loss function values and the classification loss function values resolved by the feature extraction mode as a classification loss function value.
According to another aspect of the application, a security method based on the internet of things is provided, which includes:
acquiring a reference fingerprint image input by a user from a database and acquiring a user pressing fingerprint image acquired by a camera deployed in a fingerprint lock;
passing the user press fingerprint image through a defogger generator based on a countermeasure generation network to obtain a generated user press fingerprint image;
passing the reference fingerprint image and the generated user press fingerprint image through a twin network model comprising a first convolutional neural network and a second convolutional neural network to obtain a reference feature map and a check feature map, wherein the first convolutional neural network and the second convolutional neural network have the same network structure;
calculating a difference feature map between the reference feature map and the verification feature map;
the differential feature map is processed by a classifier to obtain a classification result, and the classification result is used for indicating whether the fingerprint image pressed by the user is matched with the reference fingerprint or not; and
and responding to the classification result that the fingerprint image pressed by the user is matched with the reference fingerprint, and generating an unlocking control instruction.
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the internet of things based security method as described above.
According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the internet of things based security method as described above.
Compared with the prior art, the security device based on the Internet of Things defogs the collected fingerprint image, extracts high-dimensional image features of the collected fingerprint image and the enrolled fingerprint image with a feature extractor based on a deep convolutional neural network, and compares their similarity in a high-dimensional feature space, thereby improving matching accuracy and generating the corresponding control instruction.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally indicate like parts or steps.
Fig. 1 illustrates an application scenario diagram of a security device based on the internet of things according to an embodiment of the application;
FIG. 2 illustrates a block diagram of an Internet of things based security device in accordance with an embodiment of the present application;
FIG. 3 illustrates a block diagram of an Internet of things based security device in accordance with an embodiment of the present application;
FIG. 4 illustrates a system architecture diagram of an inference module in an Internet of things based security device according to an embodiment of the application;
fig. 5 illustrates a block diagram of a fingerprint feature extraction unit in an internet of things-based security device according to an embodiment of the present application;
fig. 6 illustrates a block diagram of a fingerprint feature extraction unit in an internet of things based security device according to an embodiment of the present application;
FIG. 7 illustrates a system architecture diagram of a training module in an Internet of things based security device according to an embodiment of the present application;
FIG. 8 illustrates a flow chart of a method for Internet of things based security according to an embodiment of the application;
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments of the present application, and it should be understood that the present application is not limited to the example embodiments described herein.
Overview of scenes
In order to solve the above technical problems, the applicant found that when a user swipes a fingerprint, water stains on the finger surface can make the acquired image fogged and blurred, so that matching fails; and when the user does not swipe in a standard posture, the captured fingerprint is only part of the enrolled fingerprint and is rotated, which can also cause matching to fail. For the first failure case, defogging the collected fingerprint image can improve matching accuracy; for the second, a feature extractor based on a deep convolutional neural network can extract high-dimensional image features of the collected fingerprint image and the enrolled fingerprint image, and their similarity can be compared in a high-dimensional feature space, which further improves matching accuracy.
Specifically, in the technical scheme of the application, a reference fingerprint entered by the user is first obtained from a database, and a user pressed fingerprint image collected by a camera deployed in the fingerprint lock is acquired. The user pressed fingerprint image is then passed through a defogging generator based on a countermeasure generation network (that is, a generative adversarial network) to obtain a generated user pressed fingerprint image. As described above, when the user swipes a fingerprint with water stains on the finger surface, the collected image may be fogged and blurred; accordingly, in the technical solution of the application, the defogging generator performs countermeasure generation on the user pressed fingerprint image to obtain the generated user pressed fingerprint image. Specifically, the defogging generator based on the countermeasure generation network comprises a generator and a discriminator: the generator generates the defogged image, the discriminator computes a discriminator loss function value between the defogged image and a real image, and the neural network parameters of the generator are updated through the BP (back-propagation) algorithm with the discriminator loss function value as the loss value.
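To make the generator/discriminator training described above concrete, a minimal sketch is given below. The patent does not disclose layer configurations, loss functions, or a framework, so everything in this sketch (the use of PyTorch, the layer stack, the BCE adversarial loss) is an assumption used only for illustration.

```python
import torch
import torch.nn as nn

class DefogGenerator(nn.Module):
    """Assumed encoder-decoder style generator: hazy fingerprint image in, defogged image out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class DefogDiscriminator(nn.Module):
    """Assumed discriminator: scores whether an image looks like a real, clear fingerprint."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )

    def forward(self, x):
        return self.net(x)

def adversarial_step(gen, disc, opt_g, opt_d, hazy, clear):
    """One adversarial update: the discriminator loss drives both networks via back-propagation."""
    bce = nn.BCEWithLogitsLoss()
    # Update the discriminator on real clear images vs. generated (defogged) images.
    fake = gen(hazy).detach()
    real_logits, fake_logits = disc(clear), disc(fake)
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Update the generator so its defogged output is judged "real" (BP update of generator parameters).
    gen_logits = disc(gen(hazy))
    g_loss = bce(gen_logits, torch.ones_like(gen_logits))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```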
After the generated user pressed fingerprint image is obtained, the reference fingerprint image and the generated user pressed fingerprint image are passed through a twin network model comprising a first convolutional neural network and a second convolutional neural network to obtain a reference feature map and a verification feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure. That is, the reference fingerprint image and the defogged generated user pressed fingerprint image are passed through a twin network model with a symmetrical network structure, in which the first convolutional neural network and the second convolutional neural network perform explicit spatial coding on the two images to extract their high-dimensional local implicit features, yielding the reference feature map and the verification feature map.
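A minimal sketch of such a twin network model follows; the branch depth and channel widths are assumptions, since the patent only requires that the two branches share the same network structure (spatial attention is added separately in a later sketch).

```python
import torch.nn as nn

def make_branch():
    # One convolutional branch; both branches of the twin model use this same structure.
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    )

class TwinFingerprintEncoder(nn.Module):
    """First CNN encodes the generated (defogged) user pressed fingerprint image,
    second CNN encodes the reference fingerprint image; both structures are identical."""
    def __init__(self):
        super().__init__()
        self.first_cnn = make_branch()
        self.second_cnn = make_branch()

    def forward(self, generated_pressed, reference):
        verification_map = self.first_cnn(generated_pressed)
        reference_map = self.second_cnn(reference)
        return reference_map, verification_map
```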
In the technical scheme of the application, each pixel in the reference fingerprint image and the generated user pressed fingerprint image contributes differently to the subsequent matching judgment, and this can be captured with a spatial attention mechanism. That is, preferably, a spatial attention mechanism is integrated into the first convolutional neural network and the second convolutional neural network.
After the reference feature map and the verification feature map are obtained, a difference feature map between them is calculated, that is, the difference of their feature distributions in the high-dimensional feature space of the reference fingerprint image and the generated user pressed fingerprint image. In a specific example, the position-wise difference between the reference feature map and the verification feature map is calculated to obtain the difference feature map. The difference feature map is then passed through a classifier to obtain a classification result indicating whether the user pressed fingerprint image matches the reference fingerprint; when the classification result is that they match, an unlocking control instruction is generated so that the door control is unlocked by the fingerprint.
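A short sketch of this difference-and-decide step, with the convention (assumed here, not stated in the patent) that class index 1 means "matched":

```python
import torch

def match_decision(reference_map, verification_map, classifier):
    """Position-wise difference in the high-dimensional feature space, then classification."""
    diff_map = reference_map - verification_map         # difference feature map, computed per position
    probs = torch.softmax(classifier(diff_map), dim=1)  # classifier outputs [P(no match), P(match)]
    matched = probs.argmax(dim=1).item() == 1
    return "UNLOCK" if matched else "DENY"               # unlocking control instruction on a match
```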
Particularly, in the technical solution of the application, since the classification feature map is obtained by calculating the difference feature map between the reference feature map and the verification feature map, the classification loss function of the classifier propagates back through both the first convolutional neural network and the second convolutional neural network during training. Abnormal gradient divergence may therefore cause the feature extraction patterns of the first convolutional neural network and the second convolutional neural network to be resolved (drift apart), which affects the accuracy of the classification result of the difference feature map.
Therefore, preferably, a suppression loss function for feature extraction pattern resolution of the reference feature map and the verification feature map is introduced. It is built from the Frobenius-norm difference of the classifier weight matrices, $\|M_1 - M_2\|_F$, and the squared two-norm of the difference of the unrolled feature vectors, $\|V_1 - V_2\|_2^2$, combined in a cross-entropy fashion, where $V_1$ and $V_2$ are the feature vectors obtained by unrolling the reference feature map and the verification feature map respectively, $M_1$ and $M_2$ are the weight matrices of the classifier with respect to the feature vectors $V_1$ and $V_2$ respectively, and $\|\cdot\|_F$ denotes the Frobenius norm of a matrix.
Specifically, the suppression loss function for feature extraction pattern resolution ensures, in a cross-entropy manner, that the difference distribution of the classifier weight matrices for the different feature vectors is consistent with the real feature difference distribution of the feature vectors. This regularizes the directional derivative near the branch point of gradient propagation during back-propagation, i.e., the gradient re-weights the feature extraction patterns of the first convolutional neural network and the second convolutional neural network, so that the resolution of the feature extraction patterns is suppressed, the feature expression capability of the reference feature map and the verification feature map is improved, and the accuracy of the classification result of the difference feature map is improved accordingly. In this way, the accuracy of the fingerprint matching judgment is improved, and both security and sensitivity are taken into account.
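The exact closed form of this suppression loss is not fully recoverable from the published text; the sketch below is therefore only one possible reading, combining the Frobenius-norm difference of two classifier weight matrices with the squared two-norm of the unrolled feature difference in a cross-entropy-style consistency term. The function name and the sigmoid normalization are assumptions introduced here for illustration.

```python
import torch

def suppression_loss(ref_map, ver_map, m1, m2, eps=1e-8):
    """One possible reading of the feature-extraction-pattern-resolution suppression loss."""
    v1 = ref_map.flatten(start_dim=1)                    # feature vector V1 (unrolled reference map)
    v2 = ver_map.flatten(start_dim=1)                    # feature vector V2 (unrolled verification map)
    feat_diff = (v1 - v2).pow(2).sum(dim=1).mean()       # squared two-norm of the difference vector
    weight_diff = torch.linalg.norm(m1 - m2, ord="fro")  # Frobenius norm of the weight-matrix difference
    # Cross-entropy-style consistency between the two (sigmoid-normalized) difference quantities.
    p = torch.sigmoid(weight_diff)
    q = torch.sigmoid(feat_diff)
    return -(p * torch.log(q + eps) + (1 - p) * torch.log(1 - q + eps))
```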
Based on this, this application provides a security protection equipment based on thing networking, it includes: the fingerprint acquisition unit is used for acquiring a reference fingerprint image input by a user from a database and acquiring a user pressing fingerprint image acquired by a camera arranged in the fingerprint lock; the defogging unit is used for enabling the user pressing fingerprint image to pass through a defogging generator based on a countermeasure generation network so as to obtain a generated user pressing fingerprint image; a fingerprint feature extraction unit, configured to pass the reference fingerprint image and the generated user press fingerprint image through a twin network model including a first convolutional neural network and a second convolutional neural network to obtain a reference feature map and a check feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure; a difference unit, configured to calculate a difference feature map between the reference feature map and the verification feature map; the judging unit is used for enabling the differential feature map to pass through a classifier to obtain a classification result, and the classification result is used for indicating whether the fingerprint image pressed by the user is matched with the reference fingerprint or not; and the control result generation unit is used for responding to the classification result that the fingerprint image pressed by the user is matched with the reference fingerprint and generating an unlocking control instruction.
Fig. 1 illustrates an application scenario diagram of a security device based on the internet of things according to an embodiment of the application. As shown in fig. 1, in the application scenario, a pressing fingerprint image (e.g., F1 as illustrated in fig. 1) of a user is acquired through a camera (e.g., C as illustrated in fig. 1) disposed in a fingerprint lock, and a reference fingerprint image (e.g., F2 as illustrated in fig. 1) entered by the user is acquired from a database. Then, the images are input into a server (for example, S in fig. 1) deployed with a security algorithm based on the internet of things, wherein the server can process the images by the security algorithm based on the internet of things to generate a classification result indicating whether the fingerprint image pressed by the user matches the reference fingerprint, and then generate a corresponding control instruction.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
Fig. 2 illustrates a block diagram of an internet of things based security device according to an embodiment of the application. As shown in fig. 2, the internet of things-based security device 300 according to the embodiment of the present application includes an inference module, wherein the inference module includes: a fingerprint acquisition unit 310; a defogging unit 320; a fingerprint feature extraction unit 330; a difference unit 340; a judgment unit 350; and a control result generation unit 360.
The fingerprint acquisition unit 310 is configured to acquire a reference fingerprint image input by a user from a database and acquire a user pressing fingerprint image acquired by a camera deployed in a fingerprint lock; the defogging unit 320 is used for enabling the user pressing fingerprint image to pass through a defogging generator based on a countermeasure generation network so as to obtain a generated user pressing fingerprint image; the fingerprint feature extraction unit 330 is configured to pass the reference fingerprint image and the generated user press fingerprint image through a twin network model including a first convolutional neural network and a second convolutional neural network to obtain a reference feature map and a verification feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure; the difference unit 340 is configured to calculate a difference feature map between the reference feature map and the verification feature map; the judging unit 350 is configured to pass the difference feature map through a classifier to obtain a classification result, where the classification result is used to indicate whether the user pressed fingerprint image matches a reference fingerprint; and the control result generating unit 360 is configured to generate an unlocking control instruction in response to the classification result indicating that the user pressed fingerprint image matches the reference fingerprint.
Fig. 4 illustrates a system architecture diagram of an inference module in an internet of things based security device according to an embodiment of the application. As shown in fig. 4, in the system architecture of the security device 300 based on the internet of things, in the inference process, first, the fingerprint acquisition unit 310 acquires a reference fingerprint image entered by a user from a database and acquires a user pressing fingerprint image acquired by a camera deployed in a fingerprint lock; the defogging unit 320 enables the user pressing fingerprint image acquired by the fingerprint acquisition unit 310 to pass through a defogging generator based on a countermeasure generation network so as to obtain a generated user pressing fingerprint image; then, the fingerprint feature extraction unit 330 passes the reference fingerprint image obtained by the fingerprint acquisition unit 310 and the generated user pressing fingerprint image obtained by the defogging unit 320 through a twin network model including a first convolutional neural network and a second convolutional neural network to obtain a reference feature map and a verification feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure; the difference unit 340 calculates a difference feature map between the reference feature map generated by the fingerprint feature extraction unit 330 and the verification feature map; then, the judging unit 350 passes the differential feature map calculated by the differentiating unit 340 through a classifier to obtain a classification result, where the classification result is used to indicate whether the user pressed fingerprint image matches the reference fingerprint; further, the control result generating unit 360 generates an unlocking control instruction in response to the classification result being that the user pressed fingerprint image matches the reference fingerprint.
Specifically, in the operation process of the security device 300 based on the internet of things, the fingerprint collection unit 310 and the defogging unit 320 are configured to obtain a reference fingerprint image entered by a user from a database, obtain a user pressing fingerprint image collected by a camera deployed in a fingerprint lock, and generate the user pressing fingerprint image by passing the user pressing fingerprint image through a defogging generator based on a countermeasure generation network. It should be understood that when the user swipes the fingerprint, if water stains exist on the surface of the user fingerprint, the collected image can be fogged and blurred, and accordingly, in the technical scheme of the application, the user pressing fingerprint image is subjected to countermeasure generation by using a defogging generator based on a countermeasure generation network so as to obtain the generated user pressing fingerprint image. Specifically, the anti-generation-network-based defogging generator comprises a generator and a discriminator, wherein the generator is used for generating a defogging image, and the discriminator is used for calculating a discriminator loss function value between the defogging image and a real image, and updating the neural network parameters of the generator through a BP algorithm on the basis of the discriminator loss function value serving as the loss function value.
Specifically, in the operation process of the internet of things-based security device 300, the fingerprint feature extraction unit 330 is configured to pass the reference fingerprint image and the generated user pressed fingerprint image through a twin network model including a first convolutional neural network and a second convolutional neural network to obtain a reference feature map and a verification feature map, where the first convolutional neural network and the second convolutional neural network have the same network structure. After the generated user pressing fingerprint image is obtained, the reference fingerprint image and the generated user pressing fingerprint image are passed through a twin network model comprising a first convolutional neural network and a second convolutional neural network to obtain a reference characteristic diagram and a check characteristic diagram, wherein the first convolutional neural network and the second convolutional neural network have the same network structure. That is, the reference fingerprint image and the generated user pressed fingerprint image after the defogging processing are passed through a twin network model with a symmetrical network structure, wherein a first convolutional neural network and a second convolutional neural network of the twin network model are used for performing explicit spatial coding on the reference fingerprint image and the generated user pressed fingerprint image to extract high-dimensional local implicit features in the reference fingerprint image and the generated user pressed fingerprint image so as to obtain the reference feature map and the verification feature map. In the technical scheme of the application, the importance of each pixel in the reference fingerprint image and the generated user pressed fingerprint image to the subsequent matching judgment is different, and the characteristic can be focused through a spatial attention mechanism. That is, preferably, in the technical solution of the present application, a spatial attention mechanism is integrated in the first convolutional neural network and the second convolutional neural network.
Fig. 5 illustrates a block diagram of a fingerprint feature extraction unit in an internet of things based security device according to an embodiment of the present application. As shown in fig. 5, the fingerprint feature extraction unit 330 includes: a detected fingerprint feature extraction subunit 331, configured to perform depth convolution encoding on the generated user-pressed fingerprint image using the multiple convolutional layers of the first convolutional neural network to output a depth check feature map from a last layer of the multiple convolutional layers; a first spatial attention subunit 332, configured to input the depth check feature map into a first spatial attention module of the first convolutional neural network to obtain a first spatial attention map; and an attention applying subunit 333, configured to calculate a point-by-point multiplication of the depth verification feature map and the first spatial attention map to obtain the verification feature map.
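A compact sketch of the spatial attention step used by these subunits is given below; the pooling-plus-convolution construction of the attention map is an assumption, since the text only states that an attention map is produced and multiplied with the depth feature map point by point.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Produce a single-channel spatial attention map and apply it position by position."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)  # assumed 7x7 convolution

    def forward(self, feat):
        avg_pool = feat.mean(dim=1, keepdim=True)        # channel-wise average pooling
        max_pool = feat.max(dim=1, keepdim=True).values  # channel-wise max pooling
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return feat * attn                               # point-by-point multiplication with the feature map
```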
Fig. 6 illustrates a block diagram of a fingerprint feature extraction unit in a security device based on the internet of things according to an embodiment of the present application. As shown in fig. 6, the fingerprint feature extraction unit 330 includes: a reference fingerprint feature extraction subunit 3311, configured to perform depth convolutional encoding on the reference fingerprint image using the multiple convolutional layers of the second convolutional neural network to output a depth reference feature map from the last layer of the multiple convolutional layers; a second spatial attention subunit 3321, configured to input the depth reference feature map into a second spatial attention module of the second convolutional neural network to obtain a second spatial attention map; and an attention applying subunit 3331, configured to calculate the position-wise multiplication of the depth reference feature map and the second spatial attention map to obtain the reference feature map.
Specifically, in the operation process of the security device 300 based on the internet of things, the difference unit 340 is configured to calculate the difference feature map between the reference feature map and the verification feature map, that is, the difference of the feature distributions of the reference fingerprint image and the generated user pressed fingerprint image in the high-dimensional feature space. In a specific example, the position-wise difference between the reference feature map and the verification feature map is calculated to obtain the difference feature map according to the following formula:

$F_c = F_1 \ominus F_2$

where $F_1$ denotes the reference feature map, $F_2$ denotes the verification feature map, $F_c$ denotes the difference feature map, and $\ominus$ denotes position-wise subtraction.
Specifically, in the operation process of the security device 300 based on the internet of things, the judging unit 350 and the control result generating unit 360 are configured to pass the difference feature map through a classifier to obtain a classification result indicating whether the user pressed fingerprint image matches the reference fingerprint, and to generate an unlocking control instruction in response to the classification result being a match, so that the door control is unlocked by the fingerprint. In a specific example of the present application, the judging unit processes the difference feature map to generate the classification result according to the following formula:

$\mathrm{softmax}\{(W_n, B_n) : \cdots : (W_1, B_1) \mid \mathrm{Project}(F)\}$

where $\mathrm{Project}(F)$ denotes projecting the difference feature map into a vector, $W_1$ to $W_n$ are the weight matrices of the fully connected layers of each layer, and $B_1$ to $B_n$ are the bias matrices of the fully connected layers of each layer.
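Read as code, the formula amounts to flattening the difference feature map (Project(F)) and pushing it through a stack of fully connected layers, each holding one (W_i, B_i) pair, followed by a softmax. A minimal sketch, with layer widths chosen arbitrarily for illustration:

```python
import torch
import torch.nn as nn

def classify(diff_map, fc_layers):
    """Project(F) followed by the fully connected stack and a softmax over the two classes."""
    x = diff_map.flatten(start_dim=1)              # Project(F): difference feature map -> vector
    for fc in fc_layers[:-1]:
        x = torch.relu(fc(x))                      # hidden fully connected layers (W_i, B_i)
    return torch.softmax(fc_layers[-1](x), dim=1)  # last layer, then softmax

# Assumed widths; the true input size depends on the feature map shape produced by the twin network.
fc_stack = nn.ModuleList([nn.Linear(64 * 28 * 28, 256), nn.Linear(256, 64), nn.Linear(64, 2)])
```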
It will be appreciated that the twin network model, the defogger based on the antagonistic generation network and the classifier need to be trained prior to making inferences using the neural network model described above. That is to say, in the security equipment based on the internet of things of the application, the security equipment further comprises a training module for training the defogging generator based on the confrontation generation network, the twin network model and the classifier.
Fig. 3 illustrates a block diagram of an internet of things based security device according to an embodiment of the application. As shown in fig. 3, the security device 300 based on the internet of things according to the embodiment of the present application further includes a training module 400, the training module includes: a training data acquisition unit 410; training the defogging unit 420; a training fingerprint feature extraction unit 430; a training difference unit 440; a training judgment unit 450; a feature extraction mode resolution loss unit 460; and a training unit 470.
The training data acquisition unit 410 is configured to acquire training data, where the training data includes a training reference fingerprint image, an acquired training user pressing fingerprint image acquired by a camera deployed in the fingerprint lock, and a true value of whether the training user pressing fingerprint image matches the training reference fingerprint; the training defogging unit 420 is used for enabling the training user pressing fingerprint image to pass through the defogging generator based on the countermeasure generation network to obtain a training generation user pressing fingerprint image; the training fingerprint feature extraction unit 430 is configured to pass the training reference fingerprint image and the training generation user pressing fingerprint image through the twin network model including the first convolutional neural network and the second convolutional neural network to obtain a training reference feature map and a training verification feature map; the training difference unit 440 is configured to calculate a training difference feature map between the training reference feature map and the training verification feature map; the training judgment unit 450 is configured to pass the training difference feature map through a classifier to obtain a classification loss function value; the feature extraction pattern solution loss unit 460 is configured to calculate a suppression loss function value of a feature extraction pattern solution of the training reference feature map and the training verification feature map, where the suppression loss function value of the feature extraction pattern solution is related to a square of a two-norm of a difference feature vector between a first feature vector expanded from the training reference feature map and a second feature vector expanded from the training verification feature map; and the training unit 470, configured to train the countermeasure generation network-based defogger, the twin network model, and the classifier with a weighted sum between the suppression loss function values and the classification loss function values of the feature extraction pattern solution as a classification loss function value.
Fig. 7 illustrates a system architecture diagram of a training module in an internet of things based security device according to an embodiment of the application. As shown in fig. 7, in the system architecture of the security device 300 based on the internet of things, in the training process, training data is first acquired through the training data acquisition unit 410, the training data comprising a training reference fingerprint image, a training user pressed fingerprint image acquired by a camera deployed in the fingerprint lock, and a true value of whether the training user pressed fingerprint image matches the training reference fingerprint. The training defogging unit 420 passes the training user pressed fingerprint image acquired by the training data acquisition unit 410 through the defogging generator based on the countermeasure generation network to obtain a training generated user pressed fingerprint image. Then, the training fingerprint feature extraction unit 430 passes the training reference fingerprint image obtained by the training data acquisition unit 410 and the training generated user pressed fingerprint image obtained by the training defogging unit 420 through the twin network model comprising the first convolutional neural network and the second convolutional neural network to obtain a training reference feature map and a training verification feature map. The training difference unit 440 calculates a training difference feature map between the training reference feature map and the training verification feature map. Then, the training judgment unit 450 passes the training difference feature map calculated by the training difference unit 440 through the classifier to obtain a classification loss function value. The feature extraction pattern resolution loss unit 460 calculates a suppression loss function value of the feature extraction pattern resolution of the training reference feature map and the training verification feature map, the suppression loss function value being related to the square of the two-norm of the difference feature vector between a first feature vector expanded from the training reference feature map and a second feature vector expanded from the training verification feature map. Further, the training unit 470 trains the defogging generator based on the countermeasure generation network, the twin network model and the classifier with the weighted sum of the suppression loss function value and the classification loss function value as the final loss function value.
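A condensed sketch of one training iteration over the units above follows. The optimizer, the loss weights alpha and beta, and the way the suppression term is written (here simply the squared two-norm of the difference between the two unrolled feature vectors, the quantity the text says the term is related to) are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn.functional as F

def train_step(defogger, twin, classifier, optimizer, batch, alpha=1.0, beta=0.1):
    hazy_pressed, reference, label = batch         # label = 1 when the pressed image matches
    generated = defogger(hazy_pressed)             # training generated user pressed fingerprint image
    ref_map, ver_map = twin(generated, reference)  # training reference / verification feature maps
    logits = classifier(ref_map - ver_map)         # classify the training difference feature map
    cls_loss = F.cross_entropy(logits, label)      # classification loss function value
    # Suppression term (interpretation): related to the squared two-norm of the
    # difference between the two unrolled feature vectors.
    v1, v2 = ref_map.flatten(start_dim=1), ver_map.flatten(start_dim=1)
    sup_loss = (v1 - v2).pow(2).sum(dim=1).mean()
    loss = alpha * cls_loss + beta * sup_loss      # weighted sum used as the training objective
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```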
Particularly, in the technical solution of the application, since the classification feature map is obtained by calculating the difference feature map between the reference feature map and the verification feature map, the classification loss function of the classifier propagates back through both the first convolutional neural network and the second convolutional neural network during training. Abnormal gradient divergence may therefore cause the feature extraction patterns of the first convolutional neural network and the second convolutional neural network to be resolved (drift apart), which affects the accuracy of the classification result of the difference feature map.
Therefore, preferably, a suppression loss function for feature extraction pattern resolution of the reference feature map and the verification feature map is introduced. It is built from the Frobenius-norm difference of the classifier weight matrices, $\|M_1 - M_2\|_F$, and the squared two-norm of the difference of the unrolled feature vectors, $\|V_1 - V_2\|_2^2$, combined in a cross-entropy fashion, where $V_1$ and $V_2$ are the feature vectors obtained by unrolling the reference feature map and the verification feature map respectively, $M_1$ and $M_2$ are the weight matrices of the classifier with respect to the feature vectors $V_1$ and $V_2$ respectively, and $\|\cdot\|_F$ denotes the Frobenius norm of a matrix.
Specifically, the suppression loss function for feature extraction pattern resolution ensures, in a cross-entropy manner, that the difference distribution of the classifier weight matrices for the different feature vectors is consistent with the real feature difference distribution of the feature vectors. This regularizes the directional derivative near the branch point of gradient propagation during back-propagation, i.e., the gradient re-weights the feature extraction patterns of the first convolutional neural network and the second convolutional neural network, so that the resolution of the feature extraction patterns is suppressed, the feature expression capability of the reference feature map and the verification feature map is improved, and the accuracy of the classification result of the difference feature map is improved accordingly. In this way, the accuracy of the fingerprint matching judgment is improved, and both security and sensitivity are taken into account.
To sum up, the security device 300 based on the internet of things according to the embodiment of the application has been described. It defogs the collected fingerprint image, extracts high-dimensional image features of the collected fingerprint image and the enrolled fingerprint image with a feature extractor based on a deep convolutional neural network, and compares their similarity in a high-dimensional feature space, thereby improving matching accuracy and generating the corresponding control instruction.
As described above, the security device based on the internet of things according to the embodiment of the present application can be implemented in various terminal devices. In one example, the internet of things based security device 300 according to the embodiment of the present application may be integrated into the terminal device as one software module and/or hardware module. For example, the internet of things based security device 300 may be a software module in the operating system of the terminal device or may be an application developed for the terminal device; of course, the security device 300 based on the internet of things can also be one of numerous hardware modules of the terminal device.
Alternatively, in another example, the internet of things based security device 300 and the terminal device may be separate devices, and the internet of things based security device 300 may be connected to the terminal device through a wired and/or wireless network and transmit the interaction information according to an agreed data format.
Exemplary method
Fig. 8 illustrates a flowchart of a security method based on the internet of things according to an embodiment of the present application. As shown in fig. 8, the security method based on the internet of things according to the embodiment of the application includes the following steps: s110, acquiring a reference fingerprint image input by a user from a database and acquiring a user pressing fingerprint image acquired by a camera deployed in a fingerprint lock; s120, enabling the user pressing fingerprint image to pass through a defogging generator based on a countermeasure generation network to obtain a generated user pressing fingerprint image; s130, enabling the reference fingerprint image and the generated user pressing fingerprint image to pass through a twin network model comprising a first convolutional neural network and a second convolutional neural network to obtain a reference characteristic diagram and a verification characteristic diagram, wherein the first convolutional neural network and the second convolutional neural network have the same network structure; s140, calculating a difference characteristic diagram between the reference characteristic diagram and the verification characteristic diagram; s150, enabling the differential feature map to pass through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the fingerprint image pressed by the user is matched with the reference fingerprint or not; and S160, responding to the classification result that the user pressing fingerprint image is matched with the reference fingerprint, and generating an unlocking control instruction.
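Put together, steps S110 to S160 amount to the following inference routine; the function and variable names are illustrative, and the classifier is assumed to produce two scores with index 1 meaning "matched".

```python
import torch

def iot_security_check(pressed_image, reference_image, defogger, twin, classifier):
    """S120-S160: defog, encode with the twin network, difference, classify, control."""
    with torch.no_grad():
        generated = defogger(pressed_image)                    # S120: defogging generator
        ref_map, ver_map = twin(generated, reference_image)    # S130: twin convolutional encoders
        diff_map = ref_map - ver_map                           # S140: position-wise difference
        probs = torch.softmax(classifier(diff_map), dim=1)     # S150: classifier
    return "UNLOCK" if probs.argmax(dim=1).item() == 1 else "DENY"  # S160: unlocking control instruction
```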
In an example, in the security method based on the internet of things, the step S130 includes: performing depth convolutional encoding on the generated user-pressed fingerprint image using the multiple convolution layers of the first convolutional neural network to output a depth verification feature map from the last layer of the multiple convolution layers; inputting the depth verification feature map into a first spatial attention module of the first convolutional neural network to obtain a first spatial attention map; and calculating a position-wise multiplication of the depth verification feature map and the first spatial attention map to obtain the verification feature map.
In an example, in the security method based on the internet of things, the step S130 further includes: performing depth convolutional encoding on the reference fingerprint image using the multiple convolution layers of the second convolutional neural network to output a depth reference feature map from the last layer of the multiple convolution layers; inputting the depth reference feature map into a second spatial attention module of the second convolutional neural network to obtain a second spatial attention map; and calculating a position-wise multiplication of the depth reference feature map and the second spatial attention map to obtain the reference feature map.
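As a non-authoritative sketch, a spatial attention module of the kind applied at the end of each branch could look as follows; the text does not specify the internal attention design, so a common channel-pooling formulation is assumed here.

```python
# Sketch of a spatial attention module (assumed design, not from the patent text).
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Pool across channels, then predict a per-position attention weight.
        avg_pool = feat.mean(dim=1, keepdim=True)
        max_pool = feat.amax(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        # Position-wise multiplication of the depth feature map and the attention map.
        return feat * attn
```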
In an example, in the security method based on the internet of things, the step S140 includes: calculating the difference feature map between the reference feature map and the verification feature map according to the following formula;
wherein the formula is:
F_c = F_1 ⊖ F_2
wherein F_1 represents the reference feature map, F_2 represents the verification feature map, F_c represents the difference feature map, and ⊖ indicates a difference by position.
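A minimal sketch of this step, assuming the difference by position is taken as an element-wise absolute difference (the text only specifies a position-wise difference), is:

```python
# Minimal sketch of step S140 (absolute value is an assumption).
import torch


def difference_feature_map(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    assert f1.shape == f2.shape, "both feature maps must have the same shape"
    return torch.abs(f1 - f2)
```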
In an example, in the security method based on the internet of things, the step S150 includes: processing the difference feature map according to the following formula to generate the classification result;
wherein the formula is:
softmax{(W_n, B_n) : ⋯ : (W_1, B_1) | Project(F)}
wherein Project(F) denotes projecting the difference feature map into a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n represent the bias matrices of the fully connected layers of each layer.
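A sketch of this classifier, with the hidden size and the number of fully connected layers chosen arbitrarily for illustration, could be:

```python
# Sketch of the S150 classifier: Project(F) flattens the difference feature map,
# the fully connected layers (W_i, B_i) are applied in turn, and softmax yields
# the match probabilities. Layer sizes are assumptions.
import torch
import torch.nn as nn


class DifferenceClassifier(nn.Module):
    def __init__(self, in_features: int, hidden: int = 256, n_classes: int = 2):
        super().__init__()
        self.project = nn.Flatten()                 # Project(F)
        self.fc = nn.Sequential(                    # (W_1, B_1) ... (W_n, B_n)
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, diff_map: torch.Tensor) -> torch.Tensor:
        logits = self.fc(self.project(diff_map))
        return torch.softmax(logits, dim=1)         # softmax{ ... }
```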
In summary, the security method based on the internet of things according to the embodiment of the application has been described. It performs defogging processing on the collected fingerprint image, uses a feature extractor based on a deep convolutional neural network to extract high-dimensional image features of the collected fingerprint image and the input fingerprint image, and compares their similarity in the high-dimensional feature space, thereby improving the matching accuracy while taking security into account.

Claims (8)

1. An Internet of Things-based security device, characterized by comprising:
a fingerprint acquisition unit, configured to acquire, from a database, a reference fingerprint image entered by a user, and to acquire a user-pressed fingerprint image collected by a camera arranged in a fingerprint lock;
a defogging unit, configured to pass the user-pressed fingerprint image through a defogging generator based on an adversarial generative network to obtain a generated user-pressed fingerprint image;
a fingerprint feature extraction unit, configured to pass the reference fingerprint image and the generated user-pressed fingerprint image through a twin network model comprising a first convolutional neural network and a second convolutional neural network to obtain a reference feature map and a verification feature map, wherein the first convolutional neural network and the second convolutional neural network have the same network structure;
a difference unit, configured to calculate a difference feature map between the reference feature map and the verification feature map;
a judging unit, configured to pass the difference feature map through a classifier to obtain a classification result, wherein the classification result indicates whether the user-pressed fingerprint image matches the reference fingerprint; and
a control result generation unit, configured to generate an unlocking control instruction in response to the classification result being that the user-pressed fingerprint image matches the reference fingerprint.
2. The internet of things-based security device according to claim 1, wherein the fingerprint feature extraction unit comprises:
a detection fingerprint feature extraction subunit, configured to perform depth convolutional encoding on the generated user-pressed fingerprint image using the multiple convolution layers of the first convolutional neural network to output a depth verification feature map from the last layer of the multiple convolution layers;
a first spatial attention subunit, configured to input the depth verification feature map into a first spatial attention module of the first convolutional neural network to obtain a first spatial attention map; and
an attention applying subunit, configured to calculate a position-wise multiplication of the depth verification feature map and the first spatial attention map to obtain the verification feature map.
3. The internet of things-based security device according to claim 2, wherein the fingerprint feature extraction unit comprises:
a reference fingerprint feature extraction subunit, configured to perform depth convolutional encoding on the reference fingerprint image using the multiple convolution layers of the second convolutional neural network to output a depth reference feature map from the last layer of the multiple convolution layers;
a second spatial attention subunit, configured to input the depth reference feature map into a second spatial attention module of the second convolutional neural network to obtain a second spatial attention map; and
an attention acting subunit, configured to calculate a position-wise multiplication of the depth reference feature map and the second spatial attention map to obtain the reference feature map.
4. The internet of things-based security device of claim 3, wherein the difference unit is further configured to: calculate the difference feature map between the reference feature map and the verification feature map according to the following formula;
wherein the formula is:
F_c = F_1 ⊖ F_2
wherein F_1 represents the reference feature map, F_2 represents the verification feature map, F_c represents the difference feature map, and ⊖ indicates a difference by position.
5. The internet of things-based security device according to claim 4, wherein the judging unit is further configured to: process the difference feature map according to the following formula to generate the classification result;
wherein the formula is:
softmax{(W_n, B_n) : ⋯ : (W_1, B_1) | Project(F)}
wherein Project(F) denotes projecting the difference feature map into a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n represent the bias matrices of the fully connected layers of each layer.
6. The internet of things-based security device of claim 1, further comprising a training module, configured to train the defogging generator based on the adversarial generative network, the twin network model, and the classifier.
7. The internet of things-based security device of claim 6, wherein the training module comprises:
a training data acquisition unit, configured to acquire training data, the training data comprising a training reference fingerprint image, a training user-pressed fingerprint image collected by the camera deployed in the fingerprint lock, and a true value of whether the training user-pressed fingerprint image matches the training reference fingerprint;
a training defogging unit, configured to pass the training user-pressed fingerprint image through the defogging generator based on the adversarial generative network to obtain a training generated user-pressed fingerprint image;
a training fingerprint feature extraction unit, configured to pass the training reference fingerprint image and the training generated user-pressed fingerprint image through the twin network model comprising the first convolutional neural network and the second convolutional neural network to obtain a training reference feature map and a training verification feature map;
a training difference unit, configured to calculate a training difference feature map between the training reference feature map and the training verification feature map;
a training judging unit, configured to pass the training difference feature map through the classifier to obtain a classification loss function value;
a feature extraction pattern resolution loss unit, configured to calculate a suppression loss function value of the feature extraction pattern resolution of the training reference feature map and the training verification feature map, wherein the suppression loss function value of the feature extraction pattern resolution is related to the square of the two-norm of a difference feature vector between a first feature vector expanded from the training reference feature map and a second feature vector expanded from the training verification feature map; and
a training unit, configured to train the defogging generator based on the adversarial generative network, the twin network model, and the classifier with a weighted sum of the suppression loss function value of the feature extraction pattern resolution and the classification loss function value as the loss function value.
8. The internet of things-based security device of claim 7, wherein the feature extraction pattern resolution loss unit is further configured to: calculate the suppression loss function value of the feature extraction pattern resolution of the training reference feature map and the training verification feature map according to the following formula;
wherein the formula is:
L = ( ||M_1 - M_2||_F / ( ||M_1||_F + ||M_2||_F ) ) · ||V_1 - V_2||_2^2
wherein V_1 and V_2 are respectively the feature vectors obtained after expanding the training reference feature map and the training verification feature map, M_1 and M_2 are respectively the weight matrices of the classifier for the feature vectors V_1 and V_2, ||·||_F represents the F norm of a matrix, and ||·||_2^2 represents the square of the two-norm of a vector.
CN202211226949.4A 2022-10-09 2022-10-09 Security protection equipment based on thing networking Withdrawn CN115512470A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211226949.4A CN115512470A (en) 2022-10-09 2022-10-09 Security protection equipment based on thing networking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211226949.4A CN115512470A (en) 2022-10-09 2022-10-09 Security protection equipment based on thing networking

Publications (1)

Publication Number Publication Date
CN115512470A true CN115512470A (en) 2022-12-23

Family

ID=84507655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211226949.4A Withdrawn CN115512470A (en) 2022-10-09 2022-10-09 Security protection equipment based on thing networking

Country Status (1)

Country Link
CN (1) CN115512470A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116000297A (en) * 2023-01-03 2023-04-25 赣州市光华有色金属有限公司 Preparation device and method for high-strength tungsten lanthanum wire
CN116721441A (en) * 2023-08-03 2023-09-08 厦门瞳景智能科技有限公司 Block chain-based access control security management method and system
CN116721441B (en) * 2023-08-03 2024-01-19 厦门瞳景智能科技有限公司 Block chain-based access control security management method and system
CN116740866A (en) * 2023-08-11 2023-09-12 上海银行股份有限公司 Banknote loading and clearing system and method for self-service machine
CN116740866B (en) * 2023-08-11 2023-10-27 上海银行股份有限公司 Banknote loading and clearing system and method for self-service machine

Similar Documents

Publication Publication Date Title
CN115512470A (en) Security protection equipment based on thing networking
Krish et al. Improving automated latent fingerprint identification using extended minutia types
EP1821172B1 (en) Collation method, collation system, computer, and program
Sheng et al. Image splicing detection based on Markov features in discrete octonion cosine transform domain
CN112560753A (en) Face recognition method, device and equipment based on feature fusion and storage medium
CN114800229B (en) Double-surface double-glass surface polishing device and polishing method thereof
KR20200083119A (en) User verification device and method
Sreedevi et al. Image processing based real time vehicle theft detection and prevention system
Ezz et al. A silent password recognition framework based on lip analysis
CN113627503A (en) Tracing method and device for generating image, model training method and device, electronic equipment and storage medium
CN118053232A (en) Enterprise safety intelligent management system and method thereof
JP6542819B2 (en) Image surveillance system
Vijayalakshmi et al. Finger and palm print based multibiometric authentication system with GUI interface
Nikhal et al. Weakly supervised face and whole body recognition in turbulent environments
Sadhya et al. Construction of a Bayesian decision theory‐based secure multimodal fusion framework for soft biometric traits
CN113159317B (en) Antagonistic sample generation method based on dynamic residual corrosion
JP5279007B2 (en) Verification system, verification method, program, and recording medium
Ashiba et al. Proposed homomorphic DWT for cancelable palmprint recognition technique
Kashyap et al. Accurate Personal Identification Using Left and Right Palmprint Images Based on ANFIS Approach
CN113313029A (en) Integrated identity authentication method based on human and object feature fusion
Mohammad Razavi et al. Multimodal biometric identification system based on finger‐veins using hybrid rank–decision‐level fusion technique
Pan Smart access control system by using sparse representation features
US20240046708A1 (en) Spoof images for user authentication
Balaji et al. Multimodal Biometrics Authentication in Healthcare Using Improved Convolution Deep Learning Model
CN117373157A (en) Intelligent household equipment management system and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20221223