CN114758440B - Access control system based on image and text mixed recognition - Google Patents

Access control system based on image and text mixed recognition

Info

Publication number
CN114758440B
Authority
CN
China
Prior art keywords
image information
mode
information
dimensional
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011588979.0A
Other languages
Chinese (zh)
Other versions
CN114758440A (en)
Inventor
王勇 (Wang Yong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Qiyuan Xipu Technology Co ltd
Original Assignee
Chengdu Qiyuan Xipu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Qiyuan Xipu Technology Co ltd
Priority to CN202011588979.0A
Publication of CN114758440A
Application granted
Publication of CN114758440B


Classifications

    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 - Individual registration on entry or exit
    • G07C 9/30 - Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/35 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check by means of a handwritten signature
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 - Individual registration on entry or exit
    • G07C 9/30 - Individual registration on entry or exit not involving the use of a pass
    • G07C 9/32 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C 9/37 - Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

To address the risk in prior-art access control that counterfeit biometric features can defeat the system, the invention provides an access control system based on hybrid image and text recognition. The system rests on a technique for matching and fusing two-dimensional and three-dimensional information: text image information obtained from an on-site signature is combined with three-dimensional image information obtained by face recognition, which reduces dependence on biometric features and prevents a counterfeiter from using three-dimensional printing technology to interfere with face recognition.

Description

Access control system based on image and text mixed recognition
Technical Field
The invention relates to the technical field of security recognition, and in particular to an access control system based on hybrid image and text recognition.
Background
An access control system is a digital management system that controls the entry and exit of personnel, governs their behavior inside buildings and sensitive areas, and records and tallies management data. With the economic and social development of China, access control security management systems have become part of everyday life and provide an important guarantee for people's personal, property and information security. An access control security management system is a modern security management system that draws on many technologies, including electronics, mechanics, optics, computer technology, communication technology and biotechnology. It is an effective measure for managing entry and exit security at important facilities and is suitable for a wide range of settings such as banks, hotels, parking-lot management, machine rooms, ordnance depots, offices, smart communities and factories.
Disclosure of Invention
To overcome the access control security problem in the prior art whereby an access control system can be spoofed with high-tech means, such as a 3D-printed mask, the invention provides an access control system based on hybrid image and text recognition, which comprises:
an acquisition unit, configured to acquire name text image information and facial image information at the same moment, the name text image information and the facial image information being paired with each other using the acquisition direction of the sensor that captured each piece of information as the association, wherein the text image information is two-dimensional image information obtained from an on-site signature of the person to be verified and the facial image information is three-dimensional image information;
a first input unit, configured to input the name text image information and preprocess it, wherein the name text image information comprises a plurality of groups of static image information in a first modality and a second modality that share the same region of interest, the first modality and the second modality being located at different layers of the data structure of the static image information;
a second input unit, configured to input facial image information paired with the name text image information, wherein the facial image information comprises a plurality of groups of dynamic image information containing at least a shape modality sharing the same region of interest, together with a color modality, a speed modality and a distance modality corresponding to that shape modality, the shape, color, speed and distance modalities each occupying a different layer of the data structure of the dynamic image information (one illustrative layout of these grouped layers is sketched after this list of units);
a judging unit, configured to judge whether the image information of the different modalities at each layer within a group of static image information matches; if it matches, divide the name text image information of each layer into a plurality of two-dimensional image blocks; if it does not completely match, perform three-dimensional reconstruction and registration on the first-modality static image information of that group and then segment it to obtain a first set containing m layers of first-modality static image information, where m is a natural number greater than 5; clean the information in the first set by morphological hole filling; fuse each slice of the first-modality static image information with the second-modality static image information of the same group using a frequency-domain fusion method based on the discrete cosine transform, and perform three-dimensional reconstruction and registration to obtain three-dimensional fused information, in which the first dimension is the information obtained by fusing the first-modality and second-modality static image information, the second dimension is the second-modality static image information representing color, and the third dimension represents distance and is set to 0; then fuse the reconstructed three-dimensional fused information with the dynamic image information, and label the fused information, according to the acquisition direction, as directed image sub-blocks to be recognized;
a training unit, configured to train a neural network model using the pre-acquired name text image information; for the prepared image sub-blocks to be recognized in every direction, set the third dimension to 0 so as to reduce them to two dimensions, obtaining two-dimensional image sub-blocks; input the two-dimensional image sub-blocks into the neural network model and compare the similarity between the resulting recognition result and one item of the two-dimensional facial image information: if the similarity is smaller than a preset threshold, continue comparing against the other two-dimensional facial image information; otherwise, stop the iterative similarity comparison and save the model.
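For illustration only, and not as part of the claimed subject matter, the grouped, layered organization described above for the first and second input units could be laid out as in the following Python sketch; every class and field name here is a hypothetical choice of this description rather than terminology fixed by the invention.

```python
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class StaticGroup:
    """One group of name text (signature) image information.

    The two modalities share the same region of interest and occupy
    separate layers of the data structure.
    """
    first_modality: np.ndarray    # layer 1: e.g. grayscale stroke image (H x W)
    second_modality: np.ndarray   # layer 2: e.g. color / intensity image (H x W)
    acquisition_direction: float  # sensor angle in degrees, used as the pairing key


@dataclass
class DynamicGroup:
    """One group of facial (dynamic) image information.

    Shape, color, speed and distance modalities occupy different layers.
    """
    shape: np.ndarray     # layer 1: 3-D surface / depth frames
    color: np.ndarray     # layer 2: color frames aligned to the shape layer
    speed: np.ndarray     # layer 3: per-pixel motion estimate
    distance: np.ndarray  # layer 4: sensor-to-subject distance map
    acquisition_direction: float


@dataclass
class Sample:
    """A signature group paired with a facial group captured at the same moment."""
    static_groups: List[StaticGroup]
    dynamic_groups: List[DynamicGroup]
```

Keeping each modality on its own layer, keyed by the acquisition direction, is what allows the judging unit described above to test layer by layer whether the modalities of a group match.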
Further, the preprocessing includes thresholding to eliminate the effect of noise that may be present in the text image information, and/or interpolation of the facial image information to unify the resolutions of the different planes of the facial image information.
Further, the directions include three angles: 75°, +90° and 105°.
Further, each group of first-modality static image information and second-modality static image information of the name text image information comes from the same person to be verified.
Further, each group of first-modality static image information and second-modality static image information of the name text image information comes from a different person to be verified and serves as confounding data when training the model.
Further, the image information of the same modality is acquired by the same device.
Further, the device is a three-dimensional camera.
The beneficial effects of the invention are as follows: combining the text image information obtained from an on-site signature with the three-dimensional image information obtained by face recognition reduces dependence on biometric features and prevents counterfeiters from using three-dimensional printing technology to interfere with face recognition.
Drawings
Fig. 1 shows a block diagram of the structure of the present system.
Detailed Description
An access control system based on image and text hybrid recognition, comprising:
an acquisition unit, configured to acquire name text image information and facial image information at the same moment, the name text image information and the facial image information being paired with each other using the acquisition direction of the sensor that captured each piece of information as the association, wherein the text image information is two-dimensional image information obtained from an on-site signature of the person to be verified and the facial image information is three-dimensional image information;
a first input unit, configured to input the name text image information and preprocess it, wherein the name text image information comprises a plurality of groups of static image information in a first modality and a second modality that share the same region of interest, the first modality and the second modality being located at different layers of the data structure of the static image information;
a second input unit, configured to input facial image information paired with the name text image information, wherein the facial image information comprises a plurality of groups of dynamic image information containing at least a shape modality sharing the same region of interest, together with a color modality, a speed modality and a distance modality corresponding to that shape modality, the shape, color, speed and distance modalities each occupying a different layer of the data structure of the dynamic image information;
a judging unit, configured to judge whether the image information of the different modalities at each layer within a group of static image information matches; if it matches, divide the name text image information of each layer into a plurality of two-dimensional image blocks; if it does not completely match, perform three-dimensional reconstruction and registration on the first-modality static image information of that group and then segment it to obtain a first set containing m layers of first-modality static image information, where m is a natural number greater than 5; clean the information in the first set by morphological hole filling; fuse each slice of the first-modality static image information with the second-modality static image information of the same group using a frequency-domain fusion method based on the discrete cosine transform, and perform three-dimensional reconstruction and registration to obtain three-dimensional fused information, in which the first dimension is the information obtained by fusing the first-modality and second-modality static image information, the second dimension is the second-modality static image information representing color, and the third dimension represents distance and is set to 0; then fuse the reconstructed three-dimensional fused information with the dynamic image information, and label the fused information, according to the acquisition direction, as directed image sub-blocks to be recognized (an illustrative sketch of this fusion step follows the list of units below);
a training unit, configured to train a neural network model using the pre-acquired name text image information; for the prepared image sub-blocks to be recognized in every direction, set the third dimension to 0 so as to reduce them to two dimensions, obtaining two-dimensional image sub-blocks; input the two-dimensional image sub-blocks into the neural network model and compare the similarity between the resulting recognition result and one item of the two-dimensional facial image information: if the similarity is smaller than a preset threshold, continue comparing against the other two-dimensional facial image information; otherwise, stop the iterative similarity comparison and save the model (a sketch of this comparison loop is also given below).
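The judging unit names morphological hole filling and a discrete-cosine-transform frequency-domain fusion but does not fix their details. The following is therefore only a minimal sketch of one plausible reading: cleaned first-modality slices are fused with their second-modality partners by averaging low-frequency DCT coefficients and keeping the larger-magnitude coefficients elsewhere. The fusion rule, the low-frequency block size and the function names are assumptions, not the patented procedure.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import binary_fill_holes


def fill_holes(slice_img: np.ndarray) -> np.ndarray:
    """Morphological hole filling used to clean slices of the first set."""
    return binary_fill_holes(slice_img.astype(bool)).astype(np.float32)


def dct_fuse(slice_a: np.ndarray, slice_b: np.ndarray, low: int = 16) -> np.ndarray:
    """Fuse two registered slices in the DCT frequency domain.

    Low-frequency coefficients are averaged (coarse structure); elsewhere the
    coefficient with the larger magnitude is kept (detail). This specific
    rule is an assumption made for illustration.
    """
    a = dctn(slice_a.astype(np.float32), norm="ortho")
    b = dctn(slice_b.astype(np.float32), norm="ortho")
    fused = np.where(np.abs(a) >= np.abs(b), a, b)              # keep stronger details
    fused[:low, :low] = 0.5 * (a[:low, :low] + b[:low, :low])   # blend coarse structure
    return idctn(fused, norm="ortho")


def fuse_group(first_slices, second_slices):
    """Clean each first-modality slice, fuse it with the matching
    second-modality slice of the same group, and stack the results."""
    fused = [dct_fuse(fill_holes(f), s) for f, s in zip(first_slices, second_slices)]
    return np.stack(fused, axis=0)
```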
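The training unit's stopping rule can likewise be sketched. Here the similarity measure (cosine similarity), the threshold value and the function names are illustrative assumptions; the specification only requires comparing against each two-dimensional facial image in turn until a comparison reaches the preset threshold.

```python
from typing import List, Optional
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened images or feature vectors."""
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def compare_until_match(recognition_result: np.ndarray,
                        facial_gallery: List[np.ndarray],
                        threshold: float = 0.8) -> Optional[int]:
    """Compare the recognition result with each two-dimensional facial image
    and stop as soon as the similarity reaches the preset threshold.

    Returns the index of the matching gallery entry, or None when every
    comparison stays below the threshold (in which case, per the
    specification, the iterative comparison continues with the remaining data).
    """
    for idx, face in enumerate(facial_gallery):
        if cosine_similarity(recognition_result, face) >= threshold:
            return idx                      # threshold reached: stop and save the model
    return None                             # all comparisons below the threshold
```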
Preferably, the preprocessing includes thresholding to eliminate the effect of noise that may be present in the text image information, and/or interpolating the facial image information to unify the resolutions of the different planes of the facial image information.
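One possible concrete form of this preprocessing, assuming OpenCV is available, is sketched below: Otsu thresholding suppresses background noise in the scanned signature image, and bilinear interpolation resamples each plane of the facial image information to a common resolution. Both the choice of Otsu's method and of bilinear interpolation are assumptions; the specification only requires thresholding and interpolation in general.

```python
import numpy as np
import cv2


def threshold_signature(gray: np.ndarray) -> np.ndarray:
    """Suppress background noise in an 8-bit grayscale signature image.

    Otsu's method picks the threshold automatically; pixels below it are
    treated as background and zeroed out.
    """
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary


def unify_plane_resolution(planes, target_shape=(256, 256)):
    """Resample every plane of the facial image information to one resolution."""
    return [cv2.resize(p, target_shape[::-1], interpolation=cv2.INTER_LINEAR)
            for p in planes]
```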
Preferably, the directions include three angles: 75°, +90° and 105°.
Preferably, each group of first-modality static image information and second-modality static image information of the name text image information comes from the same person to be verified.
Preferably, each group of first-modality static image information and second-modality static image information of the name text image information comes from a different person to be verified and serves as confounding data when training the model.
Preferably, the image information of the same modality is acquired by the same device.
Preferably, the device is a three-dimensional camera.
The above examples illustrate only a few embodiments of the invention; although described in detail, they are not to be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the spirit of the invention, all of which fall within the scope of the invention. Accordingly, the scope of protection of the invention is determined by the appended claims.

Claims (7)

1. An access control system based on image and text hybrid recognition, characterized by comprising:
an acquisition unit, configured to acquire name text image information and facial image information at the same moment, the name text image information and the facial image information being paired with each other using the acquisition direction of the sensor that captured each piece of information as the association, wherein the text image information is two-dimensional image information and the facial image information is three-dimensional image information;
a first input unit, configured to input the name text image information and preprocess it, wherein the name text image information comprises a plurality of groups of static image information in a first modality and a second modality that share the same region of interest, the first modality and the second modality being located at different layers of the data structure of the static image information;
a second input unit, configured to input facial image information paired with the name text image information, wherein the facial image information comprises a plurality of groups of dynamic image information containing at least a shape modality sharing the same region of interest, together with a color modality, a speed modality and a distance modality corresponding to that shape modality, the shape, color, speed and distance modalities each occupying a different layer of the data structure of the dynamic image information;
a judging unit, configured to judge whether the image information of the different modalities at each layer within a group of static image information matches; if it matches, divide the name text image information of each layer into a plurality of two-dimensional image blocks; if it does not completely match, perform three-dimensional reconstruction and registration on the first-modality static image information of that group and then segment it to obtain a first set containing m layers of first-modality static image information, where m is a natural number greater than 5; clean the information in the first set by morphological hole filling; fuse each slice of the first-modality static image information with the second-modality static image information of the same group using a frequency-domain fusion method based on the discrete cosine transform, and perform three-dimensional reconstruction and registration to obtain three-dimensional fused information, in which the first dimension is the information obtained by fusing the first-modality and second-modality static image information, the second dimension is the second-modality static image information representing color, and the third dimension represents distance and is set to 0; then fuse the reconstructed three-dimensional fused information with the dynamic image information, and label the fused information, according to the acquisition direction, as directed image sub-blocks to be recognized;
a training unit, configured to train a neural network model using pre-acquired name text image information; for the prepared image sub-blocks to be recognized in every direction, set the third dimension to 0 so as to reduce them to two dimensions, obtaining two-dimensional image sub-blocks; input the two-dimensional image sub-blocks into the neural network model and compare the similarity between the resulting recognition result and one item of the two-dimensional facial image information: if the similarity is smaller than a preset threshold, continue comparing against the other two-dimensional facial image information; otherwise, stop the iterative similarity comparison and save the model.
2. The access control system based on image and text hybrid recognition according to claim 1, wherein the preprocessing comprises thresholding to eliminate the influence of noise that may be present in the text image information, and/or interpolating the facial image information to unify the resolutions of its different planes.
3. The access control system based on image and text hybrid recognition according to claim 1, wherein the directions include three angles: 75°, +90° and 105°.
4. The access control system based on image and text hybrid recognition according to claim 1, wherein each group of first-modality static image information and second-modality static image information of the name text image information comes from the same person to be verified.
5. The access control system based on image and text hybrid recognition according to claim 1, wherein each group of first-modality static image information and second-modality static image information of the name text image information comes from a different person to be verified and serves as confounding data when training the model.
6. The access control system based on image and text hybrid recognition according to claim 1, wherein the image information of the same modality is acquired by the same device.
7. The access control system based on image and text hybrid recognition according to claim 6, wherein the device is a three-dimensional camera.
CN202011588979.0A 2020-12-29 2020-12-29 Access control system based on image and text mixed recognition Active CN114758440B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011588979.0A CN114758440B (en) 2020-12-29 2020-12-29 Access control system based on image and text mixed recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011588979.0A CN114758440B (en) 2020-12-29 2020-12-29 Access control system based on image and text mixed recognition

Publications (2)

Publication Number Publication Date
CN114758440A (en) 2022-07-15
CN114758440B (en) 2023-07-18

Family

ID=82324488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011588979.0A Active CN114758440B (en) 2020-12-29 2020-12-29 Access control system based on image and text mixed recognition

Country Status (1)

Country Link
CN (1) CN114758440B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758439B (en) * 2020-12-29 2023-07-18 成都启源西普科技有限公司 Multi-mode access control system based on artificial intelligence

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0216649D0 (en) * 2002-06-17 2002-08-28 Storm Mason R Identity verification
EP1589480B1 (en) * 2003-01-28 2013-08-21 Fujitsu Limited Biometrics information registration apparatus, biometrics information matching apparatus, biometrics information registration/matching system, and biometrics information registration program
CN103426016B (en) * 2013-08-14 2017-04-12 湖北微模式科技发展有限公司 Method and device for authenticating second-generation identity card
CN105631272B (en) * 2016-02-02 2018-05-11 云南大学 A kind of identity identifying method of multiple security
CN107507286B (en) * 2017-08-02 2020-09-29 五邑大学 Bimodal biological characteristic sign-in system based on face and handwritten signature
US10963677B2 (en) * 2018-07-23 2021-03-30 The Mitre Corporation Name and face matching
KR102053581B1 (en) * 2019-05-27 2019-12-09 주식회사 시큐브 Apparatus and method for user authentication based on face recognition and handwritten signature verification
CN111599044A (en) * 2020-05-14 2020-08-28 哈尔滨学院 Access control safety management system based on multi-mode biological feature recognition
CN114758439B (en) * 2020-12-29 2023-07-18 成都启源西普科技有限公司 Multi-mode access control system based on artificial intelligence

Also Published As

Publication number Publication date
CN114758440A (en) 2022-07-15

Similar Documents

Publication Publication Date Title
CN102945366B (en) A kind of method and device of recognition of face
CN102629320B (en) Ordinal measurement statistical description face recognition method based on feature level
CN104766063A (en) Living body human face identifying method
CN112069891B (en) Deep fake face identification method based on illumination characteristics
CN104751108A (en) Face image recognition device and face image recognition method
CN107230267A (en) Intelligence In Baogang Kindergarten based on face recognition algorithms is registered method
CN108694399A (en) Licence plate recognition method, apparatus and system
Yen et al. Facial feature extraction using genetic algorithm
CN114758440B (en) Access control system based on image and text mixed recognition
CN109063643A (en) A kind of facial expression pain degree recognition methods under the hidden conditional for facial information part
CN114758439B (en) Multi-mode access control system based on artificial intelligence
CN108363944A (en) Recognition of face terminal is double to take the photograph method for anti-counterfeit, apparatus and system
CN109684990A (en) A kind of behavioral value method of making a phone call based on video
CN111582195B (en) Construction method of Chinese lip language monosyllabic recognition classifier
CN112700576B (en) Multi-modal recognition algorithm based on images and characters
CN104299000A (en) Handwriting recognition method based on local fragment distribution characteristics
Krishneswari et al. A review on palm print verification system
Vezjak et al. An anthropological model for automatic recognition of the male human face
Jobin et al. Palm biometrics recognition and verification system
Kaur et al. State-of-the-art techniques for passive image forgery detection: a brief review
Pilania et al. Implementation of image-based attendance system
Kashyap et al. Robust detection of copy-move forgery based on wavelet decomposition and firefly algorithm
Syeda-Mahmood Detecting perceptually salient texture regions in images
Kalangi et al. Deployment of Haar Cascade algorithm to detect real-time faces
Shrivastava et al. Bridging the semantic gap with human perception based features for scene categorization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant