CN115830762A - Safety community access control platform, control method and control terminal - Google Patents

Safety community access control platform, control method and control terminal

Info

Publication number
CN115830762A
Authority
CN
China
Prior art keywords
face
module
data
real
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310055847.9A
Other languages
Chinese (zh)
Inventor
张秀才
郝明华
薛方俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Sanside Technology Co ltd
Original Assignee
Sichuan Sanside Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Sanside Technology Co ltd filed Critical Sichuan Sanside Technology Co ltd
Priority to CN202310055847.9A priority Critical patent/CN115830762A/en
Publication of CN115830762A publication Critical patent/CN115830762A/en
Pending legal-status Critical Current

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 — Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 — Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a secure community entrance/exit control platform, a control method and a control terminal, comprising an image acquisition module, a face recognition module and a data transmission module, the image acquisition module being arranged at the entrance/exit of each cell. The image acquisition module acquires and processes image data to obtain three-dimensional real-time face data; the data transmission module encrypts the real-time face data and transmits it to the face recognition module; the face recognition module extracts features from the real-time face data and compares them with data prestored in a face library to judge whether the real-time face data belongs to the face library, thereby recognizing the people entering and leaving the cell.

Description

Safety community access control platform, control method and control terminal
Technical Field
The invention relates to the technical field of face recognition, in particular to a safe community entrance and exit control platform, a control method and a control terminal.
Background
With the advance of science and technology, secure communities are being widely promoted in order to protect people's safety effectively. A secure community comprises various module units, for example:
Community (cell) intelligent monitoring: video surveillance and infrared alarm systems installed inside the community and along the perimeter wall provide uninterrupted, all-weather 24-hour video monitoring and perimeter alarm protection.
Face recognition system: an automatic face-video recognition system at the main entrances classifies the people entering the community (cell), e.g. raising an alarm when blacklisted people enter and issuing reminders when special owners enter.
Community public access control system: a public access control system at the main entrances of the community, configured with face recognition, IC-card recognition, password recognition and visual intercom, guarantees the safety of the entrances and lets owners and visitors manage their own entry and exit.
For a face recognition system, face information is a citizen's personal information and requires privacy protection, so face recognition is generally not performed at the front end; instead, face data is collected and then transmitted to the back end (i.e. a security end such as the public security authority or government) for recognition.
In a typical face recognition system, therefore, face information is collected at the cell end, transmitted to a security server, recognized there, and the recognition result is transmitted back to the cell end. This leads to low recognition efficiency and possible disclosure of secrets during transmission.
Disclosure of Invention
The invention aims to solve the technical problems that the identification efficiency is too low and secret leakage possibly occurs in the transmission process, and aims to provide a safe community entrance and exit control platform, a control method and a control terminal, so that the identification efficiency of a human face is improved, and the possibility of secret leakage caused in the data transmission process is reduced.
The invention is realized by the following technical scheme:
the utility model provides a safe community access & exit management and control platform, includes:
the image acquisition module is arranged at the entrance and exit of each cell and is used for acquiring real-time face data of people entering the cell;
the face recognition module is arranged in the official server, stores a face library and recognizes real-time face data transmitted by the image acquisition module;
the data transmission module is connected with the image acquisition module and the face recognition module and used for carrying out data encryption transmission between the image acquisition module and the face recognition module;
the image acquisition module includes: at least one of a binocular stereo vision component, a 3D structured light component or a laser ranging component;
the binocular stereoscopic vision component comprises 2 face cameras, the two face cameras are respectively arranged on two sides of the entrance and the exit and synchronously acquire images at the entrance;
the binocular stereoscopic vision component acquires images of the same object at different positions through a face camera, calculates the position deviation existing between corresponding points of the images according to the parallax principle, and acquires three-dimensional real-time face data information of the detected object;
and the 3D structured light component and the laser ranging component acquire a three-dimensional real-time stereo image and convert it to obtain three-dimensional real-time face data information.
A safe community access control method is based on the safe community access control platform, and comprises the following steps:
acquiring real-time image data at an entrance and an exit of a cell through an image acquisition module;
preprocessing the real-time image data to obtain real-time face data;
encrypting and transmitting real-time face data to a face recognition module;
performing feature extraction on real-time face data through a face recognition module to obtain a face feature vector;
and comparing the similarity of the face feature vector with the face feature vectors of all the persons in the face library, and determining whether the real-time face data belongs to the face library or not according to the similarity.
Specifically, the method for preprocessing the real-time image data sequentially comprises point cloud denoising, hole filling and face cutting;
the method for denoising the point cloud comprises the following steps:
traversing the original point cloud data and calculating, for each point, the average distance to its K nearest neighbour points;
calculating the mean and standard deviation of these average distances over all points, and setting the distance threshold
T = μ + α·σ
wherein μ is the mean of the average distances over all points, α is a scale factor determined by the value of K, and σ is the standard deviation of the average distances over all points;
performing a second traversal of the point cloud data and filtering out the points whose average distance is greater than T, completing the point cloud denoising for all points;
the hole filling method comprises the following steps:
performing point cloud storage in a discrete point form, and acquiring real-time image data subjected to point cloud denoising;
forming a human face triangular model through a greedy projection algorithm of a point cloud library;
filling holes appearing in the face triangular model by approximating a topological structure of a missing area through a radial basis function algorithm;
the face cutting method comprises the following steps:
determining the nose tip point coordinates in the face triangular model, and taking the nose tip as a central point;
setting a cutting area through geodesic distance or Euclidean distance;
and cutting the three-dimensional face model by spherical neighbor cropping.
Specifically, the method for extracting the features of the real-time face data comprises the following steps:
acquiring an illumination measurement vector of real-time face data;
constructing a training network based on ResNet-50, wherein the training network comprises a first module, a second module, a third module and a fourth module, and a training branch of the training network comprises a face recognition branch and an illumination processing branch;
parameter sharing of the face recognition branch and the illumination processing branch is achieved through a hard sharing mode, wherein the first module and the second module share parameters, and when the face recognition branch is executed, the third module and the fourth module learn face recognition task parameters; when the illumination processing branch is executed, the third module and the fourth module learn the parameters of the illumination processing task;
constructing a first full connection layer, a second full connection layer and a third full connection layer, wherein the first full connection layer is used for outputting the result of the face recognition branch, the second full connection layer is used for outputting the result of the illumination processing branch, and the third full connection layer is used for learning the task weights of the face recognition branch and the illumination processing branch; the first full-connection layer and the second full-connection layer are connected behind the fourth module, and the third full-connection layer is connected behind the second module;
determining a loss function of the face recognition branch:
L_face = −Σ_{n=1}^{N} y_n · log(p_n)
wherein N is the number of face categories, y_n is the label of the input image on the nth class, and p_n is the nth value of the softmax-processed output of the first fully connected layer;
determining the loss function of the illumination processing branch:
L_light = (1/d) · Σ_{i=1}^{d} (l_i − q_i)²
wherein l is the illumination measurement vector of the image, l_i is its ith component, d is the dimension of the illumination vector, q is the normalized output of the second fully connected layer, and q_i is its ith component;
determining a loss function of the training network:
L = w_face · L_face + w_light · L_light
wherein w_face is the softmax-processed face recognition task weight output by the third fully connected layer, and w_light is the softmax-processed illumination processing task weight output by the third fully connected layer.
Specifically, the method for acquiring the illumination measurement vector comprises the following steps:
drawing a unit sphere by using given illumination distribution, and selecting a plurality of sampling points from the unit sphere;
set of azimuth angles
Figure SMS_19
Vertex angle set
Figure SMS_20
Combining every two elements in the azimuth angle set and the zenith angle set to obtain 9 normal sampling directions;
and forming 9-dimensional vectors by using the 9 normal sampling directions to obtain the illumination measurement vector.
Optionally, the first module includes 3 residual structures, the second module includes 4 residual structures, the third module includes 6 residual structures, and the fourth module includes 3 residual structures, where the residual structures include 3 convolutional layers, and sizes of convolutional kernels of the residual structures are 1 × 1, 3 × 3, and 1 × 1, respectively.
Specifically, the similarity comparison method includes:
selecting cosine distance as similarity measurement distance between samples;
acquiring a cosine value of an included angle between two adjacent human face features;
setting a cosine value threshold, and if the cosine value is greater than the cosine value threshold, proving that the real-time face data belongs to a face library; if the cosine value is smaller than the cosine value threshold value, the real-time face data is proved not to belong to a face library;
the face library comprises face feature vectors which are input in advance.
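The cosine comparison described above can be sketched in a few lines; the 0.6 threshold and function names below are illustrative assumptions, not values from the patent:

```python
import math

def cosine_similarity(a, b):
    # cosine of the angle between two face feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def belongs_to_face_library(probe, face_library, threshold=0.6):
    # the probe belongs to the library if any pre-enrolled vector
    # exceeds the cosine-value threshold
    return any(cosine_similarity(probe, enrolled) >= threshold
               for enrolled in face_library)
```

In practice the threshold would be tuned on a validation set to trade off false accepts against false rejects.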
Specifically, the method for encrypted transmission comprises the following steps: the real-time face data is encrypted at the image acquisition module, and is decrypted at the face recognition module after being transmitted to the face recognition module through the Internet.
Specifically, the encryption and decryption method includes:
setting three encryption keys k1, k2, k3 at both the image acquisition module and the face recognition module;
the encryption algorithm is:
C = E_k3(D_k2(E_k1(P))),  P = D_k1(E_k2(D_k3(C)))
wherein P is plaintext, C is ciphertext, E represents encryption operation, and D represents decryption operation;
the method for data transmission comprises the following steps:
positioning a transmission port of a data transmission link, calculating the strength of a signal, acquiring noise information carried by the signal during transmission, and taking the noise information as an additional transmission signal;
compensating the signal transmission frequency by the following formula:
Δf = f0 · t · cos(θ)
wherein Δf is the frequency compensation value at different transmission nodes in the network transmission environment, f0 is the carrier frequency of the network central node, t is the time of the forwarding node, and θ is the included angle between the moving direction of the transmission signal at the transmission node and the incident wave.
A safe community entrance and exit control terminal comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor executes the computer program to realize the steps of the safe community entrance and exit control method.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention sets an image acquisition module at the entrance and exit of the cell, acquires or processes three-dimensional real-time face data through the image acquisition module to obtain the three-dimensional real-time face data, encrypts and transmits the real-time face data to the face recognition module through the data transmission module, extracts the characteristics of the real-time face data through the face recognition module, and compares the characteristics with prestored data in a face library to judge whether the real-time face data belongs to the face library, thereby realizing the recognition of people entering and exiting the cell.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and together with the description serve to explain the principles of the invention.
Fig. 1 is a structural block diagram of a safe community entrance and exit management and control platform according to the present invention.
Fig. 2 is a schematic flow chart of a security community entrance and exit control method according to the present invention.
Description of the preferred embodiment
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not to be construed as limitations of the invention.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
Embodiments of the present invention and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Examples
As shown in fig. 1, a safe community access control platform includes an image acquisition module, a face recognition module and a data transmission module.
The image acquisition modules are arranged at the entrance/exit of each cell and are used to acquire real-time face data of people entering the cell; there are a plurality of image acquisition modules, one at the entrance/exit of each cell, so that a plurality of cells can be managed and controlled.
The face recognition module is arranged in the official server, stores a face library and recognizes the real-time face data transmitted by the image acquisition modules. The official server may be a police or other encrypted server, since face data must be kept secret and protected from leakage; a plurality of image acquisition modules may share one face recognition module, i.e. a 1-to-N arrangement (one recognition module serving N acquisition modules).
The data transmission module connects the image acquisition module and the face recognition module and performs encrypted data transmission between them; it sends and receives data and handles the data communication between the two modules.
The image acquisition module includes: at least one of a binocular stereo vision assembly, a 3D structured light assembly, or a laser ranging assembly.
The binocular stereoscopic vision component comprises 2 face cameras, the two face cameras are respectively arranged on two sides of the entrance and the exit and synchronously acquire images at the entrance;
the binocular stereoscopic vision component acquires images of the same object at different positions through a face camera, calculates the position deviation existing between corresponding points of the images according to the parallax principle, and acquires three-dimensional real-time face data information of the detected object;
and the 3D structured light component and the laser ranging component acquire a three-dimensional real-time stereo image and convert it to obtain three-dimensional real-time face data information.
The 3D structured light component obtains the three-dimensional real-time stereo image through a light reflector and a camera. Its working principle is that an infrared emitter on the camera projects light onto the face; the reflected surface-structure information of the face is captured to form a depth image, and the three-dimensional real-time face image is then constructed by triangulation and three-dimensional calculation.
Laser ranging has two modes: the pulse method and the phase method. The pulse method emits pulsed light at the measured object, receives the reflection back at the lens, and converts the elapsed time difference into distance, from which three-dimensional data can be acquired.
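Both acquisition principles above reduce to one-line formulas: the pulse method converts a round-trip time into distance as c·t/2, and the parallax principle gives depth as focal length × baseline / disparity. A sketch under those standard formulas (function names are illustrative):

```python
C = 299_792_458.0  # speed of light in m/s

def pulse_tof_distance(round_trip_time_s):
    # pulse method: light travels to the object and back,
    # so the one-way distance is c * t / 2
    return C * round_trip_time_s / 2.0

def disparity_depth(focal_px, baseline_m, disparity_px):
    # parallax principle for a binocular rig: depth is inversely
    # proportional to the position deviation (disparity) between
    # corresponding points in the two camera images
    return focal_px * baseline_m / disparity_px
```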
Examples
Based on the above safe community access control platform, as shown in fig. 2, this embodiment provides a safe community access control method, and the control method includes:
the method comprises the steps that firstly, real-time image data at an entrance and an exit of a cell are collected through an image collection module; and acquiring a three-dimensional real-time image.
Secondly, preprocessing the real-time image data to obtain real-time face data;
the method for preprocessing the real-time image data sequentially comprises point cloud denoising, hole filling and face cutting;
the method for denoising the point cloud is to obtain the original point cloud data of a real-time image through a point cloud library, wherein the point cloud library is an open source programming library in a cross-platform form and covers very rich cloud processing algorithms such as point cloud registration, cloud format conversion, point cloud reconstruction and the like. Noise points appearing in the point cloud are discrete points which deviate from the surface of the point cloud when scanning is carried out by using scanning equipment and influence the whole three-dimensional structure, and the discrete points need to be removed, and the specific method comprises the following steps:
traversing the original point cloud data, and calculating the distance tie value between each point of the point cloud and the set K neighborhood points;
calculating the standard deviation and the mean value corresponding to the average distance of all the point time, and setting the distance threshold
Figure SMS_29
Wherein
Figure SMS_30
Is the average of the average distances at all points in time,
Figure SMS_31
is a scaling factor determined by the value of K,
Figure SMS_32
standard deviation of the mean distance at all point times;
performing secondary traversal on the point cloud data, wherein the average distance of the filtered point time is larger than that of the filtered point time
Figure SMS_33
And (4) finishing point cloud denoising at all points.
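The two-pass scheme above matches the standard statistical outlier removal filter (as found in point cloud libraries such as PCL). A minimal pure-Python sketch, with K and the scale factor alpha left as free parameters:

```python
import math
import statistics

def statistical_outlier_removal(points, k=8, alpha=1.0):
    """Drop points whose mean k-NN distance exceeds mu + alpha * sigma."""
    # first traversal: average distance from each point to its k nearest neighbours
    mean_knn = []
    for i, p in enumerate(points):
        ds = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        mean_knn.append(sum(ds[:k]) / min(k, len(ds)))
    # statistics over all points define the distance threshold T = mu + alpha * sigma
    mu = statistics.fmean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    threshold = mu + alpha * sigma
    # second traversal: keep only the points at or below the threshold
    return [p for p, d in zip(points, mean_knn) if d <= threshold]
```

This brute-force version is O(n²); a real implementation would use a k-d tree for the neighbour search.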
The reason for hole filling is that individual points in pits can be lost during point cloud collection, leaving the real-time face model incomplete, so the holes in the point cloud image must be filled in real time.
Performing point cloud storage in a discrete point form, and acquiring real-time image data subjected to point cloud denoising;
forming a human face triangular model through a greedy projection algorithm of a point cloud library;
filling holes appearing in the face triangular model by approximating a topological structure of the missing region through a radial basis function algorithm;
the reason for face tailoring is that the collected point cloud model generally contains multi-region information such as ears, heads, shoulders, necks and necks, and by acquiring the three-dimensional face region, information interference caused by other regions to the face region can be effectively reduced, so that individual features are captured through a related algorithm. The face cutting is similar to the detection link in the two-dimensional face data processing, and redundant parts appearing in the model can be removed.
Determining the nose tip point coordinates in the face triangular model, and taking the nose tip as a central point; the key point of face cutting is to determine the position of the nose tip. The nose tip coordinates can be determined by the vertex coordinates, or by inverse normalization.
Setting a cutting area through geodesic distance or Euclidean distance;
and (4) cutting the human face three-dimensional model by adopting spherical neighbor cutting.
Thirdly, encrypting and transmitting real-time face data to a face recognition module;
the method for encrypting transmission comprises the following steps: the real-time face data is encrypted at the image acquisition module, and is decrypted at the face recognition module after being transmitted to the face recognition module through the Internet.
The encryption and decryption method comprises:
introducing three encryption keys k1, k2, k3, set at both the image acquisition module and the face recognition module, to encrypt the real-time face data and generate the corresponding data, whose length is fixed at 64 bits. During data transmission the sending end and the receiving end of the network environment agree on uniform keys in advance; the data is encrypted with the keys at the source of transmission and then carried to the terminal in ciphertext form over the network.
When the data reaches the end point, the keys are used again to decrypt the transmitted data and recover the original plaintext sent by the sending end.
The specific encryption algorithm is:
C = E_k3(D_k2(E_k1(P))),  P = D_k1(E_k2(D_k3(C)))
wherein P is plaintext, C is ciphertext, E represents encryption operation, and D represents decryption operation;
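The three-key pattern C = E(D(E(P))) is the encrypt-decrypt-encrypt structure familiar from Triple DES. The patent does not name the underlying block cipher, so the sketch below substitutes a toy invertible 64-bit operation (modular addition) purely to show the key structure; a real system would use an actual cipher:

```python
MASK = (1 << 64) - 1  # the data block is fixed at 64 bits

def E(key, block):
    # toy stand-in for the unspecified block cipher: modular addition
    return (block + key) & MASK

def D(key, block):
    # exact inverse of E
    return (block - key) & MASK

def encrypt(p, k1, k2, k3):
    # encrypt-decrypt-encrypt with three keys: C = E_k3(D_k2(E_k1(P)))
    return E(k3, D(k2, E(k1, p)))

def decrypt(c, k1, k2, k3):
    # inverse operations in reverse order: P = D_k1(E_k2(D_k3(C)))
    return D(k1, E(k2, D(k3, c)))
```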
In addition, in a network transmission environment the transmission accuracy of each interface differs, and since many influencing factors in the environment threaten data security, a frequency difference between the receiving end and the transmitting end can arise during data transmission. If this frequency difference is not handled in time, the data can be seriously leaked during network transmission, so frequency compensation is needed when transmitting data. The specific method is:
positioning a transmission port of a data transmission link, calculating the strength of a signal, acquiring noise information carried by the signal during transmission, and taking the noise information as an additional transmission signal;
the method has the advantages that the signal transmission frequency is compensated, the node can be ensured to realize the compensation of the channel transmission frequency to the greatest extent in the transmission process, the leakage problem of data is solved, the safety and the stability of information in a network are improved, the privacy information related to a user in the data is ensured not to be leaked, and meanwhile, the sender interface and the receiver interface still need to complete relative movement in the data transmission process. The compensation formula is as follows:
Figure SMS_38
wherein, in the step (A),
Figure SMS_39
is a netFrequency offset values at different transmission nodes in a network transmission environment,
Figure SMS_40
carrier frequency for the central node of the network, t time for the forwarding node,
Figure SMS_41
the included angle formed between the moving direction of the transmission signal on the transmission node and the incident wave.
Fourthly, extracting the features of the real-time face data through a face recognition module to obtain a face feature vector;
and acquiring an illumination measurement vector of real-time face data.
Considering that the appearance of the same object is similar under similar illumination distributions and differs more under more dissimilar illumination distributions, the proposed acquisition method is:
draw a unit sphere under the given illumination distribution and select several sampling points on it whose directions differ considerably; the values of these sampling points are closer under similar illumination distributions, and farther apart otherwise.
To ensure that the normal directions of the sampling points used to measure the illumination difference cover the front hemisphere with a certain spread, set an azimuth angle set Φ = {φ1, φ2, φ3} and a zenith angle set Θ = {θ1, θ2, θ3};
Combining every two elements in the azimuth angle set and the zenith angle set to obtain 9 normal sampling directions;
and forming 9-dimensional vectors by using the 9 normal sampling directions to obtain the illumination measurement vector.
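The 3 × 3 pairwise combination of azimuth and zenith angles can be sketched as below. The concrete angle values are given in the patent only as unrecoverable figures, so the sets here are assumptions chosen to cover the front hemisphere, and the Lambertian response max(0, n·l) is one plausible reading of the "values of the sampling points":

```python
import math
from itertools import product

# hypothetical angle sets (3 azimuths x 3 zeniths -> 9 directions)
AZIMUTHS = [-math.pi / 4, 0.0, math.pi / 4]
ZENITHS = [math.pi / 4, math.pi / 2, 3 * math.pi / 4]

def normal_directions():
    # combine every azimuth with every zenith to get 9 unit normals
    return [(math.sin(t) * math.cos(a),
             math.sin(t) * math.sin(a),
             math.cos(t))
            for a, t in product(AZIMUTHS, ZENITHS)]

def illumination_vector(light_dir):
    # Lambertian response of each sampled normal to a directional light;
    # similar light directions yield similar 9-dimensional vectors
    return [max(0.0, sum(n_i * l_i for n_i, l_i in zip(n, light_dir)))
            for n in normal_directions()]
```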
The method for extracting the features of the real-time face data comprises the following steps:
the method comprises the steps of constructing a training network based on ResNet-50, wherein the training network comprises a first module, a second module, a third module and a fourth module, the first module comprises 3 residual error structures, the second module comprises 4 residual error structures, the third module comprises 6 residual error structures, the fourth module comprises 3 residual error structures, the residual error structures comprise 3 convolutional layers, and the sizes of convolutional cores of the convolutional layers are 1 x 1, 3 x 3 and 1 x 1 respectively.
The training branches of the training network comprise a face recognition branch and an illumination processing branch;
parameter sharing of a face recognition branch and an illumination processing branch is achieved through a hard sharing mode, wherein a first module and a second module share parameters, and when the face recognition branch is executed, a third module and a fourth module learn face recognition task parameters; when the illumination processing branch is executed, the third module and the fourth module learn the parameters of the illumination processing task; during training, the output characteristics are firstly leveled, and then the final result is output through the full connection layer.
A dynamically updated task-weight scheme is adopted: the network adjusts the weights of the two branches (face recognition and illumination processing) according to its learning state. After the second module, the output features are flattened and L2-regularized, and the third fully connected layer is attached to learn the weight ratio of the two tasks; its input dimension is the flattened dimension of the features output by the last layer of the second module, and its output dimension is the number of tasks.
Constructing a first full connection layer, a second full connection layer and a third full connection layer, wherein the first full connection layer is used for outputting the result of the face recognition branch, the second full connection layer is used for outputting the result of the illumination processing branch, and the third full connection layer is used for learning the task weights of the face recognition branch and the illumination processing branch; the first full connection layer and the second full connection layer are connected behind the fourth module, and the third full connection layer is connected behind the second module;
determining a loss function of the face recognition branch:
$$L_{face} = -\sum_{n=1}^{N} y_n \log p_n$$
wherein N is the number of face categories, $y_n$ is the label of the input image on the nth class, and $p_n$ is the nth value of the softmax-processed output of the first fully connected layer;
determining the loss function of the illumination processing branch:
$$L_{light} = \sum_{i=1}^{d} \left(l_i - \hat{l}_i\right)^2$$
wherein $l$ is the illumination measurement vector of the image, $l_i$ is the ith component of $l$, d is the dimension of the illumination vector (from the previous step, d = 9), $\hat{l}$ is the normalized output of the second fully connected layer, and $\hat{l}_i$ is the ith component of $\hat{l}$;
determining a loss function of the training network:
$$L = w_1 L_{face} + w_2 L_{light}$$
wherein $w_1$ is the softmax-processed face recognition task weight output by the third fully connected layer, and $w_2$ is the softmax-processed illumination processing task weight output by the third fully connected layer.
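A minimal numeric sketch of the three losses described above (cross-entropy for the face recognition branch, squared error for the illumination branch, and their dynamically weighted sum); the label and prediction values in the usage note are made up for illustration:

```python
import math

def face_loss(labels, probs):
    # cross-entropy: -sum_n y_n * log(p_n), over one-hot labels y and softmax probs p
    return -sum(y * math.log(p) for y, p in zip(labels, probs) if y > 0)

def light_loss(l, l_hat):
    # squared error between the illumination measurement vector l
    # and the normalized prediction l_hat of the second fully connected layer
    return sum((a - b) ** 2 for a, b in zip(l, l_hat))

def total_loss(w1, w2, lf, ll):
    # weighted sum with the task weights learned by the third fully connected layer
    return w1 * lf + w2 * ll
```

For example, `face_loss([0, 1, 0], [0.2, 0.5, 0.3])` evaluates to `-log(0.5) ≈ 0.693`, and with equal task weights of 0.5 the combined loss is simply the average of the two branch losses.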
And fifthly, comparing the similarity of the face feature vector with the face feature vectors of all the persons in the face library, and determining whether the real-time face data belongs to the face library or not according to the similarity.
The main operation of the face matching stage is to compare the similarity between the face feature vector obtained by the feature extraction module and the face features of all persons in the face library. Classification requires estimating a similarity measure between samples, usually by computing a distance between them; distances commonly used for this purpose include the Euclidean distance, Manhattan distance, cosine distance and Mahalanobis distance. The comparison method adopted in this embodiment includes:
selecting the cosine distance as the similarity measure between samples. The cosine distance judges the similarity of two vectors by the cosine of the angle between them: the smaller the distance, the more similar the two faces; the larger the distance, the greater the difference between the two faces.
Acquiring the cosine value of the angle between the two face features being compared;
setting a cosine value threshold: if the cosine value is greater than the cosine value threshold (1.0 is selected in this embodiment), the real-time face data is determined to belong to the face library; if the cosine value is smaller than the cosine value threshold, the real-time face data is determined not to belong to the face library;
the face library comprises face feature vectors entered in advance, that is, face feature vectors are entered as required: information of community residents can be entered, and information of other personnel can also be entered.
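A sketch of the matching stage against a face library using cosine similarity; the library contents and the 0.5 threshold in the usage note are hypothetical examples (the embodiment itself states a threshold of 1.0):

```python
import math

def cosine_similarity(a, b):
    # cosine of the angle between two feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match(face_vec, library, threshold):
    """Compare against every enrolled feature vector; return the best identity
    if its similarity exceeds the threshold, otherwise (None, best_similarity)."""
    best_id, best_sim = None, -1.0
    for person_id, ref_vec in library.items():
        sim = cosine_similarity(face_vec, ref_vec)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return (best_id, best_sim) if best_sim > threshold else (None, best_sim)
```

For instance, with `library = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}`, the probe `[0.9, 0.1]` matches `"alice"` at threshold 0.5, while a probe equidistant from both entries fails a stricter threshold.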
Examples
A safe community access control terminal comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor executes the computer program to realize the safe community access control method.
The memory may be used to store software programs and modules, and the processor may execute various functional applications of the terminal and data processing by operating the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an execution program required for at least one function, and the like.
The data storage area may store data created according to the use of the terminal, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
A computer readable storage medium stores a computer program, and when executed by a processor, the computer program implements the steps of the method for managing and controlling the entrance and exit of the safe community.
Without loss of generality, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory and mass storage devices described above may be collectively referred to as memory.
In the description of the present specification, reference to the description of "one embodiment/mode", "some embodiments/modes", "example", "specific example", or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment/mode or example is included in at least one embodiment/mode or example of the present application. In this specification, the schematic representations of the terms used above are not necessarily intended to be the same embodiment/mode or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/modes or examples. Furthermore, the various embodiments/aspects or examples and features of the various embodiments/aspects or examples described in this specification can be combined and combined by one skilled in the art without conflicting therewith.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
It will be understood by those skilled in the art that the foregoing embodiments are merely for clarity of description and are not intended to limit the scope of the invention. It will be apparent to those skilled in the art that other variations or modifications may be made on the above invention and still be within the scope of the invention.

Claims (10)

1. A safe community entrance and exit management and control platform, characterized by comprising:
the image acquisition module is arranged at the entrance and exit of each cell and is used for acquiring real-time face data of people entering the cell;
the face recognition module is arranged in the official server, stores a face library and recognizes real-time face data transmitted by the image acquisition module;
the data transmission module is connected with the image acquisition module and the face recognition module and used for carrying out data encryption transmission between the image acquisition module and the face recognition module;
the image acquisition module includes: at least one of a binocular stereo vision assembly, a 3D structured light assembly or a laser ranging assembly;
the binocular stereoscopic vision component comprises 2 face cameras, the two face cameras are respectively arranged on two sides of the entrance and the exit and synchronously acquire images at the entrance;
the binocular stereoscopic vision component acquires images of the same object at different positions through a face camera, calculates the position deviation existing between corresponding points of the images according to the parallax principle, and acquires three-dimensional real-time face data information of the detected object;
and the 3D structured light component and the laser ranging component acquire real-time stereo images and convert them to obtain three-dimensional real-time face data information.
2. A safe community entrance and exit management and control method, characterized in that, based on the safe community entrance and exit management and control platform according to claim 1, the control method comprises the following steps:
acquiring real-time image data at an entrance and an exit of a cell through an image acquisition module;
preprocessing the real-time image data to obtain real-time face data;
encrypting and transmitting real-time face data to a face recognition module;
performing feature extraction on real-time face data through a face recognition module to obtain a face feature vector;
and comparing the similarity of the face feature vector with the face feature vectors of all the persons in the face library, and determining whether the real-time face data belongs to the face library or not according to the similarity.
3. The method for managing and controlling the entrance and the exit of the safe community according to claim 2, wherein the method for preprocessing the real-time image data sequentially comprises point cloud denoising, hole filling and face clipping;
the method for denoising the point cloud comprises the following steps:
traversing the original point cloud data, and calculating the average distance between each point of the point cloud and its K set neighborhood points;
calculating the mean and standard deviation corresponding to the average distances of all points, and setting the distance threshold
$$d_{th} = \mu + \alpha\sigma$$
wherein $\mu$ is the mean of the average distances over all points, $\alpha$ is a scaling factor determined by the value of K, and $\sigma$ is the standard deviation of the average distances over all points;
performing a second traversal of the point cloud data, filtering out the points whose average distance is greater than $d_{th}$, and finishing the point cloud denoising;
the hole filling method comprises the following steps:
performing point cloud storage in a discrete point form, and acquiring real-time image data subjected to point cloud denoising;
forming a face triangular model through a greedy projection algorithm of a point cloud library;
filling holes appearing in the face triangular model by approximating a topological structure of the missing region through a radial basis function algorithm;
the face cutting method comprises the following steps:
determining the nose tip point coordinates in the face triangular model, and taking the nose tip as a central point;
setting a cutting area through geodesic distance or Euclidean distance;
and (4) cutting the human face three-dimensional model by adopting spherical neighbor cutting.
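For illustration only (not part of the claims), the two-traversal denoising of claim 3 corresponds to a standard statistical outlier removal filter; a brute-force sketch, with the K value and scaling factor in the usage note being hypothetical:

```python
import math

def statistical_outlier_removal(points, k, alpha):
    """Remove points whose mean distance to their K nearest neighbours
    exceeds mu + alpha * sigma over all such mean distances."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    # first traversal: mean distance from each point to its K nearest neighbours
    mean_dists = []
    for i, p in enumerate(points):
        ds = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        mean_dists.append(sum(ds[:k]) / k)

    # distance threshold from the mean and standard deviation of those distances
    mu = sum(mean_dists) / len(mean_dists)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in mean_dists) / len(mean_dists))
    threshold = mu + alpha * sigma

    # second traversal: keep only the points under the threshold
    return [p for p, d in zip(points, mean_dists) if d <= threshold]
```

With a tight cluster plus one distant point and, say, `k=2, alpha=1.0`, the distant point is filtered out while the cluster survives. A production system would use a KD-tree (as in the Point Cloud Library) instead of this O(n²) search.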
4. The safe community entrance and exit management and control method according to claim 3, wherein the method for extracting features of real-time face data comprises the following steps:
acquiring an illumination measurement vector of real-time face data;
constructing a training network based on ResNet-50, wherein the training network comprises a first module, a second module, a third module and a fourth module, and a training branch of the training network comprises a face recognition branch and an illumination processing branch;
parameter sharing of the face recognition branch and the illumination processing branch is achieved through a hard sharing mode, wherein the first module and the second module share parameters, and when the face recognition branch is executed, the third module and the fourth module learn face recognition task parameters; when the illumination processing branch is executed, the third module and the fourth module learn the parameters of the illumination processing task;
constructing a first full connection layer, a second full connection layer and a third full connection layer, wherein the first full connection layer is used for outputting the result of the face recognition branch, the second full connection layer is used for outputting the result of the illumination processing branch, and the third full connection layer is used for learning the task weights of the face recognition branch and the illumination processing branch; the first full-connection layer and the second full-connection layer are connected behind the fourth module, and the third full-connection layer is connected behind the second module;
determining a loss function of the face recognition branch:
$$L_{face} = -\sum_{n=1}^{N} y_n \log p_n$$
wherein N is the number of face categories, $y_n$ is the label of the input image on the nth class, and $p_n$ is the nth value of the softmax-processed output of the first fully connected layer;
determining the loss function of the illumination processing branch:
$$L_{light} = \sum_{i=1}^{d} \left(l_i - \hat{l}_i\right)^2$$
wherein $l$ is the illumination measurement vector of the image, $l_i$ is the ith component of $l$, d is the dimension of the illumination vector, $\hat{l}$ is the normalized output of the second fully connected layer, and $\hat{l}_i$ is the ith component of $\hat{l}$;
determining a loss function of the training network:
$$L = w_1 L_{face} + w_2 L_{light}$$
wherein $w_1$ is the softmax-processed face recognition task weight output by the third fully connected layer, and $w_2$ is the softmax-processed illumination processing task weight output by the third fully connected layer.
5. The method for managing and controlling the entrance and exit of the safe community according to claim 4, wherein the method for obtaining the illumination measurement vector comprises:
drawing a unit sphere by using given illumination distribution, and selecting a plurality of sampling points from the unit sphere;
setting an azimuth angle set $\Phi$ and a zenith angle set $\Theta$;
combining every two elements of the azimuth angle set and the zenith angle set to obtain 9 normal sampling directions;
and forming a 9-dimensional vector from the 9 normal sampling directions to obtain the illumination measurement vector.
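For illustration only, a sketch of building the 9 normal sampling directions and a 9-dimensional illumination measurement; the angle sets below are hypothetical stand-ins, since the patent's exact values are given as images, and the clamped-dot-product measurement is an assumed shading model:

```python
import math

# hypothetical angle sets: three azimuths and three zenith angles
AZIMUTHS = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
ZENITHS = [math.pi / 6, math.pi / 3, math.pi / 2]

def sampling_directions():
    """Combine each azimuth with each zenith: 3 x 3 = 9 unit normal directions."""
    dirs = []
    for phi in AZIMUTHS:
        for theta in ZENITHS:
            dirs.append((math.sin(theta) * math.cos(phi),
                         math.sin(theta) * math.sin(phi),
                         math.cos(theta)))
    return dirs

def illumination_vector(light):
    """9-D illumination measurement: clamped dot product of a light direction
    with each of the 9 sampling normals (an illustrative measurement choice)."""
    return [max(0.0, sum(n_i * l_i for n_i, l_i in zip(n, light)))
            for n in sampling_directions()]
```

For an overhead light `(0, 0, 1)`, each component reduces to `max(0, cos(theta))`, so the vector directly reflects how steeply each sampled normal faces the light.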
6. The method according to claim 4, wherein the first module comprises 3 residual structures, the second module comprises 4 residual structures, the third module comprises 6 residual structures, and the fourth module comprises 3 residual structures; each residual structure comprises 3 convolutional layers, the convolution kernel sizes of which are 1 × 1, 3 × 3 and 1 × 1, respectively.
7. The method for managing and controlling the entrance and the exit of the safe community according to claim 2, wherein the method for comparing the similarity comprises the following steps:
selecting cosine distance as similarity measurement distance between samples;
acquiring the cosine value of the angle between the two face features being compared;
setting a cosine value threshold, and if the cosine value is greater than the cosine value threshold, proving that the real-time face data belongs to a face library; if the cosine value is smaller than the cosine value threshold value, the real-time face data is proved not to belong to a face library;
the face library comprises face feature vectors which are input in advance.
8. The method for managing and controlling the entrance and exit of the safe community according to claim 2, wherein the method for encrypting transmission comprises: the real-time face data is encrypted at the image acquisition module, and is decrypted at the face recognition module after being transmitted to the face recognition module through the Internet.
9. The method for managing and controlling the entrance and exit of the safe community according to claim 8, wherein the encryption and decryption method comprises:
setting triple encryption keys $K_1$, $K_2$ and $K_3$ at the image acquisition module and the face recognition module;
the encryption algorithm is as follows:
$$C = E_{K_3}\bigl(D_{K_2}\bigl(E_{K_1}(P)\bigr)\bigr)$$
wherein P is the plaintext, C is the ciphertext, E represents an encryption operation, and D represents a decryption operation;
the method for data transmission comprises the following steps:
positioning a transmission port of a data transmission link, calculating the strength of a signal, acquiring noise information carried by the signal during transmission, and taking the noise information as an additional transmission signal;
compensating the signal transmission frequency by the following formula:
$$\Delta f = f_0 \, t \cos\theta$$
wherein $\Delta f$ is the frequency compensation value at different transmission nodes in the network transmission environment, $f_0$ is the carrier frequency of the network central node, t is the time of the forwarding node, and $\theta$ is the included angle formed between the moving direction of the transmission signal at the transmission node and the incident wave.
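For illustration only, the triple-key encrypt-decrypt-encrypt composition of claim 9 can be sketched with a toy XOR cipher standing in for the block cipher (in real triple DES the E and D operations are distinct; here XOR serves as both, so this shows only the key composition, not real security):

```python
def xor_cipher(data, key):
    """Toy symmetric cipher: XOR each byte with a repeating key.
    For XOR, encryption and decryption are the same operation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_ede(plaintext, k1, k2, k3):
    # C = E_K3(D_K2(E_K1(P))) -- encrypt-decrypt-encrypt with three keys
    return xor_cipher(xor_cipher(xor_cipher(plaintext, k1), k2), k3)

def decrypt_ede(ciphertext, k1, k2, k3):
    # P = D_K1(E_K2(D_K3(C))) -- the inverse composition, applied in reverse order
    return xor_cipher(xor_cipher(xor_cipher(ciphertext, k3), k2), k1)
```

In the platform's flow, `encrypt_ede` would run at the image acquisition module before transmission over the Internet and `decrypt_ede` at the face recognition module, with the three keys provisioned at both ends in advance.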
10. A safe community entrance and exit control terminal, comprising a memory, a processor and a computer program stored in the memory and operable on the processor, wherein the processor implements the steps of a safe community entrance and exit control method according to any one of claims 2 to 9 when executing the computer program.
CN202310055847.9A 2023-01-17 2023-01-17 Safety community access control platform, control method and control terminal Pending CN115830762A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310055847.9A CN115830762A (en) 2023-01-17 2023-01-17 Safety community access control platform, control method and control terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310055847.9A CN115830762A (en) 2023-01-17 2023-01-17 Safety community access control platform, control method and control terminal

Publications (1)

Publication Number Publication Date
CN115830762A true CN115830762A (en) 2023-03-21

Family

ID=85520724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310055847.9A Pending CN115830762A (en) 2023-01-17 2023-01-17 Safety community access control platform, control method and control terminal

Country Status (1)

Country Link
CN (1) CN115830762A (en)

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102323817A (en) * 2011-06-07 2012-01-18 上海大学 Service robot control platform system and multimode intelligent interaction and intelligent behavior realizing method thereof
CN103489011A (en) * 2013-09-16 2014-01-01 广东工业大学 Three-dimensional face identification method with topology robustness
US20140003654A1 (en) * 2012-06-29 2014-01-02 Nokia Corporation Method and apparatus for identifying line-of-sight and related objects of subjects in images and videos
CN103985172A (en) * 2014-05-14 2014-08-13 南京国安光电科技有限公司 An access control system based on three-dimensional face identification
CN104143080A (en) * 2014-05-21 2014-11-12 深圳市唯特视科技有限公司 Three-dimensional face recognition device and method based on three-dimensional point cloud
CN105354555A (en) * 2015-11-17 2016-02-24 南京航空航天大学 Probabilistic graphical model-based three-dimensional face recognition method
US20160196467A1 (en) * 2015-01-07 2016-07-07 Shenzhen Weiteshi Technology Co. Ltd. Three-Dimensional Face Recognition Device Based on Three Dimensional Point Cloud and Three-Dimensional Face Recognition Method Based on Three-Dimensional Point Cloud
CN109087422A (en) * 2018-08-02 2018-12-25 四川三思德科技有限公司 A kind of retail shop's network system, access control management method, device and Cloud Server
CN109657592A (en) * 2018-12-12 2019-04-19 大连理工大学 A kind of face identification system and method for intelligent excavator
CN110070647A (en) * 2019-03-21 2019-07-30 深圳壹账通智能科技有限公司 A kind of intelligent community management method and device thereof based on recognition of face
CN110164007A (en) * 2019-05-21 2019-08-23 一石数字技术成都有限公司 A kind of access control system of identity-based evidence and facial image incidence relation
CN110175529A (en) * 2019-04-30 2019-08-27 东南大学 A kind of three-dimensional face features' independent positioning method based on noise reduction autoencoder network
CN110688947A (en) * 2019-09-26 2020-01-14 西安知象光电科技有限公司 Method for synchronously realizing human face three-dimensional point cloud feature point positioning and human face segmentation
CN110910551A (en) * 2019-10-25 2020-03-24 深圳奥比中光科技有限公司 3D face recognition access control system and 3D face recognition-based access control method
CN110992546A (en) * 2019-12-02 2020-04-10 杭州磊盛智能科技有限公司 Face recognition gate and anti-trailing method thereof
CN112257492A (en) * 2020-08-27 2021-01-22 重庆科技学院 Real-time intrusion detection and tracking method for multiple cameras
CN112348139A (en) * 2021-01-08 2021-02-09 山东欧龙电子科技有限公司 Tool management operation table, system and method based on RFID (radio frequency identification device) label identification
CN112350818A (en) * 2020-11-04 2021-02-09 西南交通大学 High-speed chaotic secure transmission method based on coherent detection
WO2021174125A1 (en) * 2020-02-28 2021-09-02 Aurora Solar Inc. Automated three-dimensional building model estimation
CN113835074A (en) * 2021-08-04 2021-12-24 南京常格科技发展有限公司 People flow dynamic monitoring method based on millimeter wave radar
CN114038035A (en) * 2021-11-05 2022-02-11 赵鑫 Artificial intelligence recognition device based on big data
US20220148184A1 (en) * 2020-11-10 2022-05-12 Here Global B.V. Method, apparatus, and system using a machine learning model to segment planar regions
CN115499621A (en) * 2022-08-22 2022-12-20 武汉辰因科技有限公司 Infrared image real-time processing system
CN115588222A (en) * 2022-10-09 2023-01-10 杭州指安科技股份有限公司 Door lock-based data security face recognition system and method


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ZHANG Guangxi; TANG Wen; WAN Taoruan; XUE Tao: "Design of an anti-noise point cloud recognition network based on deep learning" *
ZHANG Ningxian: "Research on 3D real-time face recognition technology based on deep learning" *
LI Zhibin; XIA Kun; ZHOU Yixuan; YANG Yong; WU Wenfeng: "Research on 3D image reconstruction based on laser holographic projection" *
LI Wen; LIU Yanli; XING Guanyu: "Illumination analysis for deep face recognition" *
LI Weiyu: "A DES-algorithm-based automatic encrypted transmission method for network data" *

Similar Documents

Publication Publication Date Title
Islam et al. BHMUS: Blockchain based secure outdoor health monitoring scheme using UAV in smart city
CN100414974C (en) Image capturing system
US10740964B2 (en) Three-dimensional environment modeling based on a multi-camera convolver system
US20220174039A1 (en) Systems and methods of physical infrastructure and information technology infrastructure security
Chiou et al. Zero-shot multi-view indoor localization via graph location networks
CN105893988A (en) Iris acquisition method and terminal thereof
CN111583485A (en) Community access control system, access control method and device, access control unit and medium
Zhang et al. Cloak of invisibility: Privacy-friendly photo capturing and sharing system
CN106960453B (en) Photograph taking fixing by gross bearings method and device
CN115588222A (en) Door lock-based data security face recognition system and method
CN112865958A (en) Privacy protection system and method for searching target through Internet of things camera
CN115830762A (en) Safety community access control platform, control method and control terminal
CN109856979B (en) Environment adjusting method, system, terminal and medium
CN111832346B (en) Face recognition method, device, electronic equipment and readable storage medium
Liu et al. Study on multi-view video based on IOT and its application in intelligent security system
KR20220037027A (en) System and method for monitoring the ground using hybrid unmanned airship
CN114493594B (en) Ocean data sharing method, system and medium based on blockchain and federal learning
CN106203047A (en) A kind of movable storage device with identification verification function
CN113297176B (en) Database access method based on Internet of things
CN111783594B (en) Alarm method and device and electronic equipment
Liu et al. Lost and Found! associating target persons in camera surveillance footage with smartphone identifiers
CN114491465A (en) Credible user identity authentication method based on RFID
Xiong et al. Privacy-Preserving Outsourcing Learning for Connected Autonomous Vehicles: Challenges, Solutions and Perspectives
CN109145772B (en) Data processing method and device, computer readable storage medium and electronic equipment
Singh et al. An unorthodox security framework using adapted blockchain architecture for Internet of drones

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230321