CN117218783A - Internet of things safety management system and method - Google Patents

Internet of things safety management system and method

Info

Publication number
CN117218783A
Authority
CN
China
Prior art keywords
image
face
feature
interaction
face detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311179407.0A
Other languages
Chinese (zh)
Inventor
刘超
肖智卿
许多
熊慧
周柏魁
梁文聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Yunbai Technology Co ltd
Original Assignee
Guangdong Yunbai Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Yunbai Technology Co ltd filed Critical Guangdong Yunbai Technology Co ltd
Priority to CN202311179407.0A priority Critical patent/CN117218783A/en
Publication of CN117218783A publication Critical patent/CN117218783A/en
Pending legal-status Critical Current


Abstract

The application discloses an Internet of Things security management system and method. A camera collects a face detection image of an object to be identified, and an image processing and analysis algorithm is introduced at the back end to compare and analyze the features of the face detection image against face reference images with predetermined authority, thereby verifying a worker's identity. In this way, unauthorized persons can be prevented from entering dangerous areas, reducing illegal intrusion and theft. The system can also generate early-warning prompts based on the face recognition result, discovering abnormal situations in time so that corresponding measures can be taken. In addition, the system can provide personnel holding different predetermined authorities with different path plans, ensuring that they reach their destinations along the specified paths. This keeps workers from straying through dangerous areas and reduces accidents, while the path-planning function also improves working efficiency and reduces wasted time.

Description

Internet of things safety management system and method
Technical Field
The application relates to the field of the Internet of Things, and in particular to an Internet of Things security management system and method.
Background
With the development of society, information technology has advanced rapidly, and the Internet of Things, as an important component of the new generation of information technology, plays an important role in the information age. Coal mines are high-risk working environments with many potential safety hazards and risks, such as gas explosions, mine collapses, and fires. Ensuring the safety of coal mine personnel is therefore an urgent and important task.
However, conventional Internet of Things security management systems typically rely on manual identity verification and path planning. Manual operations are prone to subjectivity and lapses of attention, leading to inaccurate identity verification and path planning, and they also increase workload and time costs.
In addition, some existing identity verification systems that rely on machine comparison of face images often exhibit delays in identity verification and early warning, and struggle with complex coal mining scenes. A coal mine may contain a large number of workers and complex work areas, and current systems find it difficult to quickly and accurately identify and distinguish different job types and authority levels, which can lead to misjudgments and erroneous path planning, increasing the risk to workers.
Accordingly, an optimized internet of things security management system is desired.
Disclosure of Invention
The present application has been made to solve the above-mentioned technical problems. The embodiments of the application provide an Internet of Things security management system and method: a camera collects a face detection image of an object to be identified, and an image processing and analysis algorithm is introduced at the back end to compare and analyze the features of the face detection image against face reference images with predetermined authority, thereby verifying a worker's identity. In this way, unauthorized persons can be prevented from entering dangerous areas, reducing illegal intrusion and theft. The system can also generate early-warning prompts based on the face recognition result, discovering abnormal situations in time so that corresponding measures can be taken. In addition, the system can provide personnel with different predetermined authorities with different path plans, ensuring that they reach their destinations along the specified paths. This keeps workers from straying through dangerous areas and reduces accidents, while the path-planning function also improves working efficiency and reduces wasted time.
According to one aspect of the present application, there is provided an internet of things security management system, comprising:
the face detection image acquisition module is used for acquiring a face detection image of an object to be identified acquired by the camera;
the preset authority reference image acquisition module is used for acquiring a face reference image with preset authority from the database;
the face image feature interaction analysis module is used for performing image feature interaction association analysis on the face detection image and the face reference image to obtain interaction features between the face images;
and the early warning and path planning module is used for generating an early warning prompt and/or a path planning chart based on the interaction characteristics between the face images.
According to another aspect of the present application, there is provided a security management method for the internet of things, including:
acquiring a face detection image of an object to be identified acquired by a camera;
acquiring a face reference image with a preset authority from a database;
performing image feature interaction association analysis on the face detection image and the face reference image to obtain interaction features between the face images;
and generating an early warning prompt and/or a path planning chart based on the interaction characteristics between the face images.
Compared with the prior art, the Internet of Things security management system and method provided by the application collect a face detection image of the object to be identified through a camera, and introduce an image processing and analysis algorithm at the back end to compare and analyze the features of the face detection image against face reference images with predetermined authority, thereby verifying workers' identities. In this way, unauthorized persons can be prevented from entering dangerous areas, reducing illegal intrusion and theft. The system can also generate early-warning prompts based on the face recognition result, discovering abnormal situations in time so that corresponding measures can be taken. In addition, the system can provide personnel with different predetermined authorities with different path plans, ensuring that they reach their destinations along the specified paths, keeping workers from straying through dangerous areas and reducing accidents, while the path-planning function also improves working efficiency and reduces wasted time.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following detailed description of embodiments of the present application with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application, are incorporated in and constitute a part of this specification, and serve to illustrate the application together with its embodiments without limiting it. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a block diagram of an Internet of things security management system according to an embodiment of the application;
FIG. 2 is a system architecture diagram of an Internet of things security management system according to an embodiment of the application;
FIG. 3 is a block diagram of a facial image feature interaction analysis module in an Internet of things security management system according to an embodiment of the application;
FIG. 4 is a block diagram of an early warning and path planning module in an Internet of things security management system according to an embodiment of the application;
fig. 5 is a flowchart of an internet of things security management method according to an embodiment of the present application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
As used in the specification and in the claims, the terms "a", "an", "the" and/or "said" do not refer specifically to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
A flowchart is used in the present application to describe the operations performed by a system according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in exactly this order. Rather, the various steps may be processed in reverse order or simultaneously, as needed. Also, other operations may be added to or removed from these processes.
Traditional Internet of Things security management systems typically rely on manual identity verification and path planning. Manual operations are prone to subjectivity and lapses of attention, leading to inaccurate identity verification and path planning, and they also increase workload and time costs. In addition, some existing identity verification systems that rely on machine comparison of face images often exhibit delays in identity verification and early warning, and struggle with complex coal mining scenes. A coal mine may contain a large number of workers and complex work areas, and current systems find it difficult to quickly and accurately identify and distinguish different job types and authority levels, which can lead to misjudgments and erroneous path planning, increasing the risk to workers. Accordingly, an optimized Internet of Things security management system is desired.
In the technical scheme of the application, the safety management system of the Internet of things is provided. Fig. 1 is a block diagram of an internet of things security management system according to an embodiment of the present application. Fig. 2 is a system architecture diagram of an internet of things security management system according to an embodiment of the present application. As shown in fig. 1 and 2, the internet of things security management system 300 according to an embodiment of the present application includes: a face detection image acquisition module 310, configured to acquire a face detection image of an object to be identified acquired by a camera; a predetermined authority reference image acquisition module 320, configured to acquire a face reference image having a predetermined authority from a database; the face image feature interaction analysis module 330 is configured to perform image feature interaction association analysis on the face detection image and the face reference image to obtain interaction features between face images; and the early warning and path planning module 340 is configured to generate an early warning prompt and/or a path planning chart based on the interaction characteristics between the face images.
In particular, the face detection image acquisition module 310 and the predetermined authority reference image acquisition module 320 are configured to acquire a face detection image of the object to be identified captured by a camera, and to acquire face reference images with predetermined authority from the database. In the technical scheme of the application, the camera collects the face detection image of the object to be identified, and an image processing and analysis algorithm is introduced at the back end to compare and analyze the features of the face detection image against the face reference images with predetermined authority, so as to verify the worker's identity. Therefore, a face detection image of the object to be recognized is first acquired by the camera, and the face reference images with predetermined authority are acquired from the database.
In particular, the face image feature interaction analysis module 330 is configured to perform image feature interaction association analysis on the face detection image and the face reference image to obtain the interaction features between the face images. In one specific example of the present application, as shown in fig. 3, the face image feature interaction analysis module 330 includes: a face detection image enhancement unit 331, configured to perform bilateral filtering on the face detection image to obtain an enhanced face detection image; a face image feature extraction unit 332, configured to pass the enhanced face detection image and the face reference image through a dual detection network model to obtain a face detection feature map and a face reference feature map; and a global feature interaction analysis unit 333, configured to perform full-perception interactive association coding on the face detection feature map and the face reference feature map to obtain the interaction features between the face images.
Specifically, the face detection image enhancement unit 331 is configured to perform bilateral filtering on the face detection image to obtain an enhanced face detection image. Various types of noise may exist in the face detection image during face detection, such as Gaussian noise and salt-and-pepper noise. These noises interfere with the accuracy of the face detection algorithm, causing false detections or missed detections. Bilateral filtering is a nonlinear filtering method that removes noise while preserving edge information, effectively improving image quality. Therefore, in the technical scheme of the application, bilateral filtering is applied to the face detection image to obtain the enhanced face detection image. By bilaterally filtering the original image, image noise can be removed while the details of the image are preserved. Moreover, bilateral filtering is edge-preserving: it retains the edge information in the image and enhances edge contrast. In face detection, a face generally has distinct edge characteristics; enhancing the edge contrast makes the face clearer and more prominent, thereby improving the performance of the face detection algorithm.
Notably, bilateral filtering is an image filtering technique that smooths the image while maintaining edge sharpness. Unlike traditional linear filtering methods (such as mean filtering and Gaussian filtering), bilateral filtering considers both the spatial distance between pixels and the similarity of pixel values, so that the detail information of the image is preserved more accurately. In bilateral filtering, the new value of each pixel is a weighted average of its surrounding pixels, where the weights are determined by two factors. The spatial distance weight reflects the spatial distance between pixels: the closer the distance, the greater the weight. This weight usually employs a Gaussian function to attenuate the effect of distance, so that pixels closer to the target pixel contribute more to the smoothing result. The pixel value similarity weight reflects the similarity between pixel values: the more similar the values, the greater the weight. This weight typically uses a Gaussian function of the gray-value difference, so that similar pixels contribute more to the smoothed result. The final weight of each pixel is the product of the spatial distance weight and the pixel value similarity weight. The value of each pixel is multiplied by its weight, the weighted values are summed over all pixels in the window, and the result is normalized to obtain the smoothed image.
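For reference, the weighting just described corresponds to the standard bilateral filter, which can be written compactly as follows (a textbook formulation, not quoted from the patent):

$$I^{\mathrm{filtered}}(p)=\frac{1}{W_p}\sum_{q\in S}G_{\sigma_s}\!\left(\lVert p-q\rVert\right)\,G_{\sigma_r}\!\left(\lvert I(p)-I(q)\rvert\right)\,I(q),\qquad W_p=\sum_{q\in S}G_{\sigma_s}\!\left(\lVert p-q\rVert\right)\,G_{\sigma_r}\!\left(\lvert I(p)-I(q)\rvert\right)$$

where $S$ is the filter window around pixel $p$, $G_{\sigma_s}$ is the spatial Gaussian (the spatial distance weight), $G_{\sigma_r}$ is the range Gaussian over intensity differences (the pixel value similarity weight), and $W_p$ is the normalization factor.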
Accordingly, in one possible implementation, the face detection image may be subjected to bilateral filtering to obtain an enhanced face detection image as follows: acquire the face detection image to be processed; determine the parameters of the bilateral filter, including the filter size, the standard deviation of the spatial distance weight, and the standard deviation of the pixel value similarity weight; then apply the bilateral filter to the face detection image. Specifically: traverse each pixel in the image; for the current pixel, determine the filter window, i.e., the extent of the surrounding pixels; compute the spatial distance weight and the pixel value similarity weight between the current pixel and each surrounding pixel; multiply the two weights to obtain the final weight of each surrounding pixel; multiply each pixel value by its final weight and sum over the window; normalize the result to obtain the enhanced value of the current pixel; repeat these steps until all pixels have been traversed; finally, output the image formed by the bilaterally filtered pixel values as the enhanced face detection image.
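A minimal sketch of this procedure using OpenCV's built-in bilateral filter follows; the parameter values (d=9, sigmaColor=75, sigmaSpace=75) are illustrative assumptions to be tuned per deployment.

```python
import cv2

def enhance_face_image(image_path: str):
    """Bilateral filtering of a face detection image, as described above."""
    image = cv2.imread(image_path)  # face detection image to be processed
    if image is None:
        raise FileNotFoundError(image_path)
    # d: neighborhood diameter; sigmaColor: std-dev of the pixel value
    # similarity weight; sigmaSpace: std-dev of the spatial distance weight.
    enhanced = cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)
    return enhanced
```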
Specifically, the face image feature extraction unit 332 is configured to pass the enhanced face detection image and the face reference image through a dual detection network model to obtain a face detection feature map and a face reference feature map. In the technical solution of the application, feature mining of the enhanced face detection image and of the face reference image is performed separately using convolutional neural network models, which excel at implicit feature extraction from images. In particular, miners' faces may be highly similar, making them difficult to distinguish and recognize when comparing face image features for identity recognition. Therefore, to further improve the accuracy of identity verification of the object to be identified and thus enable more accurate early warning and path planning, the enhanced face detection image and the face reference image are processed by a dual detection network model comprising a first image encoder and a second image encoder, where the two encoders have the same network structure. Extracting features from the two images with encoders of identical structure allows the model to mine feature information whose differences between the two images are subtle at the image source-domain end, so that face recognition and identity verification of the object to be identified become more accurate. Specifically, each layer of the first image encoder performs, in its forward pass, convolution processing, pooling based on a local feature matrix, and nonlinear activation on the enhanced face detection image, so that the last layer of the first image encoder outputs an initial face detection feature map; the initial face detection feature map is then input into the spatial attention layer of the first image encoder to obtain the face detection feature map. Likewise, each layer of the second image encoder performs convolution processing, pooling based on a local feature matrix, and nonlinear activation on the face reference image in its forward pass, so that the last layer of the second image encoder outputs an initial face reference feature map; the initial face reference feature map is then input into the spatial attention layer of the second image encoder to obtain the face reference feature map.
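For illustration, a minimal PyTorch sketch of such a dual detection network is given below, assuming a small convolutional backbone; the layer sizes, channel counts, and the exact form of the spatial attention layer are illustrative assumptions rather than the patent's concrete architecture.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Per-location attention over a feature map (one assumed variant)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        attn = torch.sigmoid(self.conv(x))  # per-location attention weights
        return x * attn                     # re-weight the feature map spatially

class ImageEncoder(nn.Module):
    """Conv + pooling + nonlinear activation layers, then spatial attention."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.attention = SpatialAttention(64)

    def forward(self, x):
        return self.attention(self.backbone(x))

class DualDetectionNetwork(nn.Module):
    """Two encoders with the same network structure, as the description requires."""
    def __init__(self):
        super().__init__()
        self.first_encoder = ImageEncoder()
        self.second_encoder = ImageEncoder()

    def forward(self, detection_img, reference_img):
        detection_map = self.first_encoder(detection_img)   # face detection feature map
        reference_map = self.second_encoder(reference_img)  # face reference feature map
        return detection_map, reference_map
```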
Specifically, the global feature interaction analysis unit 333 is configured to perform full-perception interactive association coding on the face detection feature map and the face reference feature map to obtain the interaction features between the face images. In one specific example of the present application, the global feature interaction analysis unit 333 includes: a face feature full-perception subunit, configured to pass the face detection feature map and the face reference feature map respectively through a full-perception module to obtain a face detection full-perception feature vector and a face reference full-perception feature vector; and a face full-perception feature interaction subunit, configured to perform attention-based feature interaction on the face detection full-perception feature vector and the face reference full-perception feature vector using an inter-feature attention layer to obtain an inter-face-image interaction feature vector as the interaction feature between the face images.
More specifically, the face feature full-perception subunit is configured to pass the face detection feature map and the face reference feature map through a full-perception module to obtain a face detection full-perception feature vector and a face reference full-perception feature vector. It should be appreciated that, when feature comparison between the face detection image and the face reference image is actually performed to verify the identity of the object to be identified, different regions of a face may carry associated features to different degrees, which plays an important role in face recognition; pure CNN methods, however, have difficulty learning explicit global and long-range semantic information interactions due to the inherent limitations of convolution operations. Therefore, in the technical scheme of the application, the face detection feature map and the face reference feature map are each further passed through a full-perception module to obtain a face detection full-perception feature vector and a face reference full-perception feature vector. In particular, the full-perception module performs global perception and integration on the face detection feature map and the face reference feature map respectively, extracting higher-level semantic information. The resulting full-perception feature vectors can thus better represent the features and attributes of the face to be recognized, providing a more accurate and useful feature representation for subsequent identity verification, early warning, and related functions.
Accordingly, in one possible implementation, the face detection feature map and the face reference feature map may be passed through a full-perception module to obtain the two full-perception feature vectors as follows: acquire the face detection feature map and the face reference feature map to be processed; determine the parameters of the full-perception module; apply the module to the face detection feature map by performing multi-layer convolution operations to extract features, performing global average pooling on the extracted features to obtain the global average of the feature map, and taking this global average as the face detection full-perception feature vector; apply the same procedure to the face reference feature map to obtain the face reference full-perception feature vector; finally, output the face detection full-perception feature vector and the face reference full-perception feature vector for subsequent extraction of interaction features between the face images or other related tasks.
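A minimal PyTorch sketch of such a full-perception module follows — extra convolutions followed by global average pooling that collapses each feature map into one value; the channel widths are assumptions.

```python
import torch
import torch.nn as nn

class FullPerceptionModule(nn.Module):
    """Multi-layer convolutions, then global average pooling to a vector."""
    def __init__(self, in_channels: int = 64, hidden: int = 128):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling

    def forward(self, feature_map):
        x = self.convs(feature_map)
        return self.pool(x).flatten(1)       # (batch, hidden) full-perception vector
```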
More specifically, the face full-perception feature interaction subunit is configured to perform attention-based feature interaction on the face detection full-perception feature vector and the face reference full-perception feature vector using the inter-feature attention layer to obtain the inter-face-image interaction feature vector as the interaction feature between the face images. That is, the inter-feature attention layer captures the association and interaction between the full-perception features of the face detection image and those of the face reference image. The goal of the traditional attention mechanism is to learn an attention weight matrix that assigns greater weight to important features and lesser weight to secondary ones, thereby selecting the information most critical to the current task; this approach focuses on weighting the importance of individual features while ignoring the dependencies between features. By contrast, through attention-based feature interaction, the inter-feature attention layer can capture the correlation and mutual influence between the full-perception features of the two images, learn the dependency relationships between different features, and interact and integrate the features according to those dependencies, yielding the inter-face-image interaction feature vector.
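The patent does not fix a concrete form for the inter-feature attention layer; the following sketch shows one plausible attention-based interaction between the two full-perception vectors, in which each feature dimension attends over the other vector's features. All layer shapes are assumptions.

```python
import torch
import torch.nn as nn

class InterFeatureAttention(nn.Module):
    """Attention-based interaction between two same-dimension feature vectors."""
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, detect_vec, ref_vec):
        q = self.query(detect_vec)  # (batch, dim), from the detection vector
        k = self.key(ref_vec)       # (batch, dim), from the reference vector
        v = self.value(ref_vec)
        # Pairwise feature-dependency scores: (batch, dim, dim).
        scores = torch.softmax(q.unsqueeze(2) * k.unsqueeze(1), dim=-1)
        # Integrate reference features according to the learned dependencies.
        interacted = torch.bmm(scores, v.unsqueeze(2)).squeeze(2)  # (batch, dim)
        # Inter-face-image interaction feature vector.
        return torch.cat([detect_vec, interacted], dim=1)
```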
It should be noted that, in other specific examples of the present application, full-perception interactive association coding of the face detection feature map and the face reference feature map may also be performed in other ways to obtain the interaction features between the face images, for example: acquire the face detection feature map and the face reference feature map to be processed; determine the parameters of the full-perception interactive association coding, including the feature map size, the number of channels, the number of encoder layers, and the convolution kernel size; construct a full-perception interactive association encoder that performs multi-layer convolution operations on the two feature maps separately to extract features, concatenates the two feature maps into a combined feature map, and applies further convolution operations to the combined feature map to extract higher-level features; extract the interaction features from the encoder output by performing global average pooling on the output feature map to obtain its global average and multiplying this global average by the feature map; finally, output the interaction features for subsequent inter-face-image interaction analysis or other related tasks.
It should be noted that, in other specific examples of the present application, the image feature interaction association analysis of the face detection image and the face reference image may also be performed in other ways to obtain the interaction features between the face images, for example: preprocess the face detection image and the face reference image, including resizing, grayscale or color conversion, and the like, to facilitate subsequent processing; run a face detection algorithm on the face detection image to determine the face positions and bounding boxes in the image; extract features from the face regions of both images using a face feature extraction algorithm (such as a deep-learning-based face feature extractor) to obtain a feature vector for each face; match each face feature vector in the face detection image against each face feature vector in the face reference image, computing their similarity or distance (common methods include cosine similarity and Euclidean distance); based on the matching results, perform interactive association analysis between the face feature vectors of the two images, for which statistical methods or machine learning models may be used to analyze their relationships, such as computing means, variances, or correlation coefficients; extract the interaction features between the face images from the results of this analysis, which may include similarity, correlation, overlap, and so on; finally, based on the obtained interaction features, various applications such as face recognition, face comparison, and face clustering can be carried out.
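As an illustration of the matching step in this alternative pipeline, a minimal sketch of cosine-similarity matching against the reference set follows; the source of the embeddings and the 0.6 acceptance threshold are assumptions.

```python
import numpy as np

def match_face(detect_vec: np.ndarray, reference_vecs: dict, threshold: float = 0.6):
    """Return (best matching person id, similarity), or (None, sim) below threshold."""
    best_id, best_sim = None, -1.0
    for person_id, ref_vec in reference_vecs.items():
        # Cosine similarity between the detected face vector and a reference vector.
        sim = float(np.dot(detect_vec, ref_vec) /
                    (np.linalg.norm(detect_vec) * np.linalg.norm(ref_vec)))
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)
```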
Specifically, the early warning and path planning module 340 is configured to generate an early-warning prompt and/or a path planning chart based on the interaction features between the face images. In one specific example of the present application, as shown in fig. 4, the early warning and path planning module 340 includes: a face feature distribution optimizing unit 341, configured to perform Hilbert orthogonal spatial-domain representation decoupling on the inter-face-image interaction feature vector to obtain an optimized inter-face-image interaction feature vector; an image matching unit 342, configured to pass the optimized inter-face-image interaction feature vector through a classifier to obtain a classification result, where the classification result is used to indicate whether the face detection image of the object to be identified matches a face reference image with predetermined authority; and a result generating unit 343, configured to generate an early-warning prompt and/or a path planning chart based on the classification result.
Specifically, the face feature distribution optimizing unit 341 is configured to perform Hilbert orthogonal spatial-domain representation decoupling on the inter-face-image interaction feature vector to obtain the optimized inter-face-image interaction feature vector. In the technical scheme of the application, the face detection full-perception feature vector and the face reference full-perception feature vector express the image semantic features of the face detection image and the face reference image respectively, so that attention-based feature interaction can extract the dependency features between the image semantics of the different source images. However, the dependency between the face detection image and the face reference image is unevenly distributed across local feature regions: the source-image semantics of the two images deviate strongly in some local regions, so that the semantic similarity of some parts is far greater than that of others. The dependency features expressed by the inter-face-image interaction feature vector therefore have diversified local feature expressions across the local feature distributions, which affects the generalization of the vector as a whole in the classification domain when it is passed through the classifier, that is, the accuracy of the classification result. Based on this, when classifying the inter-face-image interaction feature vector, for example denoted as V, the applicant preferably performs Hilbert orthogonal spatial-domain representation decoupling on it:

$$V' = \frac{1}{L}\,\mathrm{Cov}_{1D}\!\left(V \ominus \bar{V} I\right)\frac{V}{\lVert V \rVert_2}$$

wherein $V$ is the inter-face-image interaction feature vector, $\bar{V}$ is the global feature mean of the inter-face-image interaction feature vector, $\lVert V \rVert_2$ is the two-norm of the inter-face-image interaction feature vector, $L$ is the length of the inter-face-image interaction feature vector, $I$ is the unit vector, $\ominus$ denotes vector subtraction, $\mathrm{Cov}_{1D}(\cdot)$ denotes the covariance matrix, and $V'$ is the optimized inter-face-image interaction feature vector. Here, the Hilbert orthogonal spatial-domain representation decoupling extracts an orthogonal, domain-invariant representation from within the overall domain representation of the vector $V$ by emphasizing the intrinsic domain-specific information within its diversified feature representation, based on the vector's own spatial metric and inner-product representation, so as to improve the domain-adaptive generalization of $V$ in the classification domain and thereby the accuracy of its classification result. In this way, verification and early warning of workers' identities can be realized, abnormal situations can be discovered and corresponding measures taken, unauthorized persons can be prevented from entering dangerous areas, and illegal intrusion and theft can be reduced. In addition, the security management system can provide different path plans according to personnel with different predetermined authorities, ensuring that personnel of different job types reach their destinations along the specified paths, keeping them from straying through dangerous areas, reducing accidents, improving working efficiency, and reducing wasted time.
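A small NumPy sketch of this optimization step is given below. Since the source renders the optimization formula as an image, the sketch follows the formula as reconstructed above from the stated symbol definitions, and the rank-one outer-product reading of Cov_1D is an assumption rather than the authoritative form.

```python
import numpy as np

def hilbert_orthogonal_decoupling(v: np.ndarray) -> np.ndarray:
    """Decouple an interaction feature vector V into its optimized form V'."""
    L = v.shape[0]
    centered = v - v.mean() * np.ones(L)        # V minus its global mean times I
    cov = np.outer(centered, centered)          # Cov_1D(.): L x L covariance-style matrix
    return (cov @ (v / np.linalg.norm(v))) / L  # project the normalized vector, scale by 1/L
```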
Specifically, the image matching unit 342 is configured to pass the optimized inter-face-image interaction feature vector through a classifier to obtain a classification result indicating whether the face detection image of the object to be identified matches a face reference image with predetermined authority. The classification exploits the interactive association features between the full-perception features of the face detection image and those of the face reference image, so that the degree of association and consistency between the face features of the two images is evaluated, realizing identity verification of the object to be identified. More specifically, the optimized inter-face-image interaction feature vector is encoded by several fully connected layers of the classifier to obtain an encoded classification feature vector, and the encoded classification feature vector is passed through the Softmax classification function of the classifier to obtain the classification result.
A classifier refers to a machine learning model or algorithm that is used to classify input data into different categories or labels. The classifier is part of supervised learning, which performs classification tasks by learning mappings from input data to output categories.
Fully connected layers are one type of layer commonly found in neural networks. In the fully connected layer, each neuron is connected to all neurons of the upper layer, and each connection has a weight. This means that each neuron in the fully connected layer receives inputs from all neurons in the upper layer, and weights these inputs together, and then passes the result to the next layer.
The Softmax classification function is a commonly used activation function for multi-classification problems. It converts each element of the input vector into a probability value between 0 and 1, and the sum of these probability values equals 1. The Softmax function is commonly used at the output layer of a neural network, and is particularly suited for multi-classification problems, because it can map the network output into probability distributions for individual classes. During the training process, the output of the Softmax function may be used to calculate the loss function and update the network parameters through a back propagation algorithm. Notably, the output of the Softmax function does not change the relative magnitude relationship between elements, but rather normalizes them. Thus, the Softmax function does not change the characteristics of the input vector, but simply converts it into a probability distribution form.
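As a sketch of the matching classifier described above — several fully connected layers followed by Softmax — the following assumes illustrative layer widths and a binary match/no-match output.

```python
import torch
import torch.nn as nn

class MatchClassifier(nn.Module):
    """Fully connected encoding of the optimized interaction vector, then Softmax."""
    def __init__(self, in_dim: int, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, interaction_vec):
        logits = self.fc(interaction_vec)    # encoded classification feature vector
        return torch.softmax(logits, dim=-1) # probabilities over {match, no match}
```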
Specifically, the result generating unit 343 is configured to generate an early-warning prompt and/or a path planning chart based on the classification result; that is, early-warning prompts and/or a path planning map are generated from the verification results. It should be noted that, in the technical solution of the present application, when performing face recognition and authority detection on the object to be identified, feature analysis and comparison must be performed between the face detection image and each face reference image with predetermined authority in the database in order to complete authority matching and path planning. In this way, unauthorized personnel can be prevented from entering dangerous areas, reducing illegal intrusion and theft, while an early-warning prompt is generated for unauthorized objects so that abnormal situations can be discovered in time and corresponding measures taken. Different path plans are provided for personnel with different predetermined authorities to ensure that they reach their destinations along the specified paths, keeping workers from straying through dangerous areas and reducing accidents.
It should be noted that, in other specific examples of the present application, the early-warning prompt and/or the path planning chart may also be generated from the interaction features between the face images in other ways, for example: acquire the interaction features between the face images, which capture the relevance and interaction information among them; formulate early-warning rules according to the specific application scenario and requirements, for example detecting particular patterns of inter-face-image interaction features that indicate potential danger or abnormal situations; analyze the extracted interaction features and judge, according to the early-warning rules, whether a potential danger or abnormal situation exists; if so, generate a corresponding early-warning prompt, which may be a text message, an audible alarm, or another form of warning signal; determine the path planning target according to the application scenario and requirements, for example locating a specific face in a crowd or executing a specific task; determine the optimal path or action strategy with a path planning algorithm, based on the interaction features, so as to reach the predetermined target; present the path planning result graphically, e.g., as path markers on a map, navigation instructions, or another form of path planning chart; finally, output the generated early-warning prompt and/or path planning chart for real-time early warning, path navigation, and other applications.
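As an illustration of how the result generating unit might combine both outputs, the following sketch raises an alert on a failed match and otherwise plans a shortest path restricted to the areas the person's authority permits; modeling the site as a networkx graph with per-node authority-zone attributes is an assumption, not the patent's method.

```python
import networkx as nx

def generate_result(matched: bool, authority: str, site: nx.Graph,
                    start: str, destination: str):
    """Alert on an unmatched face; otherwise plan a path within permitted areas."""
    if not matched:
        return {"alert": "Unauthorized person detected", "path": None}
    # Keep only site nodes whose 'allowed' attribute includes this authority level.
    allowed = site.subgraph(n for n, d in site.nodes(data=True)
                            if authority in d.get("allowed", set()))
    # Shortest permitted route; raises NetworkXNoPath if none exists.
    path = nx.shortest_path(allowed, start, destination, weight="length")
    return {"alert": None, "path": path}
```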
As described above, the internet of things security management system 300 according to the embodiment of the present application may be implemented in various wireless terminals, for example, a server having an internet of things security management algorithm, etc. In one possible implementation, the internet of things security management system 300 according to an embodiment of the present application may be integrated into a wireless terminal as one software module and/or hardware module. For example, the internet of things security management system 300 may be a software module in the operating system of the wireless terminal or may be an application developed for the wireless terminal; of course, the security management system 300 of the internet of things may also be one of a plurality of hardware modules of the wireless terminal.
Alternatively, in another example, the internet of things security management system 300 and the wireless terminal may be separate devices, and the internet of things security management system 300 may be connected to the wireless terminal through a wired and/or wireless network and transmit interaction information in an agreed data format.
Further, a method for safety management of the Internet of things is also provided.
Fig. 5 is a flowchart of an internet of things security management method according to an embodiment of the present application. As shown in fig. 5, the method for managing security of internet of things according to the embodiment of the application includes the steps of: s1, acquiring a face detection image of an object to be identified acquired by a camera; s2, acquiring a face reference image with a preset authority from a database; s3, carrying out image feature interaction association analysis on the face detection image and the face reference image to obtain interaction features among the face images; s4, generating an early warning prompt and/or a path planning chart based on the interaction characteristics between the face images.
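To show how steps S1-S4 could be wired together, a minimal end-to-end sketch follows; the camera interface, database layout, and the injected callables standing in for the modules described above are all assumptions.

```python
import cv2

def run_security_check(camera, database, extract_features, interact,
                       classify_match, plan_or_alert):
    """One pass of the S1-S4 pipeline for a single captured frame."""
    ok, frame = camera.read()                      # S1: face detection image
    if not ok:
        raise RuntimeError("camera read failed")
    frame = cv2.bilateralFilter(frame, 9, 75, 75)  # enhancement step
    results = []
    for ref_img, authority in database:            # S2: predetermined-authority references
        feats = interact(extract_features(frame),
                         extract_features(ref_img))  # S3: interaction features
        results.append((classify_match(feats), authority))
    return plan_or_alert(results)                  # S4: early warning and/or path plan
```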
In summary, the Internet of Things security management method according to the embodiments of the application has been explained: a camera collects a face detection image of the object to be identified, and an image processing and analysis algorithm is introduced at the back end to compare and analyze the features of the face detection image against face reference images with predetermined authority, thereby verifying a worker's identity. In this way, unauthorized persons can be prevented from entering dangerous areas, reducing illegal intrusion and theft. The system can also generate early-warning prompts based on the face recognition result, discovering abnormal situations in time so that corresponding measures can be taken. In addition, the system can provide personnel with different predetermined authorities with different path plans, ensuring that they reach their destinations along the specified paths, keeping workers from straying through dangerous areas and reducing accidents, while the path-planning function also improves working efficiency and reduces wasted time.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (7)

1. An Internet of Things security management system, characterized by comprising:
the face detection image acquisition module is used for acquiring a face detection image of an object to be identified acquired by the camera;
the preset authority reference image acquisition module is used for acquiring a face reference image with preset authority from the database;
the face image feature interaction analysis module is used for performing image feature interaction association analysis on the face detection image and the face reference image to obtain interaction features between the face images;
and the early warning and path planning module is used for generating an early warning prompt and/or a path planning chart based on the interaction characteristics between the face images.
2. The Internet of Things security management system of claim 1, wherein the face image feature interaction analysis module comprises:
the face detection image enhancement unit, which is used for performing bilateral filtering on the face detection image to obtain an enhanced face detection image;
the face image feature extraction unit, which is used for passing the enhanced face detection image and the face reference image through a dual detection network model to obtain a face detection feature map and a face reference feature map;
and the global feature interaction analysis unit, which is used for performing full-perception interactive association coding on the face detection feature map and the face reference feature map to obtain the interaction features between the face images.
3. The internet of things security management system of claim 2, wherein the dual detection network model is a dual detection network model comprising a first image encoder and a second image encoder, the first image encoder and the second image encoder having the same network structure.
4. The Internet of Things security management system according to claim 3, wherein the global feature interaction analysis unit comprises:
the face feature full-perception subunit, which is used for passing the face detection feature map and the face reference feature map respectively through a full-perception module to obtain a face detection full-perception feature vector and a face reference full-perception feature vector;
and the face full-perception feature interaction subunit, which is used for performing attention-based feature interaction on the face detection full-perception feature vector and the face reference full-perception feature vector by using the inter-feature attention layer to obtain an inter-face-image interaction feature vector as the interaction feature between the face images.
5. The internet of things security management system of claim 4, wherein the early warning and path planning module comprises:
the face feature distribution optimizing unit, which is used for performing Hilbert orthogonal spatial-domain representation decoupling on the inter-face-image interaction feature vector to obtain an optimized inter-face-image interaction feature vector;
the image matching unit, which is used for passing the optimized inter-face-image interaction feature vector through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the face detection image of the object to be identified matches a face reference image with predetermined authority;
and the result generation unit is used for generating an early warning prompt and/or a path planning chart based on the classification result.
6. The Internet of Things security management system according to claim 5, wherein the face feature distribution optimizing unit is configured to: perform Hilbert orthogonal spatial-domain representation decoupling on the inter-face-image interaction feature vector using the following optimization formula to obtain the optimized inter-face-image interaction feature vector;
wherein, the optimization formula is:
$$V' = \frac{1}{L}\,\mathrm{Cov}_{1D}\!\left(V \ominus \bar{V} I\right)\frac{V}{\lVert V \rVert_2}$$

wherein $V$ is the inter-face-image interaction feature vector, $\bar{V}$ is the global feature mean of the inter-face-image interaction feature vector, $\lVert V \rVert_2$ is the two-norm of the inter-face-image interaction feature vector, $L$ is the length of the inter-face-image interaction feature vector, $I$ is the unit vector, $\ominus$ denotes vector subtraction, $\mathrm{Cov}_{1D}(\cdot)$ denotes the covariance matrix, and $V'$ is the optimized inter-face-image interaction feature vector.
7. An Internet of Things security management method, characterized by comprising:
acquiring a face detection image of an object to be identified acquired by a camera;
acquiring a face reference image with a preset authority from a database;
performing image feature interaction association analysis on the face detection image and the face reference image to obtain interaction features between the face images;
and generating an early warning prompt and/or a path planning chart based on the interaction characteristics between the face images.
CN202311179407.0A 2023-09-12 2023-09-12 Internet of things safety management system and method Pending CN117218783A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311179407.0A CN117218783A (en) 2023-09-12 2023-09-12 Internet of things safety management system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311179407.0A CN117218783A (en) 2023-09-12 2023-09-12 Internet of things safety management system and method

Publications (1)

Publication Number Publication Date
CN117218783A true CN117218783A (en) 2023-12-12

Family

ID=89042007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311179407.0A Pending CN117218783A (en) 2023-09-12 2023-09-12 Internet of things safety management system and method

Country Status (1)

Country Link
CN (1) CN117218783A (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015090126A1 (en) * 2013-12-16 2015-06-25 北京天诚盛业科技有限公司 Facial characteristic extraction and authentication method and device
US20210390355A1 (en) * 2020-06-13 2021-12-16 Zhejiang University Image classification method based on reliable weighted optimal transport (rwot)
CN112733965A (en) * 2021-02-03 2021-04-30 西安理工大学 Label-free image classification method based on small sample learning
CN114821725A (en) * 2022-04-28 2022-07-29 中国矿业大学(北京) Miner face recognition system based on neural network
CN114820241A (en) * 2022-06-20 2022-07-29 北京北投智慧城市科技有限公司 Scene information management method and system for smart community
CN115291210A (en) * 2022-07-26 2022-11-04 哈尔滨工业大学 Three-dimensional image pipeline identification method of 3D-CNN ground penetrating radar combined with attention mechanism
CN115983848A (en) * 2023-02-15 2023-04-18 杭银消费金融股份有限公司 Security monitoring method and system for encrypted electronic wallet
CN116363578A (en) * 2023-03-01 2023-06-30 大连海事大学 Ship closed cabin personnel monitoring method and system based on vision
CN116403253A (en) * 2023-03-01 2023-07-07 华能(广东)能源开发有限公司汕头电厂 Face recognition monitoring management system and method based on convolutional neural network
CN116343513A (en) * 2023-03-07 2023-06-27 江苏纬信工程咨询有限公司 Rural highway beyond-sight-distance risk point safety monitoring and early warning method and system thereof
CN116343301A (en) * 2023-03-27 2023-06-27 滨州市沾化区退役军人服务中心 Personnel information intelligent verification system based on face recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邵文泽: "一种适应不同距离的低清人脸深度识别算法", 南京邮电大学学报, vol. 1, no. 43, 22 February 2023 (2023-02-22), pages 1 - 10 *

Similar Documents

Publication Publication Date Title
CN111133433B (en) Automatic authentication for access control using face recognition
CN108229297B (en) Face recognition method and device, electronic equipment and computer storage medium
CN112381075B (en) Method and system for carrying out face recognition under specific scene of machine room
US20200410709A1 (en) Location determination apparatus, location determination method and computer program
CN109241870B (en) Coal mine underground personnel identity identification method based on gait identification
CN114218998A (en) Power system abnormal behavior analysis method based on hidden Markov model
CN115620178A (en) Real-time detection method for abnormal and dangerous behaviors of power grid of unmanned aerial vehicle
CN116543283A (en) Multimode target detection method considering modal uncertainty
CN116665305A (en) Method and system for detecting worker behaviors based on computer vision and knowledge graph
CN117218783A (en) Internet of things safety management system and method
CN109558771B (en) Behavior state identification method, device and equipment of marine ship and storage medium
Ibitoye et al. Masked Faces Classification using Deep Convolutional Neural Network with VGG-16 Architecture
CN115690514A (en) Image recognition method and related equipment
CN111597896B (en) Abnormal face recognition method, recognition device, recognition apparatus, and storage medium
Praganingrum et al. Image Processing Applications in Construction Projects: Challenges and Opportunities
Saifullah et al. Real-time mask-wearing detection in video streams using deep convolutional neural networks for face recognition.
Fadlil et al. The Application of The Manhattan Method to Human Face Recognition
CN117292338B (en) Vehicle accident identification and analysis method based on video stream analysis
CN117058627B (en) Public place crowd safety distance monitoring method, medium and system
Amirgaliyev et al. AUTOMATING THE CUSTOMER VERIFICATION PROCESS IN A CAR SHARING SYSTEM BASED ON MACHINE LEARNING METHODS.
US20240127587A1 (en) Apparatus and method for integrated anomaly detection
Zhang et al. Visual fusion of network security data in image recognition
Nguyen et al. Robust lip feature detection in facial images
Asha Real-Time Face Mask Detection in Video Streams Using Deep Learning Technique
CN117994700A (en) Intelligent construction site personnel behavior recognition system and method based on AI intelligent recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination