CN115130082B - Intelligent sensing and safety control method for ruggedized computer - Google Patents


Info

Publication number
CN115130082B
CN115130082B (application CN202211029826.1A)
Authority
CN
China
Prior art keywords
face
image
control method
intelligent sensing
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211029826.1A
Other languages
Chinese (zh)
Other versions
CN115130082A (en)
Inventor
杨凯文
李萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Clp Great Wall Shengfei Information System Co ltd
Original Assignee
Clp Great Wall Shengfei Information System Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Clp Great Wall Shengfei Information System Co ltd filed Critical Clp Great Wall Shengfei Information System Co ltd
Priority to CN202211029826.1A priority Critical patent/CN115130082B/en
Publication of CN115130082A publication Critical patent/CN115130082A/en
Application granted granted Critical
Publication of CN115130082B publication Critical patent/CN115130082B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

Aiming at the professional requirement of a special-purpose computer to avoid potential safety hazards such as peeking and the user leaving, the method uses an ultra-wide-angle camera installed on the computer to photograph the nearby environment, identifies the user's identity, scans the user's surroundings, senses and reacts to abnormal environmental states, and applies safety protection control to the computer.

Description

Intelligent sensing and safety control method for ruggedized computer
Technical Field
The invention belongs to the cross application field of machine vision and machine learning technology, and particularly relates to an intelligent sensing and safety control system for a special computer.
Background
A special-purpose computer, here a ruggedized computer, generally refers to a computer system built for a particular use or application, such as applications with security requirements. Modern computer equipment is commonly fitted with image-based identity authentication such as face recognition, which confirms the identity of the person logging in by recognizing facial biometric features and thereby protects the computer in use. For some special-purpose computers, however, existing login verification based on image face recognition is not enough to meet the security requirements; for example, it cannot respond well to risks such as peeking or the user leaving the machine.
In particular, some special-purpose computers are used not in a stable environment such as a machine room but in demanding settings such as the field or aboard ships. In these settings the background is complex, illumination changes violently, and the whole equipment shakes. Facing such complex environments, conventional image-detection methods can neither achieve fast, accurate login nor comprehensively and accurately monitor the session after login.
The image-detection algorithm therefore needs to be optimized specifically for complex environments, so as to improve the safety of the special-purpose computer in use. At the same time, how to keep the computer secure throughout the session is an urgent problem to be solved.
Disclosure of Invention
To solve the above problems, the following invention is proposed.
Aiming at the professional requirement of a special-purpose computer to avoid potential safety hazards such as peeking and the user leaving, the method uses an ultra-wide-angle camera installed on the computer to photograph the nearby environment, identifies the user's identity, scans the user's surroundings, senses and reacts to abnormal environmental states, and applies safety protection control to the computer.
Intelligent sensing and safety control method for ruggedized computer
Step 1: acquire visible-light and infrared composite band images at the same moment and record them as V(x, y, z), where (x, y) are position coordinates and z marks the imaging band, and establish the mixed observation model:

O(x, y) = w1·V(x, y, 1) + w2·V(x, y, 2) (1)

w1 + w2 = 1 (2)

Calculate the probability that any coordinate of the mixed observation O(x, y) is a face, obtain from a threshold the binary image B(x, y) marking the face, and thereby mark the range of the face in the mixed observation. Filter B(x, y) with a pixel-by-pixel median filter to obtain the filtered image F(x, y). Take the set of pixels where F(x, y) = 1 and denote it S. Define the candidate areas as sets satisfying:

R ⊆ F, R a rectangular area with width w and height h (3)

where the symbol |·| denotes the number of pixels contained in a set of pixels. Solve:

R* = argmax over all R satisfying (3) of |R ∩ S| (4)

If the rectangular area R* solved according to formula (4) satisfies formula (5),

|R* ∩ S| / (w·h) ≥ T1 and |R* ∩ S| / |S| ≥ T2 (5)

the region R* is taken as the face area; otherwise the face cannot be found in the current image. T1 and T2 are control thresholds. After the face area is determined, it is divided into sub-images, convolution operations are performed on the sub-images to extract features, and whether the person is a legitimate login user is judged from the features.
Step 2: after login is finished, scan the surrounding area with windows of different sizes and send each window into a neural network model to judge whether the obtained image contains the face of a non-logged-in person; if so, send out warning information. After warning information has been sent out in several consecutive frames, the special-purpose computer is controlled to enter the logged-out state.
The neural network model is as follows: the hidden layer consists of three convolutional layers and a fully connected layer. The outputs of the three convolutional layers are:

C1(i, j) = f( Σp,q K1(p, q) · I(i + p, j + q) + b1 )

C2(i, j) = f( Σp,q K2(p, q) · C1(i + p, j + q) + b2 )

C3(i, j) = f( Σp,q K3(p, q) · C2(i + p, j + q) + b3 )

where K1, K2 and K3, the convolution kernels of each layer, increase in size layer by layer; p and q represent relative position coordinates within a convolution kernel; b1, b2 and b3 are the linear biases of each layer; and the excitation function f is a nonlinear piecewise function built from the natural logarithm and the natural exponential.
Image collection adopts an ultra-wide-angle camera. In step 2, the ultra-wide-angle camera regularly acquires an environment image to monitor the login environment around the special-purpose computer.
The width and height (w, h) of the face rectangle obtained by the method in step 1 serve as the upper threshold (w_hi, h_hi) of the scanning window, and a lower threshold (w_lo, h_lo) is additionally set.
After the computer enters the logged-out state as in step 2, the method of step 1 must be adopted again for verification before the logged-in state is re-entered.
K1 is a rectangular convolution kernel of size 3 × 3.
K2 is a rectangular convolution kernel of size 9 × 9.
K3 is a rectangular convolution kernel of size 17 × 17.
The output layer is connected behind the full connection layer of the neural network model.
The invention has the advantages that:
1. The infrared image and the visible-light image are fused into one image by weighting, and the algorithm is optimized so that the face range can be delimited quickly and accurately; together with the feature-extraction algorithm this ensures accurate extraction of the facial features. The method is therefore more efficient than using the captured picture directly or after only simple processing. Fast, accurate extraction of facial features is then achieved by optimizing the extraction template and algorithm. Compared with traditional face recognition using a plain neural network or image-segmentation recognition, recognition is faster and more interference-resistant, so login security and speed are guaranteed even in a harsh environment.
2. Environment scanning demands less real-time responsiveness than the login step, so to improve scanning efficiency and accuracy only the visible-light image is used; the neural network model is optimized, a variable convolution kernel is selected, a special excitation function is set, and the model is trained with an improved cost function, ensuring fast and accurate recognition of the faces of non-logged-in users in a harsh environment.
3. By continuously scanning the environment after face-scan login, and by alarming and returning to the logged-out state when danger is found, the invention makes face login authentication and the scanning for other, unauthorized users cooperate, achieving security precaution over the whole process from login to use. Aiming at the professional requirement of a special-purpose computer to avoid potential safety hazards such as peeking and the user leaving, the method photographs the environment near the computer, identifies the user's identity, scans the user's surroundings, and senses and reacts to abnormal environmental states.
Detailed Description
Step 1: user identity recognition and login control based on ultra-wide-angle images. An ultra-wide-angle camera shoots a visible-light and infrared composite band image of the user's face; a computer-vision recognition method compares whether the face appearing in the image is consistent with the user's pre-registered face image, and the user is allowed to log in to the system once consistency is confirmed.
The ultra-wide-angle camera refers to a camera with a lens visual range larger than 170 degrees; the collection of the multimode image means that a photosensitive element of the camera can collect images in a composite waveband of an infrared waveband and a visible light waveband.
Due to the manufacturing process of optical lenses, the lens of an ultra-wide-angle camera produces larger image distortion than a conventional lens, which makes the computer-vision recognition task harder. To improve the accuracy of facial-feature recognition by computer vision, the invention proposes identity recognition that jointly uses visible-light and infrared composite band images.
A camera capable of acquiring visible-light and infrared composite band images is mounted at a suitable position on the upper part of the special-purpose computer, so that it can easily capture the user's face. A composite band image possibly containing a face is collected at one moment and recorded as V(x, y, z), where (x, y) are the position coordinates of a pixel and z marks the imaging band: when z = 1, V(x, y, 1) is a pixel of the visible-light band; when z = 2, V(x, y, 2) is a pixel of the infrared band.
From V(x, y, z) a mixed observation model is established as follows.
O(x, y) = w1·V(x, y, 1) + w2·V(x, y, 2) (1)

w1 + w2 = 1 (2)

The mixed observation O(x, y) is expressed as the weighted sum of the pixels at the corresponding positions of the visible-light and infrared bands.
Let P(O(x, y); μ, σ²) represent the probability that the pixel value of the mixed observation O(x, y) is a face, where μ and σ² are the distribution parameters of the face-pixel mixed observation. Modeling with a Gaussian model, then:

P(O(x, y); μ, σ²) = exp( −(O(x, y) − μ)² / (2σ²) ) / ( √(2π) · σ )

where π is the circumference ratio, exp denotes the natural exponential function, μ is the mean parameter of the Gaussian model, and σ² is its variance parameter.
Sample data of composite band face images are prepared, and according to equations (1) and (2) the maximum-likelihood method can learn the optimal solution of the parameters μ, σ², w1 and w2. Substituting the optimal solution into P(O(x, y); μ, σ²), the probability that any coordinate of the mixed observation O(x, y) is a face can be calculated, and the binary image of the face mark is further obtained according to a threshold T:

B(x, y) = 1 if P(O(x, y); μ, σ²) ≥ T, and B(x, y) = 0 otherwise

which marks the range of the face in the mixed observation. B(x, y) is then filtered with a pixel-by-pixel median filter to obtain the filtered image F(x, y). Based on experimental data, a preferred filter window is 9 × 9 at an original image resolution of 640 × 480.
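The fusion-and-threshold stage above can be sketched in Python. The band weight, Gaussian parameters and thresholds below are illustrative stand-ins: the patent learns them from sample data and does not list numeric values.

```python
import math

def face_mask(visible, infrared, w1=0.7, mu=140.0, sigma=25.0, thresh=0.5):
    """Fuse the two bands per pixel (eq. 1, with w2 = 1 - w1 per eq. 2),
    score the result with the Gaussian face model, and threshold the
    peak-normalised score into the binary face mark B(x, y)."""
    h, w = len(visible), len(visible[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            o = w1 * visible[y][x] + (1 - w1) * infrared[y][x]   # mixed observation
            score = math.exp(-(o - mu) ** 2 / (2 * sigma ** 2))  # Gaussian likelihood, peak-normalised
            mask[y][x] = 1 if score >= thresh else 0
    return mask

def median_filter(mask, k=3):
    """Pixel-by-pixel median filter on the binary mask; border pixels where
    the k x k window would leave the image are kept unchanged."""
    h, w = len(mask), len(mask[0])
    r = k // 2
    out = [row[:] for row in mask]
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = sorted(mask[yy][xx] for yy in range(y - r, y + r + 1)
                                      for xx in range(x - r, x + r + 1))
            out[y][x] = win[len(win) // 2]  # median removes isolated speckles
    return out
```

The median step is what lets isolated false-positive pixels vanish before the rectangle search runs.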
The above method marks, in the mixed observation O(x, y), the part where the face is located as a set of pixels of the filtered binary image F(x, y). Take the set of pixels where F(x, y) = 1 and denote it S; the symbol |·| denotes the number of pixels contained in a set of pixels. Define the candidate areas as sets satisfying the following condition:

R ⊆ F, R a rectangular area with width w and height h (3)

The intersection R ∩ S denotes the set of pixels belonging to both sets. Calculate:

R* = argmax over all R satisfying (3) of |R ∩ S| (4)

If the rectangular area R* solved according to formula (4) satisfies formula (5),

|R* ∩ S| / (w·h) ≥ T1 and |R* ∩ S| / |S| ≥ T2 (5)

the region R* is considered the face area; otherwise the face cannot be found in the current image. In formulas (4) and (5), T1 and T2 are control thresholds whose preferred values were determined by experimental tests.
The rectangular region R* obtained by solving the conditions satisfying both (4) and (5) is mapped to the original image, and the mapped area is the position of the face in the original image. Compared with the traditional face detection and extraction method based on a visible-light Gaussian model, this method of extracting the face region from the visible-light and infrared composite band image has higher precision; compared with the recently popular face detection and extraction methods based on high-dimensional parametric models such as neural networks, it has higher efficiency. It meets engineering application requirements, avoids long user waits during login, and improves the user experience while ensuring safety.
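A brute-force sketch of the rectangle search of formulas (4) and (5) follows. The threshold values t1 and t2 are illustrative, since the patent's preferred values appear only in the original drawings.

```python
def best_face_rect(mask, w, h, t1=0.6, t2=0.6):
    """Scan every w x h rectangle R over the binary mask, pick the one
    maximising |R ∩ S| (formula 4), then accept it only if the fill ratio
    |R ∩ S| / (w*h) and the coverage |R ∩ S| / |S| both clear the control
    thresholds (formula 5). Returns (x, y, w, h) or None."""
    H, W = len(mask), len(mask[0])
    total = sum(sum(row) for row in mask)            # |S|
    best, best_cnt = None, -1
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            cnt = sum(mask[yy][xx]                   # |R ∩ S| for this rectangle
                      for yy in range(y, y + h) for xx in range(x, x + w))
            if cnt > best_cnt:
                best, best_cnt = (x, y, w, h), cnt
    if total and best_cnt / (w * h) >= t1 and best_cnt / total >= t2:
        return best
    return None  # no face found in the current image
```

In practice the inner count would use an integral image to avoid re-summing overlapping rectangles, but the exhaustive form above matches the formulas most directly.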
Further, denote by V_F the region of the visible-light and infrared composite band image V(x, y, z) where the face is located, with (x, y) and z the position coordinates and band mark corresponding to each pixel. Divide V_F in the x and y directions into 8 × 8 equal-size subsets, recorded respectively as V_F(1, 1), V_F(1, 2), …, V_F(1, 8), V_F(2, 1), V_F(2, 2), …, V_F(2, 8), …, V_F(8, 1), V_F(8, 2), …, V_F(8, 8). If the original image size is not evenly divisible by 8, it can be complemented with zero pixels.
Further, four templates M1, M2, M3 and M4 are preferred; their coefficient patterns are given in the original drawings. For each subset, calculate:

u(p, q, k) = ( Σ over pixels (x, y) of V_F(p, q) of Mk(x, y) · V_F(p, q)(x, y) ) / N (6)

where N represents, according to the definition above, the number of pixels of the face area and acts as the normalization coefficient; p and q take values in the range 1-8; and k = 1, …, 4 indexes the templates. A total of 8 × 8 × 4 = 256 results can be obtained according to equation (6), forming a 256-dimensional vector u.
Binary classification is performed on the vector u to identify whether the face part of the image belongs to the specific user. Several visible-light and infrared composite band face images of the specific user are shot, the corresponding vectors u are calculated as above, and a training sample set is formed. A binary classification algorithm such as a support vector machine is then used to obtain a recognition model of the specific user's face image, which can recognize whether an input is a positive sample (the specific user's face) or a negative sample (not the specific user's face). Through the template operation, features can be extracted quickly and accurately, and the detection effect is better.
After the special-purpose computer is started, the system is in the logged-out state. A new group of visible-light and infrared composite band images is input through the camera to the recognition model, which judges whether the input is a positive or negative sample; if the input is judged positive, login to the system is allowed. After login, the special-purpose computer system is in the logged-in state.
Step 2: login environment monitoring based on ultra-wide-angle images. The environment around the user is scanned and monitored in the ultra-wide-angle image, and if anyone other than the user is found, a monitoring alarm is started.
After the specific user of the special-purpose computer has logged in using the method described in step 1, the system enters the logged-in state. In this state the surroundings of the special-purpose computer are monitored at regular intervals by capturing images.
The timed monitoring method and process are as follows.
A visible-light and infrared composite band image is collected at intervals, and a non-specific face detection algorithm is run on the visible-light band part to detect whether a non-specific face is present. A non-specific face is a face image considered irrespective of identity. Non-specific face detection uses only the visible band, not the infrared band, which improves computational efficiency.
The width and height (w, h) of the face rectangle obtained by the method of step 1 are used as the upper threshold (w_hi, h_hi) of the detection window, and a lower threshold (w_lo, h_lo) is additionally set. Sizes from (w_lo, h_lo) up to (w_hi, h_hi) are taken as the candidate sizes for non-specific face detection; let s_size be the step by which the detection-area size is increased in the width and height directions, and s_move the step by which the detection-area starting point is moved in the width and height directions. The visible-light band image acquired in step 2 is then detected as in S21-S24.
S21: set the starting-point coordinates of the initial detection area to (0, 0) and the width and height of the initial detection area to (w_lo, h_lo); that is, the rectangle starting at (0, 0) with size (w_lo, h_lo) is the initial detection area. Non-specific face detection is performed on the sub-image of the visible-light band image V(x, y, 1) within the detection area.
S21.1: if the non-specific face detection method returns that the sub-image is a face, the method of step 1 is applied, in combination with the visible-light and infrared composite band image of the corresponding area, to detect whether the sub-image belongs to a positive sample of the specific user. If it belongs to a positive sample of the specific user, continue with step S22; otherwise go to step S24.
S21.2: otherwise, if the non-specific face detection method returns that the sub-image is not a face, continue with step S22.
S22: move the starting-point coordinates of the detection area by the step s_move to (k·s_move, l·s_move), where the integers k and l count the moves in the two directions; after each move of the starting point, detect as described in S21 until the whole image has been scanned at the current detection-area size. Go to step S23.
S23: increase the size of the detection area by the step s_size to (w_lo + m·s_size, h_lo + m·s_size), m an integer, and repeat steps S21 and S22 until the size reaches the upper threshold (w_hi, h_hi).
S24: issue warning information indicating that a face whose identity is not the specific user has been detected in the current image. As an optimized automatic safety control method, after warning information has been issued in several consecutive frames, the special-purpose computer is controlled to enter the logged-out state, protecting the safety of data and information. After the computer enters the logged-out state, the method of step 1 must be adopted again for verification before the logged-in state is re-entered.
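The window enumeration of S21-S23 can be sketched as follows; all sizes and steps below are illustrative values for the parameters the patent leaves to configuration.

```python
def scan_windows(img_w, img_h, low, high, size_step, move_step):
    """Enumerate the detection windows of S21-S23: window sizes grow from the
    lower threshold (w_lo, h_lo) to the upper threshold (w_hi, h_hi) in steps
    of size_step (S23), and each size slides over the image in steps of
    move_step (S22). Returns a list of (x, y, w, h) rectangles."""
    (w_lo, h_lo), (w_hi, h_hi) = low, high
    windows = []
    w, h = w_lo, h_lo
    while w <= w_hi and h <= h_hi:       # S23: enlarge the window
        y = 0
        while y + h <= img_h:            # S22: slide the start point
            x = 0
            while x + w <= img_w:
                windows.append((x, y, w, h))
                x += move_step
            y += move_step
        w, h = w + size_step, h + size_step
    return windows
```

Each returned rectangle would be cropped from the visible-band image and passed to the non-specific face detector of S21.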
In S21, a fast convolutional neural network model is adopted for face detection on the sub-image. The neural network model consists of an input layer, an output layer and hidden layers.
The input of the neural network model is a sub-image of the visible-light band image V(x, y, 1), denoted I(i, j), where (i, j) are the coordinates of an image pixel within the sub-image.
The hidden layers of the neural network model consist of three convolutional layers and a fully connected layer; each layer is a function of the previous layer. The first convolutional layer is defined as follows:

C1(i, j) = f( Σp,q K1(p, q) · I(i + p, j + q) + b1 ) (7)

K1 is a rectangular convolution kernel of size 3 × 3; p and q represent relative position coordinates within the convolution kernel, taken as offsets from the reference position (i, j); b1 is a linear bias. The convolution kernel and the linear bias parameters are determined by learning. The excitation function f is a nonlinear piecewise function built from the natural logarithmic function log and the natural exponential e; its exact form is given in the original drawings as equation (8). The nonlinear piecewise function f establishes a nonlinear mapping between input and output; through it, overfitting of the model's classification relative to a traditional nonlinear natural-logarithm model is further reduced, and the model's recognition deviation on learning and test samples is reduced.
The second convolutional layer is defined as follows:

C2(i, j) = f( Σp,q K2(p, q) · C1(i + p, j + q) + b2 ) (9)

K2 is a rectangular convolution kernel of size 9 × 9; p and q represent relative position coordinates within the convolution kernel and each take 9 integer values. b2 is a linear bias. The convolution kernel and the linear bias parameters are determined by learning; f is defined as in (8).
The third convolutional layer is defined as follows:

C3(i, j) = f( Σp,q K3(p, q) · C2(i + p, j + q) + b3 ) (10)

K3 is a rectangular convolution kernel of size 17 × 17; p and q represent relative position coordinates within the convolution kernel and each take 17 integer values. b3 is a linear bias. The convolution kernel and the linear bias parameters are determined by learning; f is defined as in (8).
The three convolutional layers jointly extract face-related local image features; convolution kernels of different sizes accommodate local image features at different scales, and since convolution has fast parallel implementations on modern computers, the method is efficient to implement.
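One convolutional layer of the form used in equations (7), (9) and (10) can be sketched as follows. The softplus default activation is only an illustrative stand-in for the patent's piecewise excitation function, whose exact form appears only in the drawings.

```python
import math

def conv_layer(img, kernel, bias, act=lambda u: math.log(1 + math.exp(u))):
    """One 'valid' 2-D convolution: output(i, j) is
    act( sum_{p,q} K(p, q) * input(i+p, j+q) + b ),
    matching the layer definitions of equations (7), (9), (10)."""
    kh, kw = len(kernel), len(kernel[0])
    H, W = len(img), len(img[0])
    # Output shrinks by kernel size - 1 in each direction (no padding).
    return [[act(sum(kernel[p][q] * img[i + p][j + q]
                     for p in range(kh) for q in range(kw)) + bias)
             for j in range(W - kw + 1)]
            for i in range(H - kh + 1)]
```

Chaining three calls with 3 × 3, 9 × 9 and 17 × 17 kernels reproduces the shape of the hidden layers described above.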
The fully connected layer after the three convolutional layers is defined as follows:

h(k) = f( Σ_m u(m, k) · y3(m) + b4 )    (11)

where u(m, k) represents the connection between node m of the third convolutional layer and node k of the fully connected layer, y3(m) denotes a node of the third convolutional layer, and h(k) denotes a node of the fully connected layer. The fully connected layer contains 256 nodes, and one connection exists between every node of the third convolutional layer and every node of the fully connected layer. b4 is a linear bias. f is the excitation function defined as (8).
Face = f( Σ_k v(k) · h(k) + b5 )    (12)

where v(k) represents the connection between node k of the fully connected layer and the output node Face, and h(k) denotes a node of the fully connected layer. One connection exists between every node of the fully connected layer and Face. b5 is a linear bias. f is the excitation function defined as (8).
Face takes the value 0 or 1: Face = 0 indicates that the input image is a non-face image, and Face = 1 indicates that the input image is a face image. Here "face" means a non-specific face, not the face of a particular person.
The neural network model (7)-(12) is learned with the following cost function, and after learning is finished the model is used to detect non-specific faces:

[Formula (13): cost function, rendered as an image in the original]

where t is the true value of the learning sample corresponding to an image sample, o is the output value computed by substituting the learning-sample input into the model, and λ is a control parameter that helps improve the robustness of the model to noise; a preferred value of λ is specified (rendered as an image in the original). The cost function is solved iteratively to obtain the optimized value of each parameter in model (7)-(12).
After model learning is completed, timed monitoring can be implemented: a visible-light and infrared composite-band image is collected at fixed intervals, the neural network model is applied to the visible-light band portion to detect whether a non-specific face is present, and measures are taken according to S21-S24.
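A minimal sketch of such a timed monitoring loop (capture_composite_image and detect_nonspecific_face are hypothetical placeholders, not APIs defined by the patent):

```python
import time

def capture_composite_image():
    """Hypothetical stand-in for acquiring one visible + infrared frame."""
    return {"visible": None, "infrared": None}

def detect_nonspecific_face(visible_band):
    """Hypothetical stand-in for the learned detector (7)-(12)."""
    return False

def monitor(interval_s, cycles):
    """Every interval_s seconds, check the visible band for a non-specific
    face; on detection, a real system would apply the measures S21-S24
    (issue a warning, and after repeated detections force a logout)."""
    detections = 0
    for _ in range(cycles):
        frame = capture_composite_image()
        if detect_nonspecific_face(frame["visible"]):
            detections += 1
        time.sleep(interval_s)
    return detections

print(monitor(interval_s=0.0, cycles=3))
```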
The invention provides an intelligent sensing and safety control method for a ruggedized computer, realizing intelligent sensing and safety control through automatic image recognition. Table 1 shows the application test indexes of the method, and Table 2 shows those of the prior art. The experimental results show that in benign environments the invention has no obvious advantage over the prior art, but in complex environments, in the field and on pitching ships, its performance far exceeds that of the prior art. The method can therefore effectively identify abnormal environments, react to them quickly, and realize intelligent sensing and safety control of the ruggedized computer.
TABLE 1
[Table 1: application test indexes of the proposed method, rendered as an image in the original]
TABLE 2
[Table 2: application test indexes of the prior art, rendered as an image in the original]
The above embodiments are only illustrative examples and cannot, for reasons of space, be exhaustive; the scope of the claims is therefore not limited thereto, and all technical solutions similar to the above products and methods fall within the scope of the present application.

Claims (10)

1. An intelligent sensing and safety control method for a ruggedized computer is characterized by comprising the following steps:
step 1: acquiring visible-light and infrared composite-band images at the same moment, denoted V(x, y, z), where x and y are position coordinates and z marks the imaging band, and establishing a hybrid observation model:

[Formulas (1) and (2): hybrid observation model, rendered as images in the original]
calculating the probability that any coordinate in the hybrid observation O(x, y) is a face, obtaining a binary face-marked image B(x, y) according to a threshold, and thereby marking the range of the face in the hybrid observation; filtering B(x, y) with a pixel-by-pixel median filter to obtain a filtered image; taking the set of face-marked pixels of the filtered image:

[Formula (3)]
wherein R denotes a rectangular area over the pixel set of the filtered image, w and h denote respectively the width and the height of the rectangular area, and |·| denotes the number of pixels contained in a pixel set;

[Formula (4)]
if the rectangular area R solved according to formula (4) satisfies formula (5), the region R is considered to be a face region; otherwise no face can be found in the current image:

[Formula (5)]

wherein the thresholds appearing in formula (5) are control thresholds; after the face region is determined, it is divided into subgraphs, convolution operations are performed on the subgraphs to extract features, and whether the person is a legitimate logger-in is judged according to the features;
step 2: after login is completed, scanning the surrounding area with windows of different sizes and feeding the resulting images into a neural network model to judge whether they contain the face of a non-logged-in person; if so, issuing warning information; if warning information is issued over multiple consecutive frames, controlling the ruggedized computer to enter a logged-out state;
wherein in the neural network model the hidden layer consists of three convolutional layers and one fully connected layer; the outputs of the three convolutional layers are:
y1(i, j) = f( Σ_p Σ_q w1(p, q) · x(i+p, j+q) + b1 )
y2(i, j) = f( Σ_p Σ_q w2(p, q) · y1(i+p, j+q) + b2 )
y3(i, j) = f( Σ_p Σ_q w3(p, q) · y2(i+p, j+q) + b3 )

wherein w1, w2, w3 are the convolution kernels of the respective layers, increasing in size in sequence; p and q represent relative position coordinates within a convolution kernel; b1, b2, b3 are the linear biases of the respective layers; and the excitation function is:

[Formula: piecewise excitation function f, rendered as an image in the original]
2. The intelligent sensing and security control method for ruggedized computers according to claim 1, characterized in that: the image acquisition uses an ultra-wide-angle camera.
3. The intelligent sensing and security control method for ruggedized computers according to claim 1, characterized in that: in step 2, an environment image is acquired at regular intervals with the ultra-wide-angle camera to monitor the login environment around the ruggedized computer.
4. The intelligent sensing and security control method for ruggedized computers according to claim 1, characterized in that: the width and the height of the face rectangle obtained by the method in step 1, taken respectively as [expression rendered as an image in the original], serve as the upper threshold of the scanning window.
5. The intelligent sensing and security control method for ruggedized computers according to claim 4, characterized in that: a lower threshold [expression rendered as an image in the original] is also set.
6. The intelligent sensing and security control method for ruggedized computers according to claim 1, characterized in that: after the computer enters the logged-out state in step 2, the method of step 1 must be used again for verification before the computer re-enters the logged-in state.
7. The intelligent sensing and security control method for ruggedized computers according to claim 1, characterized in that: w1 is a rectangular convolution kernel of size 3 × 3.
8. The intelligent sensing and security control method for ruggedized computers according to claim 1, characterized in that: w2 is a rectangular convolution kernel of size 9 × 9.
9. The intelligent sensing and security control method for ruggedized computers according to claim 1, characterized in that: w3 is a rectangular convolution kernel of size 17 × 17.
10. The intelligent sensing and security control method for ruggedized computers according to claim 1, characterized in that: the output layer is connected after the fully connected layer of the neural network model.
CN202211029826.1A 2022-08-26 2022-08-26 Intelligent sensing and safety control method for ruggedized computer Active CN115130082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211029826.1A CN115130082B (en) 2022-08-26 2022-08-26 Intelligent sensing and safety control method for ruggedized computer


Publications (2)

Publication Number Publication Date
CN115130082A CN115130082A (en) 2022-09-30
CN115130082B true CN115130082B (en) 2022-11-04

Family

ID=83387918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211029826.1A Active CN115130082B (en) 2022-08-26 2022-08-26 Intelligent sensing and safety control method for ruggedized computer

Country Status (1)

Country Link
CN (1) CN115130082B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803301A (en) * 2017-03-28 2017-06-06 广东工业大学 A kind of recognition of face guard method and system based on deep learning
CN110414305A (en) * 2019-04-23 2019-11-05 苏州闪驰数控系统集成有限公司 Artificial intelligence convolutional neural networks face identification system
WO2020258121A1 (en) * 2019-06-27 2020-12-30 深圳市汇顶科技股份有限公司 Face recognition method and apparatus, and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10354159B2 (en) * 2016-09-06 2019-07-16 Carnegie Mellon University Methods and software for detecting objects in an image using a contextual multiscale fast region-based convolutional neural network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Survey of Face Recognition Research Based on Convolutional Neural Networks; Bao Ruidong et al.; 《软件导刊》 (Software Guide); 2018-04-15 (No. 04); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant