CN115130082B - Intelligent sensing and safety control method for ruggedized computer - Google Patents
- Publication number
- CN115130082B (application CN202211029826.1A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- control method
- intelligent sensing
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
To meet the security requirements of a special-purpose computer, such as guarding against screen peeking and the user leaving the machine unattended, the method uses an ultra-wide-angle camera mounted on the computer to photograph the nearby environment, identify the user, scan the user's surroundings, sense and react to abnormal environmental states, and apply safety-protection control to the computer.
Description
Technical Field
The invention belongs to the cross-application field of machine vision and machine learning, and particularly relates to an intelligent sensing and safety control method for a special-purpose computer.
Background
A special-purpose computer within a ruggedized computer generally refers to a computer system built for a particular use or application, such as one with security requirements. Modern computer equipment is commonly fitted with image-based identity authentication such as face recognition, which confirms the identity of the person logging in by recognizing facial biometric features and thereby safeguards the computer's use. However, for some special-purpose computers an existing login verification system based on face recognition alone cannot meet the security requirements; for example, it cannot respond well to risks such as screen peeking or the user leaving.
In particular, some special-purpose computers do not operate in a stable environment such as a machine room, but must be used in the field, on ships, and in other demanding settings. In such settings the background is complex, the illumination changes violently, and the whole device shakes. Conventional image-detection methods facing such environments can neither achieve fast, accurate login nor comprehensively and accurately monitor the session after login.
The image-detection algorithm therefore needs to be specifically optimized for complex environments so as to improve the safety of the special-purpose computer in use. At the same time, ensuring the computer's safety throughout the session is an urgent problem to be solved.
Disclosure of Invention
To solve the above problems, the invention set out below is proposed.
Aiming at the requirement of a special-purpose computer to avoid potential safety hazards such as screen peeking and the user leaving, the method uses an ultra-wide-angle camera installed on the computer to photograph the nearby environment, identify the user, scan the user's surroundings, sense and react to abnormal environmental states, and apply safety-protection control to the special-purpose computer.
Intelligent sensing and safety control method for ruggedized computer
Step 1: acquiring visible light and infrared composite waveband images at the same moment, and recording the images as V (x, y, z), wherein x and y are position coordinates, and z is a mark of an image imaging waveband, so that a hybrid observation model is established:
calculating the probability that any coordinate in the mixed observation O (x, y) is a face, further obtaining a binary image B (x, y) marked by the face according to a threshold value, and further marking the range of the face in the mixed observation; to pairFiltering by adopting pixel-by-pixel median to obtain filtered image(ii) a GetIs set of pixels, denoted as;
Wherein the content of the first and second substances,representA subset of the rectangular areas in the array,、respectively representing the width and height of the rectangular area;a symbol represents the number of pixels included in a set of pixel compositions;
if the rectangular area is solved according to the formula (4)If the formula (5) is satisfied, the region is considered to beIf the image is a face area, otherwise, the face cannot be found in the current image; wherein、Is a control threshold; after the face area is determined, the face area is divided into sub-images, convolution operation is carried out on the sub-images respectively to extract features, and whether the face area is a legal login person or not is judged according to the features;
Step 2: after login, scan the surrounding area with detection windows of different sizes and feed each sub-image into a neural network model to judge whether it contains the face of a non-logged-in person; if so, issue warning information. After warnings have been issued over several consecutive frames, the special-purpose computer is controlled to enter the logged-out state.
The neural network model is as follows: the hidden layers consist of three convolutional layers and a fully connected layer. The outputs of the three convolutional layers are given by formulas (7), (9) and (10), where K1, K2 and K3 are the convolution kernels of the respective layers, increasing in size in sequence; p and q denote relative position coordinates within a kernel; b1, b2 and b3 are the linear biases of the respective layers; and the excitation function is the nonlinear piecewise function f defined in (8).
Image acquisition uses an ultra-wide-angle camera.
In step 2, the ultra-wide-angle camera acquires an environment image at regular intervals to monitor the login environment around the special-purpose computer.
The width w* and height h* of the face rectangle obtained by the method of step 1 are used as the upper threshold of the detection window.
After the computer enters the logged-out state in step 2, the method of step 1 must be used again for verification before the logged-in state is re-entered.
An output layer is connected after the fully connected layer of the neural network model.
The invention has the following advantages:
1. The infrared image and the visible-light image are weighted and superposed into one image, and the algorithm is optimized so that the face range can be delimited quickly and accurately; combined with the feature-extraction algorithm, this ensures accurate extraction of facial features. The method is therefore more efficient at recognition than using the captured picture directly or after only simple processing. Fast and accurate extraction of facial features is then achieved by optimizing the extraction template and algorithm. Compared with traditional face recognition using a plain neural network or image-segmentation recognition, the method is quicker and more interference-resistant, so login security and speed are maintained even in a harsh environment.
2. During environment scanning the real-time requirement is lower than during login, so to improve scanning efficiency and accuracy only the visible-light image is used; the neural network model is optimized with variable convolution kernels, a special excitation function, and a better cost function for training, ensuring fast and accurate recognition of non-logged-in faces in a harsh environment.
3. By continuously scanning the environment after face-scan login, and by alarming and returning to the logged-out state when danger is found, the invention makes face login authentication and scanning for other, unauthorized users cooperate, realizing safety precaution over the whole process from login to use. Aiming at the requirement of a special-purpose computer to avoid hazards such as screen peeking and the user leaving, the method photographs the environment near the computer, identifies the user, scans the surroundings, and senses and reacts to abnormal environmental states.
Detailed Description
Step 1: user identification and login control based on ultra-wide-angle images. An ultra-wide-angle camera shoots a visible-light and infrared composite-band image of the user's face; a computer-vision recognition method compares whether the face appearing in the image matches the user's pre-registered face image, and login is allowed only after a match is confirmed.
An ultra-wide-angle camera here means a camera whose lens covers a visual range greater than 170 degrees; multimode acquisition means that the camera's photosensitive element can capture images in a composite band spanning the infrared and visible-light bands.
Because of the optical-lens manufacturing process, an ultra-wide-angle lens produces larger image distortion than a conventional lens, which complicates computer-vision recognition tasks. To improve the accuracy of facial-feature recognition, the invention proposes performing identity recognition using the visible-light and infrared composite-band images jointly.
A camera capable of acquiring visible-light and infrared composite-band images is installed at a suitable position at the top of the special-purpose computer so that it can easily capture the user's face. A composite-band image possibly containing a face is collected and recorded as V(x, y, z), where (x, y) are the position coordinates of a pixel and z marks the imaging band: when z = 1 the pixel belongs to the visible-light band, and when z = 2 to the infrared band.
The hybrid observation O(x, y) is expressed as the weighted sum of the pixels at corresponding positions in the visible and infrared bands:
O(x, y) = w1 * V(x, y, 1) + w2 * V(x, y, 2)    (1)
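As a minimal sketch of this band-fusion step, the weighted sum can be written as follows. The weight values w1 and w2 are illustrative placeholders (the patent learns its parameters by maximum likelihood), and the images are plain nested lists rather than camera frames:

```python
def hybrid_observation(vis, ir, w1=0.6, w2=0.4):
    """Fuse visible-band and infrared-band images pixel by pixel.

    Computes O(x, y) = w1 * V(x, y, visible) + w2 * V(x, y, infrared).
    w1 and w2 are illustrative values, not the patent's learned weights.
    """
    return [[w1 * v + w2 * r for v, r in zip(vis_row, ir_row)]
            for vis_row, ir_row in zip(vis, ir)]
```

With equal-shaped inputs, each output pixel is simply the two-band weighted average, so the fused image keeps the original resolution.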
Let p(O(x, y); θ) denote the probability that the mixed-observation pixel value O(x, y) belongs to a face, where θ denotes the distribution parameters of the face-pixel mixed observation. Modeling with a Gaussian model:
p(O(x, y); μ, σ²) = (1 / sqrt(2πσ²)) * exp(−(O(x, y) − μ)² / (2σ²))    (2)
where π is the circumference ratio, exp denotes the natural exponential function, μ is the mean parameter of the Gaussian model, and σ² is its variance parameter.
Prepare sample data of composite-band face images; according to equations (1) and (2), the optimal values of the parameters w1, w2, μ and σ can then be learned by maximum likelihood. With the optimal parameters, the probability that any coordinate of the mixed observation O(x, y) is a face can be computed, and thresholding then yields the face-marked binary image B(x, y), which marks the range of the face in the mixed observation.
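Under the Gaussian model of equation (2), the maximum-likelihood estimates of μ and σ² are the sample mean and variance. A hedged sketch of the fit-then-threshold pipeline follows; the function names and the threshold value are illustrative, not taken from the patent:

```python
import math

def fit_gaussian(samples):
    # Maximum-likelihood estimates for a 1-D Gaussian: sample mean and variance.
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / len(samples)
    return mu, var

def face_probability(o, mu, var):
    # Equation (2): Gaussian density of a mixed-observation pixel value.
    return math.exp(-(o - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def binarize(image, mu, var, threshold):
    # B(x, y) = 1 where the face probability exceeds the threshold, else 0.
    return [[1 if face_probability(o, mu, var) > threshold else 0 for o in row]
            for row in image]
```

A pixel whose fused value lies near the learned face mean gets a high density and is marked 1; far-off values are marked 0.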
Filter B(x, y) with a pixel-wise median filter to obtain the filtered image B'(x, y). Based on experimental data, a preferred filter window is 9 × 9 at an original image resolution of 640 × 480.
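The pixel-wise median filter can be sketched as below. A 3 × 3 window is used to keep the example small (the patent prefers 9 × 9 at 640 × 480), and border pixels simply use the clipped window, which is one of several common border policies:

```python
def median_filter(img, k=3):
    """Pixel-wise median filter with a k x k window over a 2-D list image.

    k=3 here for brevity; the patent's preferred window is 9 x 9.
    Border pixels use the window clipped to the image bounds.
    """
    h, w = len(img), len(img[0])
    r = k // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            win = [img[j][i]
                   for j in range(max(0, y - r), min(h, y + r + 1))
                   for i in range(max(0, x - r), min(w, x + r + 1))]
            win.sort()
            row.append(win[len(win) // 2])  # middle element of the sorted window
        out.append(row)
    return out
```

On a binary mask this removes isolated 1-pixels (speckle noise) while preserving solid face blobs, which is exactly why it is applied before the rectangle search.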
The method above marks the face part of the mixed observation as a set of pixels; take S as this pixel set, denoted S = {(x, y) | B'(x, y) = 1}. The | . | symbol denotes the number of pixels in a pixel set. Define R(w, h) as a set satisfying the following condition:
R(w, h) = {(x, y) | x0 ≤ x < x0 + w, y0 ≤ y < y0 + h}    (3)
where R(w, h) represents a rectangular region of the image, w and h respectively representing its width and height, and ∩ represents the intersection of sets, i.e., the set consisting of pixels belonging to both sets.
Compute:
R* = argmax over R(w, h) of |R(w, h) ∩ S|    (4)
If the rectangular area R* solved according to formula (4) satisfies formula (5),
|R* ∩ S| / (w * h) ≥ T1 and |R* ∩ S| ≥ T2    (5)
then the region R* is considered to be the face region; otherwise the face cannot be found in the current image. In formulas (4) and (5), T1 and T2 are control thresholds whose preferred values were determined from experimental data.
The rectangular region R* obtained by solving the conditions (4) and (5) together, mapped back to the original image, gives the position of the face in the original image. Compared with traditional face detection based on a visible-light Gaussian model, this method of extracting the face region from the visible-light and infrared composite-band image has higher precision; compared with recently popular face detection based on high-dimensional parametric models such as neural networks, it is more efficient. It thus meets engineering application requirements, avoids long waits at login, and improves the user experience while ensuring security.
Further, let F denote the region of the visible-light and infrared composite-band image V(x, y, z) where the face is located, with (x, y) and z the position coordinates and band mark of each pixel. Divide F along the x and y directions into 8 × 8 equal-sized subsets, denoted F(1,1), F(1,2), ..., F(8,8). If the original size is not evenly divisible by 8, pad with zero pixels.
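The zero-padded 8 × 8 subdivision can be sketched as follows; the function name and the list-of-lists representation are illustrative choices, not the patent's implementation:

```python
def split_into_subsets(region, n=8):
    """Split a 2-D region into an n x n grid of equal-size blocks.

    Each dimension is zero-padded up to the next multiple of n, matching the
    patent's note that sizes not divisible by 8 are complemented with zero pixels.
    Returns blocks[p][q], each a (h/n) x (w/n) sub-image.
    """
    h, w = len(region), len(region[0])
    ph, pw = -h % n, -w % n  # padding needed so n divides each dimension
    padded = [row + [0] * pw for row in region] + [[0] * (w + pw)] * ph
    bh, bw = (h + ph) // n, (w + pw) // n
    return [[[r[q * bw:(q + 1) * bw] for r in padded[p * bh:(p + 1) * bh]]
             for q in range(n)] for p in range(n)]
```

A 10 × 10 face region, for example, is padded to 16 × 16 and split into sixty-four 2 × 2 blocks; the bottom-right blocks are pure padding.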
Compute formula (6), in which N, as defined above, is the number of pixels of the face region; the indices p and q take values in the range 1 to 8. The normalization coefficient is defined as above in formula (7).
A total of 8 × 8 × 4 = 256 results are obtained from equation (6), forming a 256-dimensional vector g.
Binary classification is performed on the vector g to identify whether the face part in the image belongs to the specific user. Several visible-light and infrared composite-band face images of the specific user are shot, the corresponding vectors g are computed as above, and a training sample set is formed. A binary classification algorithm such as a support vector machine then yields a recognition model for the specific user's face image, able to recognize whether an input is a positive sample (the specific user's face) or a negative sample (not the specific user's face). Through this template, features can be extracted quickly and accurately, and the detection effect is good.
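The patent leaves the classifier choice open ("such as a support vector machine"). As a stand-in sketch, a nearest-centroid rule over the feature vectors shows the shape of the positive/negative decision; the centroid-distance rule is an assumption for illustration, not the patent's classifier:

```python
def centroid(vectors):
    # Component-wise mean of a list of equal-length feature vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(vec, pos_centroid, neg_centroid):
    # 1 = positive sample (the specific user's face), 0 = negative sample.
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return 1 if sqdist(vec, pos_centroid) <= sqdist(vec, neg_centroid) else 0
```

In practice the 256-dimensional vectors of the registered user's images form the positive set, and vectors from other faces form the negative set; a margin-based classifier such as an SVM would replace the centroid rule.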
After the special-purpose computer starts, the system is in the logged-out state. A new group of visible-light and infrared composite-band images is input to the recognition model through the camera; the model judges whether the input is a positive or negative sample, and if it is judged positive, login to the system is allowed. After login, the special-purpose computer system is in the logged-in state.
Step 2: login-environment monitoring based on ultra-wide-angle images. The environment around the user is scanned and monitored in the ultra-wide-angle image, and if anyone other than the user is found, a monitoring alarm is started.
After the specific user of the special-purpose computer has logged in using the method of step 1, the system enters the logged-in state. In this state the surroundings of the special-purpose computer are monitored at regular intervals by capturing images. The timed monitoring method and process are as follows.
At fixed intervals a visible-light and infrared composite-band image is collected, and a non-specific face detection algorithm is run on its visible-light part to detect whether a non-specific face is present. A non-specific face means a face image regardless of identity. For non-specific face detection only the visible-band image is used, not the infrared-band image, which improves computational efficiency.
The width w* and height h* of the face rectangle obtained by the method of step 1 are used as the upper threshold; lower thresholds w0 and h0 are additionally set, and the ranges [w0, w*] and [h0, h*] are taken as the candidate sizes for non-specific face detection. Let s1 be the step by which the detection area grows in the width and height directions, and s2 the step by which the detection area's start point moves in the width and height directions. The visible-band image acquired in step 2 is detected according to S21 to S24.
S21: set the start-point coordinates of the initial detection area to (0, 0) and its width and height to w0 and h0; the rectangular area they define is the initial detection area. Non-specific face detection is applied to the sub-image of the visible-band image inside the detection area.
S21.1: if the non-specific face detection method returns that the sub-image is a face, use the method of step 1, together with the visible-light and infrared composite-band image of the corresponding area, to detect whether the sub-image belongs to a positive sample of the specific user. If it belongs to a positive sample of the specific user, continue with step S22; otherwise go to step S24.
S21.2: otherwise, if the non-specific face detection method returns that the sub-image is not a face, continue with step S22.
S22: move the start-point coordinates of the detection area by step s2, the number of step movements being an integer; after each move of the start point, detect according to the method of S21 until the whole image has been completely scanned at the current detection-area size. Go to step S23.
S23: enlarge the detection area by step s1 and repeat steps S21 and S22.
S24: issue warning information indicating that a face with an identity other than the specific user has been detected in the current image. As an optimized automatic safety control, after warnings have been issued over several consecutive frames, the special-purpose computer is controlled to enter the logged-out state, protecting the data. After the computer enters the logged-out state, the method of step 1 must be used again for verification before the logged-in state is re-entered.
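The window enumeration of S21 to S23, growing the detection area from the lower to the upper size threshold and sliding its start point across the image, can be sketched as follows. The parameter names are illustrative; a face detector would be run on each yielded window:

```python
def scan_windows(img_w, img_h, w_min, w_max, h_min, h_max, size_step, move_step):
    """Enumerate detection windows as in S21-S23.

    For each candidate window size between the lower and upper thresholds
    (grown by size_step), slide the start point over the image by move_step.
    Yields (x0, y0, w, h) tuples describing each sub-image to test.
    """
    w, h = w_min, h_min
    while w <= w_max and h <= h_max:
        for y0 in range(0, img_h - h + 1, move_step):
            for x0 in range(0, img_w - w + 1, move_step):
                yield (x0, y0, w, h)
        w += size_step  # S23: enlarge the detection area
        h += size_step
```

On an 8 × 8 image with sizes 4 and 8 and step 4, this enumerates four small windows plus the one full-image window.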
In S21 a convolutional neural network model performs face detection on the sub-image. The neural network model consists of an input layer, an output layer and hidden layers.
The input of the neural network model is a sub-image of the visible-band image, denoted I(x, y), where (x, y) are the coordinates of an image pixel within the sub-image.
The hidden layers of the neural network model consist of three convolutional layers and one fully connected layer, each layer being a function of the previous one. The first convolutional layer is defined as:
C1(x, y) = f( Σ over p, q of K1(p, q) * I(x + p, y + q) + b1 )    (7)
where K1 is a rectangular convolution kernel of size 3 × 3; p and q represent relative position coordinates within the kernel, taking the integer values from −1 to 1 as offsets from the reference position; b1 is a linear bias. The convolution kernel and linear bias parameters are determined by learning. The excitation function f is the nonlinear piecewise function (8), built from the natural logarithm log and the natural exponential e. The nonlinear piecewise function establishes a nonlinear input-output mapping that further reduces the overfitting of model classification seen with a traditional nonlinear natural-logarithm model, and reduces the model's recognition deviation between learning and test samples.
The second convolutional layer is defined analogously:
C2(x, y) = f( Σ over p, q of K2(p, q) * C1(x + p, y + q) + b2 )    (9)
where K2 is a rectangular convolution kernel of size 9 × 9, and p and q take the 9 integer values between −4 and 4; b2 is a linear bias. The convolution kernel and linear bias parameters are determined by learning; f is as defined in (8).
The third convolutional layer is defined as:
C3(x, y) = f( Σ over p, q of K3(p, q) * C2(x + p, y + q) + b3 )    (10)
where K3 is a rectangular convolution kernel of size 17 × 17, and p and q take the 17 integer values between −8 and 8; b3 is a linear bias. The convolution kernel and linear bias parameters are determined by learning; f is as defined in (8).
The three convolutional layers jointly extract face-related local image features; kernels of different sizes accommodate local image features at different scales, and since convolution has fast parallel implementations on modern computers, the method is efficient to realize.
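A single convolutional layer of this kind reduces to a windowed sum plus a bias. The sketch below implements one "valid" correlation-style pass over a single-channel image (kernel sizes 3, 9 and 17 in the patent; any odd size works here), omitting the excitation function, whose exact piecewise form the patent's formula (8) defines:

```python
def conv2d_valid(img, kernel, bias=0.0):
    """'Valid' 2-D correlation of a single-channel image with a square kernel.

    Mirrors one convolutional layer's weighted sum plus linear bias; the
    excitation function f of formula (8) is intentionally left out.
    """
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(kernel[p][q] * img[y + p][x + q]
                 for p in range(kh) for q in range(kw)) + bias
             for x in range(ow)] for y in range(oh)]
```

Stacking three such layers with kernels of growing size (and an excitation between them) yields receptive fields that cover progressively larger face parts.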
The fully connected layer after the three convolutional layers is defined by formula (11), in which a weight represents the connection between a node of the third convolutional layer and a node of the fully connected layer; the fully connected layer contains 256 nodes, and there is one connection between every node of the third convolutional layer and every node of the fully connected layer. b4 is a linear bias and f is as defined in (8).
Formula (12) defines the output layer: a weight represents the connection between each node of the fully connected layer and the output node Face, with one connection between every fully connected node and Face. b5 is a linear bias and f is as defined in (8).
Face takes the value 0 or 1: Face = 0 indicates that the input image is not a face image, and Face = 1 that it is. The face here is specifically a non-specific face.
The neural network model (7)-(12) is learned with the cost function (13), and after learning the model is used for non-specific face detection. In (13), t is the ground-truth value of the learning sample for an image sample, and y is the output value computed by substituting the learning sample input into the model; ε > 0 is a control parameter that helps improve the model's robustness to noise, with a preferred value determined experimentally. The cost function is solved iteratively to obtain the optimized values of all parameters of the model (7)-(12).
After model learning is completed, timed monitoring can begin: at fixed intervals a visible-light and infrared composite-band image is collected, the neural network model is applied to its visible-band part to detect whether a non-specific face is present, and measures are taken according to S21 to S24.
The invention provides an intelligent sensing and safety control method for a special-purpose computer, realizing intelligent sensing and safety control through automatic image recognition. Table 1 shows the application test indexes of the method, and Table 2 shows those of the prior art. The experimental results show that in a benign environment the invention has no obvious advantage over the prior art, but its performance in complex environments in the field and on pitching ships is far higher. The method can therefore effectively identify abnormal environments, react to them quickly, and realize intelligent sensing and safety control of the special-purpose computer.
TABLE 1
TABLE 2
The above embodiments are only limited examples and cannot be exhaustive; the scope of the claims is therefore not limited thereto, and all technical solutions similar to the above products and methods fall within the scope of the present application.
Claims (10)
1. An intelligent sensing and safety control method for a ruggedized computer is characterized by comprising the following steps:
step 1: acquire visible-light and infrared composite-band images at the same moment, record them as V(x, y, z), where x and y are position coordinates and z is the mark of the imaging band, and establish the hybrid observation model as the weighted sum of the two bands:
O(x, y) = w1 * V(x, y, 1) + w2 * V(x, y, 2)
calculate the probability that any coordinate in the mixed observation O(x, y) is a face, obtain the face-marked binary image B(x, y) by thresholding, and thereby mark the range of the face in the mixed observation; filter B(x, y) with a pixel-wise median filter to obtain the filtered image B'(x, y); take S as its set of face-marked pixels, denoted S = {(x, y) | B'(x, y) = 1};
wherein R(w, h) represents a rectangular-area subset of the image, w and h respectively representing the width and height of the rectangular area, and the | . | symbol represents the number of pixels included in a pixel set;
if the rectangular area R* solved according to formula (4) satisfies formula (5), the region R* is considered to be the face region; otherwise the face cannot be found in the current image; here T1 and T2 are control thresholds; after the face region is determined, it is divided into sub-images, a convolution operation is applied to each sub-image to extract features, and whether the user is a legal login person is judged from the features;
step 2: after login, scan the surrounding area with detection windows of different sizes and feed each sub-image into a neural network model to judge whether the obtained image contains the face of a non-logged-in person; if so, issue warning information; after warnings have been issued over several consecutive frames, the special-purpose computer is controlled to enter the logged-out state;
wherein the neural network model is as follows: the hidden layers consist of three convolutional layers and a fully connected layer; the outputs of the three convolutional layers are given by formulas (7), (9) and (10), where K1, K2 and K3 are the convolution kernels of the respective layers, increasing in size in sequence; p and q represent relative position coordinates within a kernel; b1, b2 and b3 are the linear biases of the respective layers; and the excitation function is the nonlinear piecewise function f defined in (8).
2. The intelligent sensing and safety control method for a ruggedized computer according to claim 1, characterized in that: acquisition uses an ultra-wide-angle camera.
3. The intelligent sensing and safety control method for a ruggedized computer according to claim 1, characterized in that: in step 2 the ultra-wide-angle camera acquires an environment image at regular intervals to monitor the login environment around the special-purpose computer.
6. The intelligent sensing and safety control method for a ruggedized computer according to claim 1, characterized in that: after the computer enters the logged-out state in step 2, the method of step 1 must be used again for verification before the logged-in state is re-entered.
10. The intelligent sensing and safety control method for a ruggedized computer according to claim 1, characterized in that: an output layer is connected after the fully connected layer of the neural network model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211029826.1A CN115130082B (en) | 2022-08-26 | 2022-08-26 | Intelligent sensing and safety control method for ruggedized computer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115130082A (en) | 2022-09-30
CN115130082B (en) | 2022-11-04
Family
ID=83387918
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211029826.1A Active CN115130082B (en) | 2022-08-26 | 2022-08-26 | Intelligent sensing and safety control method for ruggedized computer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115130082B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106803301A (en) * | 2017-03-28 | 2017-06-06 | Guangdong University of Technology | A face recognition protection method and system based on deep learning
CN110414305A (en) * | 2019-04-23 | 2019-11-05 | Suzhou Shanchi CNC System Integration Co., Ltd. | Artificial intelligence convolutional neural network face recognition system
WO2020258121A1 (en) * | 2019-06-27 | 2020-12-30 | Shenzhen Goodix Technology Co., Ltd. | Face recognition method and apparatus, and electronic device
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10354159B2 (en) * | 2016-09-06 | 2019-07-16 | Carnegie Mellon University | Methods and software for detecting objects in an image using a contextual multiscale fast region-based convolutional neural network |
2022-08-26: application CN202211029826.1A filed in China; granted as patent CN115130082B (status: Active)
Non-Patent Citations (1)
Title |
---|
A Survey of Face Recognition Research Based on Convolutional Neural Networks; Bao Ruidong et al.; Software Guide (《软件导刊》); 2018-04-15 (Issue 04); full text *
Also Published As
Publication number | Publication date |
---|---|
CN115130082A (en) | 2022-09-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Jourabloo et al. | Face de-spoofing: Anti-spoofing via noise modeling | |
Zhang et al. | Deep-IRTarget: An automatic target detector in infrared imagery using dual-domain feature extraction and allocation | |
Cai et al. | DRL-FAS: A novel framework based on deep reinforcement learning for face anti-spoofing | |
Tian et al. | Detection and separation of smoke from single image frames | |
Raghavendra et al. | Scaling-robust fingerprint verification with smartphone camera in real-life scenarios | |
CN104951940B (en) | A mobile payment verification method based on personal recognition | |
WO2015149534A1 (en) | Gabor binary pattern-based face recognition method and device | |
Chen et al. | An adaptive CNNs technology for robust iris segmentation | |
Ishikura et al. | Saliency detection based on multiscale extrema of local perceptual color differences | |
CN111709313B (en) | Pedestrian re-identification method based on local and channel combination characteristics | |
CN106778517A (en) | A vehicle re-identification method for surveillance video sequence images | |
CN112580590A (en) | Finger vein identification method based on multi-semantic feature fusion network | |
CN108416291B (en) | Face detection and recognition method, device and system | |
CN109800643A (en) | A multi-angle identity recognition method for living faces | |
Zhang et al. | License plate localization in unconstrained scenes using a two-stage CNN-RNN | |
Yeh et al. | Face liveness detection based on perceptual image quality assessment features with multi-scale analysis | |
CN111222380A (en) | Living body detection method and device and recognition model training method thereof | |
CN110852292B (en) | Sketch face recognition method based on cross-modal multi-task depth measurement learning | |
CN115240280A (en) | Construction method of human face living body detection classification model, detection classification method and device | |
CN112434647A (en) | Human face living body detection method | |
CN111767879A (en) | Living body detection method | |
Szankin et al. | Influence of thermal imagery resolution on accuracy of deep learning based face recognition | |
Potje et al. | Extracting deformation-aware local features by learning to deform | |
Wang et al. | Domain generalization for face anti-spoofing via negative data augmentation | |
Huang et al. | Multi-Teacher Single-Student Visual Transformer with Multi-Level Attention for Face Spoofing Detection. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||