CN110728252A - Face detection method applied to regional personnel motion trail monitoring - Google Patents
- Publication number
- CN110728252A CN110728252A CN201911005541.2A CN201911005541A CN110728252A CN 110728252 A CN110728252 A CN 110728252A CN 201911005541 A CN201911005541 A CN 201911005541A CN 110728252 A CN110728252 A CN 110728252A
- Authority
- CN
- China
- Prior art keywords
- face
- personnel
- data
- server
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- G06V40/173—Classification, e.g. identification face re-identification, e.g. recognising unknown faces across different face tracks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/467—Encoded features or binary features, e.g. local binary patterns [LBP]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a face detection method applied to regional personnel motion trail monitoring, belonging to the technical field of face detection. The technical problem to be solved is the improvement of a face detection method applied to monitoring the movement tracks of regional personnel. The technical scheme for solving the problem is as follows: a video analysis service program is set up in a server and a face recognition algorithm is deployed, the images processed by the face recognition algorithm being photos or videos captured by network cameras; the face data of the active personnel in the area are input into the server, and a white-list face library is established; each network camera monitors the personnel flow at a different position in the area and independently acquires and stores face images of the personnel; a program in the server analyzes and processes the data and compares the identified face image data with the white-list face library to determine which registered personnel appeared at which network camera in the corresponding time period. The invention is applied to monitoring the movement tracks of personnel.
Description
Technical Field
The invention discloses a face detection method applied to regional personnel motion trail monitoring, and belongs to the technical field of face detection.
Background
With the high-speed development of information technology, the application of Artificial Intelligence (AI) has been pushed to a new height, and AI technology is used in more and more fields, from medical diagnosis, industrial automation and financial investment to smart homes and transportation, and from groups to individuals. One of its most important applications is face recognition; the technology is continuously improved and developed, has already been put into practical use, and is applied to various large information security fields such as transportation, intelligent security, image retrieval and identity verification.
Applying face recognition technology to the field of regional personnel monitoring can effectively improve monitoring efficiency. The following method is mainly adopted at present for detecting the motion tracks of regional personnel: a certain number of positioning base stations are installed in the area so that base station signals cover the whole region requiring positioning, and the personnel to be monitored are arranged to wear positioning labels; after entering the area, personnel wearing the labels can be monitored and positioned, and the position distribution of managed personnel can be watched on a map at the supervision centre. To judge a person's identity, the facial features of the person under the label name must be collected by a camera and recognized with an existing face recognition algorithm; however, due to factors such as changes in facial features and complex background environments, face recognition accuracy and efficiency have never been ideal, and an insufficiently optimized calculation method delays the data processing of the corresponding control module.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to solve the technical problems that: the improvement of the human face detection method applied to the regional personnel motion trail monitoring is provided.
In order to solve the technical problems, the invention adopts the technical scheme that: a face detection method applied to regional personnel motion trail monitoring comprises the following steps:
step one: installing network cameras in the area to be monitored and deploying a data processing server in a monitoring room, the server being provided with a display screen and a control keyboard; a video analysis service program is arranged in the server and a face recognition algorithm is deployed, the images processed by the face recognition algorithm being photos or videos captured by the network cameras;
step two: inputting the face data of the active personnel in the area into a server, and establishing a white list face library; in the using process, simultaneously storing the face data of the persons not in the white list, and establishing a non-admittance person face library;
step three: starting a detection system, wherein each network camera is used for monitoring personnel flow conditions at different positions in an area and independently collecting and storing face images of the personnel, each network camera feeds back collected face image data to a server, a program built in the server analyzes and processes the data, the identified face image data is compared with a white list face library, and which registered personnel appear in which network camera in a corresponding time period is confirmed;
step four: the server displays the action track of the personnel on a display screen and stores the data log into a data storage module.
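The comparison of step three — matching an identified face against the white-list face library — can be sketched as a nearest-neighbour search over stored face embeddings. The function names, the cosine-similarity measure and the acceptance threshold below are illustrative assumptions, not taken from the patent text:

```python
import numpy as np

def match_whitelist(embedding, library, threshold=0.6):
    """Return the best-matching registered person id, or None.

    'library' maps person id -> stored face embedding (1-D array).
    The cosine-similarity threshold is an illustrative assumption.
    """
    best_id, best_sim = None, -1.0
    e = np.asarray(embedding, dtype=float)
    for pid, ref in library.items():
        r = np.asarray(ref, dtype=float)
        sim = float(e @ r / (np.linalg.norm(e) * np.linalg.norm(r)))
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return best_id if best_sim >= threshold else None
```

A match below the threshold is rejected, which corresponds to logging the face into the non-admittance person face library of step two.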
The specific steps of the server in the third step of processing the received face image are as follows:
step 3.1: firstly, carrying out data preprocessing on an acquired face image to reduce the complexity of the whole image;
step 3.2: adjusting the colour of the preprocessed image with the adjusted Gamma algorithm to obtain the face mask: the original image is first inverted, then Gaussian blur of a certain radius is applied; the calculation formula is as follows:
γ[i,j,N(i,j)] = α[128 − BFmask(i,j)/128];
and satisfies the following condition, the mask being a two-dimensional Gaussian mask:
N(i,j) = exp(−((i − i_c)² + (j − j_c)²)/(2σ²));
where α is a constant, i and j are coordinates in two-dimensional space, (i, j) is the coordinate of any point within the mask, and (i_c, j_c) is the coordinate of the mask centre;
in the above formula σ = 1.0, and the Gaussian mask is obtained by normalizing the data;
step 3.3: adopting an AdaBoost algorithm to detect the face, and labeling the specific position of the face:
step 3.3.1: annotate an image set S = {(I_1, ω_1), (I_2, ω_2), (I_3, ω_3), …, (I_n, ω_n)}, where ω_n ∈ {−1, 1} indicates whether the image contains a face;
set the initial weights μ_{1,i} = 1/2m or 1/2l, corresponding to the negative and positive sample cases respectively (i = 1 … n; m and l are the numbers of negative and positive samples), and let k = 1;
step 3.3.2: normalize μ_{k,i} to generate a probability distribution;
step 3.3.3: for each feature r_i, train a classifier h_i and evaluate its error rate; select the best h_k, with error e_k;
step 3.3.4: let μ_{k+1,i} = μ_{k,i}·τ_k^(1−ρ_i), where τ_k = e_k/(1 − e_k) and ρ_i ∈ {0, 1} records whether I_i is classified incorrectly or correctly;
step 3.3.5: increase k and repeatedly execute step 3.3.2 until k = K, where K is a pre-specified number of rounds;
step 3.3.6: use the following classifier:
h(x) = 1 if Σ_{k=1…K} a_k·h_k(x) ≥ (1/2)·Σ_{k=1…K} a_k, and 0 otherwise, where a_k = −log τ_k;
train a batch of simple classifiers, each acting on differently weighted training samples, and finally integrate the classifiers in a cascade to form a strong classifier that marks the coordinates of each facial feature position;
step 3.4: extracting texture features of the face information in the image with LBP (local binary patterns): before extracting texture features, the image is divided into blocks, the LBP value of each block is calculated independently, and all block histogram features are then gathered into a complete LBP feature histogram; a circular LBP operator calculates feature values for different radii and different numbers of sampling points, and the circular LBP operator at scale (P, R) is:
LBP_{P,R} = Σ_{i=0…P−1} s(g_i − g_j)·2^i, where s(x) = 1 if x ≥ 0 and 0 otherwise;
in the above formula, g_i − g_j represents the difference between the ith neighbourhood sample and the central pixel value;
step 3.5: performing face recognition with a convolutional neural network, exploring the relation between the depth and the performance of the convolutional neural network according to the VGGNet model structure, and performing the data conversion below to reduce the negative influence of factors such as expression and posture on face recognition:
y = f(Wx + b);
where y is the final recognition result data to be acquired, W is the weight matrix, f(x) is the activation function, and b is the bias value.
Compared with the prior art, the invention has the following beneficial effects: the invention provides an improved face data recognition processing method applicable to monitoring personnel action tracks. By means of computer graphics, big data analysis and other technologies, it comprehensively considers the face recognition algorithm and outdoor environmental interference factors, correspondingly improves the recognition algorithm for collected face images, and effectively improves the efficiency and accuracy of positioning and track monitoring. Applied to a regional personnel action track monitoring system, it can quickly confirm the identity of a target and can track and review the historical track data of corresponding personnel.
Drawings
The invention is further described below with reference to the accompanying drawings:
FIG. 1 is a schematic structural diagram of a face detection system for monitoring movement tracks of regional personnel according to the present invention;
FIG. 2 is a flow chart illustrating the steps of data processing of a face image according to the present invention;
FIG. 3 is an LBP histogram extracted from an image during processing of image data according to the present invention;
FIG. 4 is a block diagram of the present invention employing a convolutional neural network in processing image data.
Detailed Description
As shown in fig. 1, the detection method of the present invention requires corresponding hardware: a Linux server with a GPU for deploying the face recognition algorithm and the video parsing service program; network cameras for identifying people and capturing faces, whose captured images are analyzed and identified by the video parsing service; and a management server for managing the application programs.
According to the invention, cameras collect face images of monitored persons in the area, and the recognition algorithm is applied to the face image data. Because face recognition accuracy depends on how well the recognition algorithm is optimized and on the quality and quantity of the face samples prestored in the database, and because face samples of the same person differ under different environmental conditions, variations in illumination intensity or complex background images can degrade the quality of the collected face samples and reduce the recognition rate. Therefore, the acquired portrait picture is preprocessed before the acquired image is analyzed, reducing the complexity of the image as much as possible.
The structure of a face image recognition core algorithm used by the system is shown in fig. 2, a face photo is derived from a collected picture, the collected picture is adjusted in color by an improved Gamma algorithm, then face detection is carried out by using an AdaBoost algorithm, a detected face is marked, texture features are extracted from a corresponding calculation result by using LBP, and finally face recognition is carried out by using a convolutional neural network to obtain a final recognition result.
The Gamma algorithm adopted by the application is improved to raise identification accuracy: the face state of pedestrians in the area can differ greatly with their position, and the existing Gamma algorithm is limited under such conditions; the improved Gamma algorithm therefore raises the image identification accuracy.
γ[i,j,N(i,j)] = α[128 − BFmask(i,j)/128], as formula 1;
the obtaining principle of the mask is as follows: firstly, the original image is subjected to reverse color processing, and then Gaussian blur processing with a certain radius is performed.
Here the mask is a two-dimensional Gaussian mask, α is simply a constant, i and j are coordinates in two-dimensional space, (i, j) is the coordinate of any point within the mask, and (x, y) is the coordinate of the mask centre (statistically called the mean, i.e., the mean of the coordinates):
N(i,j) = exp(−((i − x)² + (j − y)²)/(2σ²)), as formula 2;
with (x, y) at the middle of the matrix, traversing (i, j) over (x − 1, y − 1) … (x + 1, y + 1), the matrix data satisfying formula 2 are:
0.3679、0.6065、0.3679
0.6065、1.000、0.6065
0.3679、0.6065、0.3679
then, taking σ as 1.0, normalizing to obtain a gaussian mask, and obtaining data results as follows:
0.075、0.124、0.075
0.124、0.204、0.124
0.075、0.124、0.075
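The normalized mask values above follow directly from the two-dimensional Gaussian formula. The short sketch below (function name illustrative, not from the patent) reproduces the 3x3 mask with σ = 1.0:

```python
import numpy as np

def gaussian_mask(size=3, sigma=1.0):
    """Normalized 2-D Gaussian mask centred on the middle cell:
    N(i,j) = exp(-((i - ic)^2 + (j - jc)^2) / (2 * sigma^2)),
    divided by its sum so the weights form a distribution."""
    c = size // 2
    i, j = np.mgrid[0:size, 0:size]
    mask = np.exp(-(((i - c) ** 2 + (j - c) ** 2)) / (2.0 * sigma ** 2))
    return mask / mask.sum()

mask = gaussian_mask()
print(np.round(mask, 3))
# centre weight ~0.204, edge weights ~0.124, corner weights ~0.075
```

The unnormalized values 1.000, 0.6065 and 0.3679 are e^0, e^(-1/2) and e^(-1) for squared distances 0, 1 and 2 from the centre, matching the matrix given in the text.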
before LBP (local binary pattern) feature extraction, the image needs to be subjected to blocking processing, the LBP value of each block is calculated independently, and then histogram features of all blocks are gathered into a complete LBP feature histogram; in order to adapt to the texture characteristics of various scales, the circular domain is adopted to replace the LBP operator in the square domain, namely the circular LBP operator, so that the requirements of gray scale invariance and rotation invariance are met. Therefore, the light interference can be resisted, and the interference of the expression on the face recognition can be improved.
The circular LBP operator can calculate the characteristic values of different radius sizes and different pixel points.
The circular LBP operator formula for the scale (P, R) is as follows:
LBP_{P,R} = Σ_{i=0…P−1} s(g_i − g_j)·2^i, where s(x) = 1 if x ≥ 0 and 0 otherwise;
wherein g_i − g_j represents the difference between the ith neighbourhood sample and the central pixel value; this can produce 2^P different LBP codes.
Even after coding with the LBP operator, certain subtle characteristics are still captured poorly; therefore, before calculating the above-mentioned codes, the image is divided into blocks, an LBP histogram is extracted for each block, and the histograms are finally connected in order to form the LBP features of the picture, as shown in fig. 3;
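The blocked LBP extraction described above can be sketched as follows. This minimal version uses the basic 8-neighbour square operator (P = 8, R = 1) rather than the general circular (P, R) operator, and all function names are illustrative assumptions:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP code for each interior pixel.

    Neighbours are visited in a fixed clockwise order; a bit is set
    when the neighbour is >= the centre, giving 2**8 possible codes.
    """
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint16)
    order = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
             (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y, x]
            code = 0
            for k, (dy, dx) in enumerate(order):
                if img[y + dy, x + dx] >= c:
                    code |= 1 << k
            codes[y - 1, x - 1] = code
    return codes

def block_lbp_histogram(codes, grid=2):
    """Split the code map into grid x grid blocks and concatenate the
    per-block histograms into one feature vector, as the text describes."""
    hists = []
    for rows in np.array_split(codes, grid, axis=0):
        for block in np.array_split(rows, grid, axis=1):
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists)
```

Concatenating the per-block histograms rather than pooling one global histogram preserves the spatial layout of facial texture, which is the point of the blocking step.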
the method adopts AdaBoost face positioning steps as follows:
the method comprises the following steps: annotated set of images S { (I)1,ω1),(I2,ω2),(I3,ω3),…,(Inωn) Where ω isnAnd e { -1, 1}, which represents whether the image has a human face. Initial weight mul,i1/2m or 1/2l each correspond to μjJ is 1 … n, l corresponds to the number of negative and positive samples, let k be 1;
step two: mu tol,iNormalizing to generate probability distribution;
step three: each feature riTrain a classifier hiEvaluating the error rate, selecting the best hkError is ek;
Step four: order toWherein tau isk=ek/(1-ek),ρi{0, 1} corresponds to IjIs classified as either false or correct;
step five: increasing the value of K, and continuing to execute the second step until K is equal to K, wherein K is a pre-specified range;
step six: the following classifiers were used:
The missing detection rate of the weak classifiers is reduced, but the false judgment rate is increased, then a complex classifier, namely a strong classifier, is applied to a small part of results which are not eliminated, namely a batch of simple classifiers are trained firstly, each time, different training samples are acted on, and finally, the classifiers are integrated in a cascading mode to form a strong classifier. It resembles a decision tree, with classifier n +1 being called only if classifier n is not excluded.
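The boosting loop above can be sketched with one-feature threshold stumps as the weak classifiers. This is the textbook AdaBoost formulation (exponential re-weighting), equivalent in spirit to the μ·τ^(1−ρ) update of the text but not identical to it; all names and the toy data are illustrative assumptions:

```python
import numpy as np

def train_adaboost(X, y, K=10):
    """Minimal AdaBoost with threshold stumps; labels y in {-1, +1}.

    Mirrors the listed steps: normalise the sample weights, pick the
    stump with the lowest weighted error e_k, re-weight the samples,
    and repeat for K rounds.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(K):
        w = w / w.sum()                        # step two: probability distribution
        best = None
        for f in range(X.shape[1]):            # step three: best stump over features
            for t in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, f] >= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, sign)
        err, f, t, sign = best
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        pred = sign * np.where(X[:, f] >= t, 1, -1)
        w = w * np.exp(-alpha * y * pred)      # step four: re-weight samples
        stumps.append((alpha, f, t, sign))
    return stumps

def predict_adaboost(stumps, X):
    """Weighted vote of the weak classifiers (step six)."""
    total = sum(a * s * np.where(X[:, f] >= t, 1, -1) for a, f, t, s in stumps)
    return np.where(total >= 0, 1, -1)
```

A cascade would chain several such boosted classifiers, passing a window to stage n + 1 only if stage n has not rejected it.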
According to the relationship between the depth and performance of the convolutional neural network explored by VGGNet, the network structure adopted is shown in fig. 4: it comprises 3 convolutional layers, with 32, 64 and 96 convolution kernels of size 2x2 respectively, connected by a nonlinear activation function;
3 pooling layers are arranged in the network structure, all 2x2 max-pooling, which can reduce the passive influence of expressions, postures and the like on the face;
Finally, through the fully connected layer, with weight matrix W, activation function f(x) and bias value b, the final feature vector y is obtained, calculated by the following formula:
y = f(Wx + b).
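The final fully connected transformation y = f(Wx + b) can be sketched numerically as follows; the layer sizes, the ReLU activation, and the random inputs are illustrative assumptions rather than the patent's exact configuration:

```python
import numpy as np

def relu(x):
    """Nonlinear activation f(x) = max(0, x)."""
    return np.maximum(0.0, x)

def fully_connected(x, W, b):
    """Final layer of the text: y = f(Wx + b)."""
    return relu(W @ x + b)

# toy sizes, purely illustrative: a 6-dim feature mapped to 3 outputs
rng = np.random.default_rng(0)
x = rng.standard_normal(6)
W = rng.standard_normal((3, 6))
b = np.zeros(3)
y = fully_connected(x, W, b)
```

In a full model the convolutional and pooling stages would produce x, and y would feed the identity comparison against the white-list library.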
When the system is used, the corresponding hardware of the detection system is installed in a specific area. This improves artificial intelligence applications with face recognition at their core: the identity of a target is quickly confirmed, and intelligent, accurate and fast face comparison together with thorough video image big data analysis and mining are provided. The system comprehensively addresses personnel management and monitoring applications such as real-time face tracking, monitoring and early warning, fast comparison and retrieval of personnel identity, and tracking and reverse checking of personnel historical tracks, providing person searching, person finding, early warning and tracking; this distinguishes the detection system from existing personnel motion track systems.
At a later stage, the detection system can also integrate functions such as video images, electronic maps, positioning monitoring, personnel attendance, personnel distribution, electronic fences and monitoring alarms, giving it strong practicability; it identifies and triggers alarms for actions such as crossing warning lines, regional intrusion, entering a region and leaving a region. Through association of the video monitoring system with other auxiliary systems, rich video plans including television-wall linkage and alarm video recording can be provided, helping relevant departments find accident points at the first moment, react quickly and keep accident losses to a minimum. The system adds preset positions of monitoring points along the way into a plan; once a problem is found, screenshots can be taken and marked and related departments notified in time, which greatly improves inspection quality and arrival rate.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (2)
1. A face detection method applied to regional personnel motion trail monitoring is characterized by comprising the following steps: the method comprises the following steps:
step one: installing network cameras in the area to be monitored and deploying a data processing server in a monitoring room, the server being provided with a display screen and a control keyboard; a video analysis service program is arranged in the server and a face recognition algorithm is deployed, the images processed by the face recognition algorithm being photos or videos captured by the network cameras;
step two: inputting the face data of the active personnel in the area into a server, and establishing a white list face library; in the using process, simultaneously storing the face data of the persons not in the white list, and establishing a non-admittance person face library;
step three: starting a detection system, wherein each network camera is used for monitoring personnel flow conditions at different positions in an area and independently collecting and storing face images of the personnel, each network camera feeds back collected face image data to a server, a program built in the server analyzes and processes the data, the identified face image data is compared with a white list face library, and which registered personnel appear in which network camera in a corresponding time period is confirmed;
step four: the server displays the action track of the personnel on a display screen and stores the data log into a data storage module.
2. The face detection method applied to regional personnel motion trail monitoring according to claim 1, characterized in that: the specific steps of the server in the third step of processing the received face image are as follows:
step 3.1: firstly, carrying out data preprocessing on an acquired face image to reduce the complexity of the whole image;
step 3.2: adjusting the colour of the preprocessed image with the adjusted Gamma algorithm to obtain the face mask: the original image is first inverted, then Gaussian blur of a certain radius is applied; the calculation formula is as follows:
γ[i,j,N(i,j)] = α[128 − BFmask(i,j)/128]; (1)
the mask in the above formula is a two-dimensional Gaussian mask:
N(i,j) = exp(−((i − i_c)² + (j − j_c)²)/(2σ²)); (2)
where α is a constant, i and j are coordinates in two-dimensional space, (i, j) is the coordinate of any point within the mask, and (i_c, j_c) is the coordinate of the mask centre;
in the above formula σ = 1.0, and the Gaussian mask is obtained by normalizing the data;
step 3.3: adopting an AdaBoost algorithm to detect the face, and labeling the specific position of the face:
step 3.3.1: annotate an image set S = {(I_1, ω_1), (I_2, ω_2), (I_3, ω_3), …, (I_n, ω_n)}, where ω_n ∈ {−1, 1} indicates whether the image contains a face;
setting the initial weights μ_{1,i} = 1/2m or 1/2l, corresponding to the negative and positive sample cases respectively (i = 1 … n; m and l are the numbers of negative and positive samples), and letting k = 1;
step 3.3.2: normalize μ_{k,i} to generate a probability distribution;
step 3.3.3: for each feature r_i, train a classifier h_i and evaluate its error rate; select the best h_k, with error e_k;
step 3.3.4: let μ_{k+1,i} = μ_{k,i}·τ_k^(1−ρ_i), where τ_k = e_k/(1 − e_k) and ρ_i ∈ {0, 1} records whether I_i is classified incorrectly or correctly;
step 3.3.5: increase k and repeatedly execute step 3.3.2 until k = K, where K is a pre-specified number of rounds;
step 3.3.6: use the following classifier:
h(x) = 1 if Σ_{k=1…K} a_k·h_k(x) ≥ (1/2)·Σ_{k=1…K} a_k, and 0 otherwise, wherein a_k = −log τ_k;
training a batch of simple classifiers, each acting on differently weighted training samples, and finally integrating the classifiers in a cascade to form a strong classifier that marks the coordinates of each facial feature position;
step 3.4: extracting texture features of the face information in the image with LBP (local binary patterns): before extracting texture features, the image is divided into blocks, the LBP value of each block is calculated independently, and all block histogram features are then gathered into a complete LBP feature histogram; a circular LBP operator calculates feature values for different radii and different numbers of sampling points, and the circular LBP operator at scale (P, R) is:
LBP_{P,R} = Σ_{i=0…P−1} s(g_i − g_j)·2^i, where s(x) = 1 if x ≥ 0 and 0 otherwise;
in the above formula, g_i − g_j represents the difference between the ith neighbourhood sample and the central pixel value;
step 3.5: performing face recognition with a convolutional neural network, exploring the relation between the depth and the performance of the convolutional neural network according to the VGGNet model structure, and performing the data conversion below to reduce the negative influence of factors such as expression and posture on face recognition:
y = f(Wx + b);
where y is the final recognition result data, W is the weight matrix, f(x) is the activation function, and b is the bias value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911005541.2A CN110728252B (en) | 2019-10-22 | 2019-10-22 | Face detection method applied to regional personnel motion trail monitoring |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911005541.2A CN110728252B (en) | 2019-10-22 | 2019-10-22 | Face detection method applied to regional personnel motion trail monitoring |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110728252A true CN110728252A (en) | 2020-01-24 |
CN110728252B CN110728252B (en) | 2023-08-04 |
Family
ID=69220674
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911005541.2A Active CN110728252B (en) | 2019-10-22 | 2019-10-22 | Face detection method applied to regional personnel motion trail monitoring |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110728252B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110889339A (en) * | 2019-11-12 | 2020-03-17 | 南京甄视智能科技有限公司 | Head and shoulder detection-based dangerous area grading early warning method and system |
CN111553386A (en) * | 2020-04-07 | 2020-08-18 | 哈尔滨工程大学 | AdaBoost and CNN-based intrusion detection method |
CN112016526A (en) * | 2020-10-16 | 2020-12-01 | 金税信息技术服务股份有限公司 | Behavior monitoring and analyzing system, method, device and equipment for site activity object |
CN112084957A (en) * | 2020-09-11 | 2020-12-15 | 广东联通通信建设有限公司 | Mobile target retention detection method and system |
CN112364722A (en) * | 2020-10-23 | 2021-02-12 | 岭东核电有限公司 | Nuclear power operator monitoring processing method and device and computer equipment |
CN112966575A (en) * | 2021-02-23 | 2021-06-15 | 光控特斯联(重庆)信息技术有限公司 | Target face recognition method and device applied to smart community |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1794265A (en) * | 2005-12-31 | 2006-06-28 | 北京中星微电子有限公司 | Method and device for distinguishing face expression based on video frequency |
US20110103685A1 (en) * | 2009-11-02 | 2011-05-05 | Apple Inc. | Image Adjustment Using Extended Range Curves |
CN106599854A (en) * | 2016-12-19 | 2017-04-26 | 河北工业大学 | Method for automatically recognizing face expressions based on multi-characteristic fusion |
CN107316032A (en) * | 2017-07-06 | 2017-11-03 | 中国医学科学院北京协和医院 | One kind sets up facial image identifier method |
CN109325448A (en) * | 2018-09-21 | 2019-02-12 | 广州广电卓识智能科技有限公司 | Face identification method, device and computer equipment |
CN110119656A (en) * | 2018-02-07 | 2019-08-13 | 中国石油化工股份有限公司 | Intelligent monitor system and the scene monitoring method violating the regulations of operation field personnel violating the regulations |
2019
- 2019-10-22: CN application CN201911005541.2A filed; granted as CN110728252B (status: Active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1794265A (en) * | 2005-12-31 | 2006-06-28 | 北京中星微电子有限公司 | Method and device for recognizing facial expressions based on video |
US20110103685A1 (en) * | 2009-11-02 | 2011-05-05 | Apple Inc. | Image Adjustment Using Extended Range Curves |
CN106599854A (en) * | 2016-12-19 | 2017-04-26 | 河北工业大学 | Method for automatically recognizing facial expressions based on multi-feature fusion |
CN107316032A (en) * | 2017-07-06 | 2017-11-03 | 中国医学科学院北京协和医院 | Method for establishing a facial image identifier |
CN110119656A (en) * | 2018-02-07 | 2019-08-13 | 中国石油化工股份有限公司 | Intelligent monitoring system and on-site monitoring method for operation-field personnel violations |
CN109325448A (en) * | 2018-09-21 | 2019-02-12 | 广州广电卓识智能科技有限公司 | Face identification method, device and computer equipment |
Non-Patent Citations (2)
Title |
---|
"An Adjustable Face Recognition System for Illumination Compensation Based on Differential Evolution", 2018 XLIV Latin American Computer Conference, pages 234-241 * |
Li Jixin: "Adaboost Algorithm for Face Detection Based on BP Neural Network", Computer Measurement & Control (《计算机测量与控制》), vol. 28, no. 8 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110889339A (en) * | 2019-11-12 | 2020-03-17 | 南京甄视智能科技有限公司 | Head and shoulder detection-based dangerous area grading early warning method and system |
CN110889339B (en) * | 2019-11-12 | 2020-10-02 | 南京甄视智能科技有限公司 | Head and shoulder detection-based dangerous area grading early warning method and system |
CN111553386A (en) * | 2020-04-07 | 2020-08-18 | 哈尔滨工程大学 | AdaBoost and CNN-based intrusion detection method |
CN111553386B (en) * | 2020-04-07 | 2022-05-20 | 哈尔滨工程大学 | AdaBoost and CNN-based intrusion detection method |
CN112084957A (en) * | 2020-09-11 | 2020-12-15 | 广东联通通信建设有限公司 | Mobile target retention detection method and system |
CN112016526A (en) * | 2020-10-16 | 2020-12-01 | 金税信息技术服务股份有限公司 | Behavior monitoring and analyzing system, method, device and equipment for site activity object |
CN112364722A (en) * | 2020-10-23 | 2021-02-12 | 岭东核电有限公司 | Nuclear power operator monitoring processing method and device and computer equipment |
CN112966575A (en) * | 2021-02-23 | 2021-06-15 | 光控特斯联(重庆)信息技术有限公司 | Target face recognition method and device applied to smart community |
CN112966575B (en) * | 2021-02-23 | 2023-04-18 | 光控特斯联(重庆)信息技术有限公司 | Target face recognition method and device applied to smart community |
Also Published As
Publication number | Publication date |
---|---|
CN110728252B (en) | 2023-08-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110728252B (en) | Face detection method applied to regional personnel motion trail monitoring | |
CN108805093B (en) | Escalator passenger tumbling detection method based on deep learning | |
Gibert et al. | Deep multitask learning for railway track inspection | |
CN109902628B (en) | Library seat management system based on the visual Internet of Things | |
CN111932583A (en) | Space-time information integrated intelligent tracking method based on complex background | |
CN106951889A (en) | Underground high risk zone moving target monitoring and management system | |
KR101731243B1 (en) | A video surveillance apparatus for identification and tracking multiple moving objects with similar colors and method thereof | |
CN106570490B (en) | Real-time pedestrian tracking method based on fast clustering | |
CN110119726A (en) | Multi-angle vehicle brand recognition method based on the YOLOv3 model | |
CN110688980B (en) | Human body posture classification method based on computer vision | |
CN112287827A (en) | Complex environment pedestrian mask wearing detection method and system based on intelligent lamp pole | |
CN111681382A (en) | Method for detecting temporary fence crossing in construction site based on visual analysis | |
CN112183472A (en) | Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet | |
Wechsler et al. | Automatic video-based person authentication using the RBF network | |
CN111353343A (en) | Business hall service standard quality inspection method based on video monitoring | |
CN111353338A (en) | Energy efficiency improvement method based on business hall video monitoring | |
CN113191273A (en) | Oil field well site video target detection and identification method and system based on neural network | |
CN117475353A (en) | Video-based abnormal smoke identification method and system | |
CN117953009A (en) | Space-time feature-based crowd personnel trajectory prediction method | |
Mantini et al. | Camera Tampering Detection using Generative Reference Model and Deep Learned Features. | |
Pinthong et al. | The License Plate Recognition system for tracking stolen vehicles | |
CN117423157A (en) | Mine abnormal video action understanding method combining transfer learning and region intrusion detection | |
CN117037264A (en) | Prison personnel abnormal behavior identification method based on target and key point detection | |
CN111860097A (en) | Abnormal behavior detection method based on fuzzy theory | |
CN111160150A (en) | Video monitoring crowd behavior identification method based on depth residual error neural network convolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||