CN113591607B - Station intelligent epidemic situation prevention and control system and method - Google Patents


Info

Publication number
CN113591607B
CN113591607B
Authority
CN
China
Prior art keywords
image
target
information
video
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110783584.4A
Other languages
Chinese (zh)
Other versions
CN113591607A (en)
Inventor
宋强
何艳丽
曲强
史怡
张逸伟
张婉莹
王琳
董慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology Liaoning USTL
Original Assignee
University of Science and Technology Liaoning USTL
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology Liaoning USTL filed Critical University of Science and Technology Liaoning USTL
Priority to CN202110783584.4A priority Critical patent/CN113591607B/en
Publication of CN113591607A publication Critical patent/CN113591607A/en
Application granted granted Critical
Publication of CN113591607B publication Critical patent/CN113591607B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention provides a station intelligent epidemic situation prevention and control system and method. The system acquires video through cameras, and a server performs a preliminary analysis of the video target information by gait recognition to obtain related information. The matrix video controller forms the images into matrix video frame images, and the server performs face recognition on the matrix video frame images after gait recognition, further acquiring face characteristic information for confirming identity. The face characteristic information is applied to a video compressed sensing target tracking algorithm to track the target, and the frame sequence positions of images of the target in the video are found, so that images containing the target and contact targets are derived; the target information and contact target information are then further confirmed through feedback searching. Under conditions where a special epidemic situation is spreading widely, the travel track of a diagnosed target and of closely contacted people can be rapidly analyzed and confirmed.

Description

Station intelligent epidemic situation prevention and control system and method
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a station intelligent epidemic situation prevention and control system and method.
Background
In the prior art, station epidemic prevention and control mainly relies on infrared temperature measurement combined with confirmation of identity information at the entrance gate; once a target is confirmed, the track of the target in the station and the related targets in close contact with it are often difficult to analyze, so the method needs further improvement. In order to solve the uncontrollability caused by rapid epidemic propagation and high personnel mobility, the method adopts gait recognition to preliminarily analyze the video target information and obtain related information. Face recognition is then performed on the matrix video frame images after gait recognition, further acquiring face characteristic information for confirming identity. The face characteristic information is applied to a video compressed sensing target tracking algorithm to track the target, and the frame sequence positions of images of the target in the video are found, so that images containing the target and contact targets are derived; the target information and contact target information are further confirmed through feedback searching, replacing manual screening and searching and achieving the intelligent epidemic prevention and control function of the station.
Disclosure of Invention
In order to solve the problems in the background technology, the invention provides a station intelligent epidemic situation prevention and control system and a station intelligent epidemic situation prevention and control method, which can rapidly analyze and confirm the travel track of a target and closely contacted personnel under the condition that special epidemic situations are widely spread.
In order to achieve the above purpose, the invention is realized by adopting the following technical scheme:
a station intelligent epidemic situation prevention and control system comprises a camera, a matrix video controller and a server; the system acquires video through the camera, and the server performs a preliminary analysis of the video target information by gait recognition to acquire related information; the matrix video controller forms the images into matrix video frame images, and the server performs face recognition on the matrix video frame images after gait recognition, further acquiring face characteristic information for confirming identity; the face characteristic information is applied to a video compressed sensing target tracking algorithm to track the target, and the frame sequence positions of images of the target in the video are found, so that images containing the target and contact targets are derived, and the target information and contact target information are further confirmed through feedback searching.
A method for a station intelligent epidemic situation prevention and control system comprises the following steps:
step one: the gait recognition optimization algorithm takes gait information based on a video sequence as the recognition object, performs frame cutting on the dynamic video image sequence to obtain static picture information, and then performs moving object detection using a background subtraction method to obtain pictures containing only person information; the obtained binarized person information still contains noise generated for various reasons, which needs to be removed by a morphological processing algorithm; in the feature extraction process, gait energy information is extracted to obtain a gait energy image (GEI), together with the opening angle between a person's two legs at different moments while walking; the two kinds of information are weighted and feature-fused at the input layer of the neural network and used as the network's input variables, and in order to improve the robustness of the neural network and the recognition accuracy of the network, a GA genetic algorithm is added to optimize the weights and thresholds of the network, so that a better gait recognition and classification effect is achieved;
step two: the face recognition algorithm adopts a double-symmetrical LeNet parallel connection network structure, the synchronous model adopts two paths of parallel networks to respectively process images, and can independently acquire high-level feature vectors and combine the high-level feature vectors at an output layer; the global features and the local features of the input image are extracted respectively by adopting a DCT-LBP combined processing method, so that feature expression is better carried out, and the performance of a face detection and recognition system is improved; when the image information reaches the output layer after a series of processing, carrying out comparison and classification on the face image information and the information in the database by adopting Softmax regression classification to obtain correct and complete character information; the cosine correction is added in the regression classification, so that redundancy can be reduced, the generalization capability is enhanced, the overfitting is reduced, and the face recognition accuracy is increased;
step three: the video compression perception target tracking algorithm firstly utilizes image sharpening to highlight target image edge textures, and then utilizes a rectangular filter to normalize a face image and obtain feature vectors; then compressing the Haar-like features of the target sample and the background sample by utilizing dynamic compressed sensing, establishing a target model by utilizing compressed Haar-like feature vectors, and training an Adaboost algorithm Bayesian cascade classifier; finally, a naive Bayes classifier is utilized to identify the target image and the background image, so that the dynamic tracking of face recognition is realized;
step four: the gait recognition algorithm, the face recognition algorithm and the video compressed sensing target tracking algorithm are fused, and the specific process is as follows: 1) Primarily screening the station monitoring video target characters through gait recognition, and recording and storing; 2) Performing a face recognition process according to the character images after primary screening to further confirm target information; 3) According to the output face characteristic information, a target positioning process is carried out, a video compression sensing target tracking algorithm is adopted to derive a travel track containing target information, a frame sequence image containing a target is recorded, and finally the frame sequence image is fed back to a central processing unit, so that the target person and the information of a person in close contact with the target person can be conveniently searched; the purpose of intelligent epidemic situation prevention and control of the station is achieved.
Further, in the first step, the gait recognition optimization algorithm includes the following steps:
1) Feature extraction:
G(x, y) = (1/N) Σ_{t=1}^{N} B_t(x, y)   (1)

wherein: N is the number of frames contained in one cycle of the extracted binarized gait sequence; (x, y) are the coordinate values in the image; B_t(x, y) is the pixel value of the (x, y) point in the t-th frame of the image; G(x, y) is the calculated energy map;
X_c = (1/N) Σ_{i=1}^{N} x_i ,  Y_c = (1/N) Σ_{i=1}^{N} y_i   (2)

θ_i = arctan( (x_i − X_c) / (y_i − Y_c) )   (3)

wherein: (X_c, Y_c) are the centroid coordinates obtained after calculation; N is the number of contour pixel points; (x_i, y_i) are the coordinate values of the i-th contour pixel; θ_i is the calculation result of the included angle θ for the i-th pixel point;
2) GA-BP classification and identification:
neural networks are widely applied to image recognition and classification, but the common algorithm easily falls into local optima; the network can therefore be trained with a genetic algorithm, which addresses the learning problem of the neural network and narrows the threshold search range; the neural network is then used for the accurate solution, so that the goals of global optimization with high speed and efficiency can be well achieved; the neural network recognition rate can be used as the target parameter and optimized through the genetic algorithm; each individual represents the weights and thresholds of one network, the prediction error of the BP neural network initialized by the individual is taken as the individual's adaptation value, and the optimal individual is searched through selection, crossover and mutation operations;
further, the face recognition algorithm in the second step includes the following steps:
1) DCT-LBP joint processing
S=a·DCT+b·LBP (4)
Wherein: a is the weighting coefficient of the DCT, b is the weighting coefficient of the LBP, and a+b=1; s is the weighted image;
2) Convolution operation
y(m, n) = f( Σ_{j=1}^{J} Σ_{i=1}^{I} w(j, i) · x(m + j, n + i) + b )   (5)

wherein: x represents the two-dimensional input vector, with dimensions (m, n); y represents the m×n feature map; f represents the activation function; w represents a convolution kernel of size J×I; b represents the bias;
3) Pooled sampling
x_j^l = f( β_j^l · down(x_j^{l−1}) + b_j^l )   (6)

wherein: x_j^l represents the j-th feature map of the l-th pooling layer; f represents the activation function; down(·) is the down-sampling function; β_j^l and b_j^l respectively represent the multiplicative bias and additive bias of the feature map x_j^l;
4) Full connection
x_i^l = f( Σ_{j=1}^{n} W_{ij}^{l−1} · x_j^{l−1} + b_i^{l−1} )   (7)

wherein: f represents the activation function; n is the number of neurons in layer l−1; l represents the current layer number; W_{ij}^{l−1} represents the connection parameter between the j-th unit of layer l−1 and the i-th unit of layer l; b_i^{l−1} is the bias term of the i-th unit of layer l; x_i^l represents the output value of the i-th unit of layer l;
5) Softmax regression classification
In network training, in order to make the cosine similarity evaluation standard the same at test time, the Euclidean distance of sample similarity is converted into cosine distance, and the weights and features are normalized to the processed value S, so that S is learned automatically and the separation of difference features on the hypersphere achieves a better effect; the joint expression of the loss function at this time is:

L = L_s + λ·L_c   (8)

wherein: λ is the balance coefficient of the normalized joint expression; L_s is the Softmax loss and L_c the intra-class cosine similarity loss, both computed on the normalized weights and features.
Further, the video compression sensing target tracking algorithm in the third step comprises the following steps:
the dynamic signal processed by dynamic compressed sensing is time-varying; let X_t be the sparse signal projected by the sparse matrix, then the state space equations of the dynamic compressed sensing model take the form:

X_t = f_t(X_{t−1}) + v_t   (9)

Y_t = A_t·X_t + ω_t   (10)

wherein: Y_t is the observation in the observation equation; f_t is the state transfer function in the state space equation; v_t, ω_t are the process noise and observation noise respectively, typically defaulted to Gaussian white noise with mean 0;
according to the idea of the dynamic video compressed sensing theory, the information contained in the original signal is represented by a small amount of sampling observation signals, so that the dimension of the signal is reduced; carrying out compression projection on the characteristic space vector X of the high-dimensional original signal to a low-dimensional space by utilizing a random measurement matrix P to obtain a low-dimensional compression characteristic space vector;
haar-like feature calculation, namely calculating sub-image feature values of all sample windows by scanning a large number of sample windows, wherein the feature values are rectangular gray scale pixel differences in the detected image, but a large number of operations are generated in the process; meanwhile, in order to keep the image scale unchanged, the calculated amount is further increased, so that the face detection speed and the training efficiency of the classifier are reduced; the mathematical expression of the Haar-like characteristic value is as follows:
feature = Σ_{i=1}^{N} ω_i · RectSum(γ_i)   (11)

wherein: ω_i represents the weight; N represents the number of rectangular feature values; RectSum(γ_i) represents the pixel sum of rectangle γ_i in the sample image;
in order to improve the operation rate of the feature vector value in compression projection, the sum of rectangular areas and the square sum thereof can be rapidly calculated by adopting an integral graph algorithm, so that the operation amount is reduced, and the operation rate is improved; the integral graph operation formula is as follows:
J(m, n) = Σ_{m′<m, n′<n} H(m′, n′)   (12)

wherein: J(m, n) represents the integral value of the image at pixel (m, n); H(m′, n′) represents the gray value of the image at pixel (m′, n′);
the mathematical expression of the naive Bayes classifier model is:

P(C_i | X_A) = P(X_A | C_i)·P(C_i) / P(X_A)   (13)

wherein: C_i is a category of the data attribute; X_A is a test sample;
in the Adaboost algorithm, the weak classifier expression is:

h(x, f, p, θ) = 1 if p·f(x) < p·θ, and 0 otherwise   (14)

wherein: h(x, f, p, θ) is the weak classifier; x is the sub-window image; f(x) is the feature function of the sub-window; p is the polarity indicating the direction of the inequality; θ is the threshold of the f(x) function; the classification process is the process of obtaining this threshold.
Compared with the prior art, the invention has the beneficial effects that:
the invention adopts gait recognition to primarily analyze the video target information to obtain the related information. And performing face recognition on the matrix video frame image after gait recognition through face recognition, and further acquiring face characteristic information for confirming the identity. The face characteristic information is applied to a target tracking algorithm of video compression sensing to track target information, and the frame sequence position of an image of the target in the video is found out, so that the image containing the target and a contact target is derived, the target information and the contact target information are further confirmed through feedback search, the manual screening search is replaced, and the intelligent epidemic prevention and control function of the station is achieved.
Drawings
FIG. 1 is a main body design framework diagram of a station intelligent epidemic prevention and control system of the invention;
FIG. 2 is a flow chart of a method of the station intelligent epidemic prevention and control system of the present invention;
FIG. 3 is a structure diagram of the video compressed sensing target tracking algorithm of the present invention: update of the classifier at time t;
FIG. 4 is a structure diagram of the video compressed sensing target tracking algorithm of the present invention: target tracking at time t+1.
Detailed Description
The following detailed description of the embodiments of the invention is provided with reference to the accompanying drawings.
As shown in fig. 1, the station intelligent epidemic situation prevention and control system comprises a camera, a matrix video controller and a server; the system acquires video through the camera, and the server performs a preliminary analysis of the video target information by gait recognition to acquire related information; the matrix video controller forms the images into matrix video frame images, and the server performs face recognition on the matrix video frame images after gait recognition, further acquiring face characteristic information for confirming identity; the face characteristic information is applied to a video compressed sensing target tracking algorithm to track the target, and the frame sequence positions of images of the target in the video are found, so that images containing the target and contact targets are derived, and the target information and contact target information are further confirmed through feedback searching.
The system further comprises a video controller that performs preliminary processing on the video: it filters out unclear footage and performs distortion-free compression on video occupying large storage space, finally yielding video sequences that can be used for subsequent analysis.
As shown in fig. 2, a method for a station intelligent epidemic prevention and control system comprises the following steps:
step one: gait recognition optimization algorithm. The algorithm completes the identification based on the gait data through analysis and research on the gait data. The accuracy of feature extraction directly relates to the recognition accuracy, so that dual features, namely gait energy diagram (GEI) and angle information between two legs of walking time, are extracted, the two features are subjected to weighted feature fusion at an input layer of a neural network, and the recognition accuracy of each feature when the features are independently input is calculated to determine the weight value distributed by each feature. The method adopted in the classification stage is to add a GA genetic algorithm to optimize on the basis of a BP neural network, the main object of optimization is the weight and the threshold of the neural network, the robustness of the neural network after optimization can be obviously enhanced, and finally the recognition capability of gait recognition can be obviously improved.
(1) Gait data acquisition and morphological processing
The gait data acquisition adopts three methods, namely acquisition based on video, acquisition based on a ground sensor and acquisition based on a human body sensor. The algorithm adopts a mode based on video sequences, and is convenient for the design of practical application products because the method has low implementation difficulty.
The acquired gait information is mainly in the form of video images; the first step is frame cutting, after which the resulting pictures undergo image positioning and background removal to obtain binarized image information. The images contain a certain amount of noise, so morphological processing is needed to remove it, mainly using erosion and dilation operations. Dilation can eliminate structural breaks in the image and fill holes, expanding a region outward from its periphery; it is a local-maximum operation, so the highlighted regions in the image grow. The purpose of erosion is to remove incoherent elements, reducing a region inward from its periphery; as a local-minimum operation it removes noise and shrinks the highlighted regions in the image. The erosion operation is given by formula (1), and the dilation operation by formula (2):
A ⊖ B = { x | (B)_x ⊆ A }   (1)

wherein: eroding A with B yields the set of all translations x such that B, translated by x, remains contained in A. In other words, the set obtained by eroding A with B is the set of origin positions of B at which B is completely included in A, i.e. the translated B does not overlap the background of A.
A ⊕ B = { x | (B̂)_x ∩ A ≠ ∅ }   (2)

wherein: B̂ represents the reflection of set B about its origin, and (B̂)_x its translation by x. The equation thus states that dilating A with B maps B about the origin, translates it by x, and requires the intersection with A to be non-empty. In other words, the set obtained by dilating A with B is the set of origin positions of B at which the displaced B̂ intersects A in at least one non-zero element.
(2) Feature extraction
Gait feature extraction is a key step of gait recognition: whether the extracted information is accurate and comprehensive directly determines whether classification and recognition can succeed. At present, model-free methods are mainly used to extract features, divided into three types: appearance-based characterization, which mainly uses a normalized cumulative energy method to form a gait energy image (GEI); transform-based characterization, mainly involving principal component analysis (PCA) and Fourier transforms; and distribution-based characterization, in which human walking is characterized by statistical distributions generated throughout the gait cycle, principally optical-flow, probability and texture-based distributions. Because gait recognition is a continuous periodic process, the method of the invention adopts weighted fusion of both the appearance-based gait energy image and the angle information, to better preserve the temporal information of the gait.
G(x, y) = (1/N) Σ_{t=1}^{N} B_t(x, y)   (3)

wherein: N is the number of frames contained in one cycle of the extracted binarized gait sequence; (x, y) are the coordinate values in the image; B_t(x, y) is the pixel value of the (x, y) point of the image in the t-th frame; G(x, y) is the calculated energy map.
X_c = (1/N) Σ_{i=1}^{N} x_i ,  Y_c = (1/N) Σ_{i=1}^{N} y_i   (4)

θ_i = arctan( (x_i − X_c) / (y_i − Y_c) )   (5)

wherein: (X_c, Y_c) are the centroid coordinates obtained after calculation; N is the number of contour pixel points; (x_i, y_i) are the coordinate values of the i-th contour pixel; θ_i is the calculation result of the included angle θ for the i-th pixel point.
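A minimal NumPy sketch of formulas (3)-(5), assuming binarized silhouettes stacked as an (N, H, W) array; the function names and the use of all foreground pixels as the contour stand-in are assumptions made here for illustration:

```python
import numpy as np

def gait_energy_image(frames: np.ndarray) -> np.ndarray:
    """Eq. (3): G(x, y) = (1/N) * sum_t B_t(x, y) over one gait cycle.

    frames: (N, H, W) array of binarized silhouettes in {0, 1}.
    """
    return frames.mean(axis=0)

def contour_centroid(silhouette: np.ndarray) -> tuple:
    """Eq. (4): centroid (X_c, Y_c) of the silhouette's foreground pixels."""
    ys, xs = np.nonzero(silhouette)
    return xs.mean(), ys.mean()

def pixel_angles(silhouette: np.ndarray) -> np.ndarray:
    """Eq. (5): angle of each foreground pixel relative to the centroid,
    used here as a stand-in for the leg-opening angle feature."""
    xc, yc = contour_centroid(silhouette)
    ys, xs = np.nonzero(silhouette)
    return np.arctan2(xs - xc, ys - yc)

# toy usage: N = 10 random "silhouettes"
frames = (np.random.rand(10, 64, 44) > 0.5).astype(np.float32)
gei = gait_energy_image(frames)   # (64, 44) energy map
```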
(3) GA-BP classification identification
GA-BP classification recognition is an improvement and innovation of the recognition classification algorithm, capable of improving recognition precision and enhancing adaptability. The invention adopts a genetic algorithm (GA) to optimize a BP neural network, forming the GA-BP algorithm as the final recognition classification algorithm. Genetic-algorithm optimization of the BP neural network divides into three parts: network structure determination, genetic algorithm optimization, and network prediction. The BP neural network structure is determined by the number of input and output parameters of the fitting function, which in turn fixes the length of each genetic algorithm individual. The genetic algorithm optimizes the weights and thresholds of the BP neural network: every individual in the population encodes all the weights and thresholds of one network. Each individual's fitness value is computed through the fitness function, and the genetic algorithm finds the best fitness value through selection, crossover and mutation operations, thereby determining the optimal individual. In BP neural network prediction, the optimal individual obtained by the genetic algorithm assigns the initial weights and thresholds of the network, and the trained network outputs the prediction function.
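The following is a minimal sketch of this GA-BP scheme, assuming a one-hidden-layer network whose flattened weights and thresholds form the chromosome and whose prediction error serves as the adaptation value; all names, sizes and hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(w, X, n_in=4, n_hid=6, n_out=2):
    """Decode a flat chromosome w into one-hidden-layer BP network weights."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:i + n_out]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w, X, y_onehot):
    """Adaptation value: the individual's prediction error (lower is better)."""
    return np.mean((mlp_forward(w, X) - y_onehot) ** 2)

def ga_optimize(X, y_onehot, dim, pop=30, gens=50, pc=0.8, pm=0.1):
    P = rng.normal(0, 1, (pop, dim))
    for _ in range(gens):
        errs = np.array([fitness(w, X, y_onehot) for w in P])
        # selection: binary tournament on adaptation value
        idx = [min(rng.integers(0, pop, 2), key=lambda k: errs[k]) for _ in range(pop)]
        P = P[idx].copy()
        for k in range(0, pop - 1, 2):            # single-point crossover
            if rng.random() < pc:
                cut = rng.integers(1, dim)
                P[k, cut:], P[k + 1, cut:] = P[k + 1, cut:].copy(), P[k, cut:].copy()
        mask = rng.random(P.shape) < pm           # mutation
        P[mask] += rng.normal(0, 0.1, mask.sum())
    errs = np.array([fitness(w, X, y_onehot) for w in P])
    return P[np.argmin(errs)]

# toy usage: 4-dim inputs, 2 classes; chromosome length 4*6 + 6 + 6*2 + 2 = 44
X = rng.normal(size=(100, 4))
y = np.eye(2)[rng.integers(0, 2, 100)]
best_w = ga_optimize(X, y, dim=44)
```

The best chromosome returned would then seed ordinary BP (gradient-descent) training of the network, as described above.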
Step two: face recognition optimization algorithm. The algorithm is based on a classical convolutional neural network model LeNet, adopts a double-symmetrical LeNet parallel connection network structure, adopts two paths of parallel networks for the synchronous model, respectively performs image processing, can independently acquire high-level feature vectors, and performs combination at an output layer. The DCT-LBP combined processing method is adopted to extract the global features and the local features of the input image respectively, so that the feature expression is better, and the performance of the face detection and recognition system is improved. When the image information reaches the output layer after a series of processing, the Softmax regression classification is adopted to compare and classify the face image information with the information in the database, so as to obtain the correct and complete character information. And the cosine correction is added in the regression classification, so that redundancy can be reduced, the generalization capability is enhanced, the overfitting is reduced, and the face recognition accuracy is increased.
(1) DCT-LBP joint processing
The DCT-LBP combined processing method is used for extracting the information of the global features and the local features of the face image, independently obtaining the high-level feature vectors, better carrying out feature expression and being beneficial to improving the performance of the face detection and recognition system. The problem of high-frequency information loss in the DCT process is solved, and the limitation of LBP feature extraction is overcome. The DCT-LBP joint processing thus achieves good results in the feature extraction process.
DCT-LBP joint processing specifically applies pixel-level LBP coding with binarization to obtain the statistical histogram of transition counts in the local binary pattern, thereby extracting the local features of the face information. The calculation formula is shown as formula (6):
f_LBP(m_c, n_c) = Σ_{i=0}^{7} s(a_i − a_c) · 2^i ,  where s(x) = 1 if x ≥ 0 and 0 otherwise   (6)

wherein: a_c is the gray value of the center pixel; a_i is the gray value of the i-th pixel in the surrounding neighborhood; s is the threshold function; f_LBP(m_c, n_c) is the LBP value of the center pixel.
The DCT discrete transformation obtains low-frequency coefficients as global features of the face information. The calculation formula is shown as formula (7):
F(u, v) = c(u)·c(v) Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f(m, n) · cos[(2m+1)uπ / (2M)] · cos[(2n+1)vπ / (2N)]   (7)

wherein: f(m, n) is the gray value of the image at pixel (m, n); M and N are the image dimensions; c(u) and c(v) are the normalization coefficients; the low-frequency coefficients of F(u, v) are taken as the global features.
and carrying out weighted fusion joint processing on the LBP and the DCT, wherein the extraction calculation formula is shown as formula (8):
S=a·DCT+b·LBP (8)
wherein: a is the weighting coefficient of the DCT, b is the weighting coefficient of the LBP, and a+b=1; s is the weighted image.
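A rough sketch of formulas (6)-(8), assuming an 8-neighbour pixel-level LBP and an orthonormal 2-D DCT from SciPy; the literal weighted fusion S = a·DCT + b·LBP follows the patent's formula, while the block sizes and cropping used to make the two maps the same shape are assumptions for illustration:

```python
import numpy as np
from scipy.fftpack import dct

def lbp_map(img: np.ndarray) -> np.ndarray:
    """Eq. (6): 8-neighbour LBP code for each interior pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[1:-1, 1:-1]
    out = np.zeros(c.shape, dtype=np.float64)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        out += (nb >= c) * (1 << bit)   # threshold function s(x)
    return out

def dct_lowfreq(img: np.ndarray, keep: int) -> np.ndarray:
    """Eq. (7): 2-D DCT; keep the top-left (low-frequency) block as global features."""
    coef = dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')
    return coef[:keep, :keep]

def fuse(img: np.ndarray, a: float = 0.5) -> np.ndarray:
    """Eq. (8): S = a*DCT + b*LBP with a + b = 1, on same-size feature maps."""
    b = 1.0 - a
    g = dct_lowfreq(img, keep=min(img.shape) - 2)
    l = lbp_map(img)[:g.shape[0], :g.shape[1]]
    return a * g + b * l

# toy usage on a 32x32 face patch
S = fuse(np.random.rand(32, 32))
```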
(2) Convolution operation
The image undergoes feature extraction through the convolution layers and is sent to pooling sampling. Each convolution kernel extracts its own characteristic values, which are finally superimposed; the higher-level convolution kernels then extract edge features and overall features, completing the extraction of the feature vector. Convolution is the most common operation in digital image processing; the specific formula of a convolution layer is:
y(m, n) = f( Σ_{j=1}^{J} Σ_{i=1}^{I} w(j, i) · x(m + j, n + i) + b )   (9)

wherein: x represents the two-dimensional input vector, with dimensions (m, n); y represents the m×n feature map; f represents the activation function; w represents a convolution kernel of size J×I; b represents the bias.
(3) Pooled sampling
Pooled sampling is a nonlinear downsampling method that reduces the size of the feature map of convolution operations. The pooling function is:
x_j^l = f( β_j^l · down(x_j^{l−1}) + b_j^l )   (10)

wherein: x_j^l represents the j-th feature map of the l-th pooling layer; f represents the activation function; down(·) is the down-sampling function; β_j^l and b_j^l respectively represent the multiplicative bias and additive bias of the feature map x_j^l.
(4) Full connection
In the fully connected layer every neuron is connected to all neurons of the previous layer; placed after the pooling layers and before the output layer, it reduces the image dimension and flattens the two-dimensional feature maps into a one-dimensional vector, which facilitates classification at the output layer. The calculation formula is as follows:
x_i^l = f( Σ_{j=1}^{n} W_{ij}^{l−1} · x_j^{l−1} + b_i^{l−1} )   (11)

wherein: f represents the activation function; n is the number of neurons in layer l−1; l represents the current layer number; W_{ij}^{l−1} represents the connection parameter between the j-th unit of layer l−1 and the i-th unit of layer l; b_i^{l−1} is the bias term of the i-th unit of layer l; x_i^l represents the output value of the i-th unit of layer l.
(5) Softmax regression classification
Softmax regression is a linear multi-classification model, an extension of the Logistic regression model. On multi-classification problems it converts the outputs into a probability distribution; the loss function expression is:
L_s = −(1/m) Σ_{i=1}^{m} log( e^{W_{y_i}^T x_i + b_{y_i}} / Σ_{j=1}^{n} e^{W_j^T x_i + b_j} )   (12)

wherein: m is the number of samples; n is the number of classes; x_i is the feature vector of the i-th sample; y_i is its class label; W and b are respectively the weight matrix and bias of the fully connected layer; W_j is the j-th column of the weight matrix; b_j is the corresponding bias term.
To eliminate intra-class variation produced by this loss function and make the feature relations tight and easy to discriminate, an intra-class cosine similarity loss function is introduced:

L_c = −(1/m) Σ_{i=1}^{m} cos(θ_{i, y_i})   (13)

wherein: θ_{i, y_i} is the included angle between the feature vector of the i-th sample and its corresponding class weight vector.
To make the trained features easy to discriminate, the method is studied under the joint supervision of the Softmax loss and the intra-class similarity loss, giving the loss function expression:

L = L_s + λ·L_c   (14)

wherein: λ is a scalar used to balance the two loss functions.
In network training, in order to make the cosine similarity evaluation standard the same at test time, the Euclidean distance of sample similarity is converted into cosine distance, and the weights and features are normalized to the processed value S, so that S is learned automatically and the separation of difference features on the hypersphere is improved. The joint expression of the loss function at this time is:

L = L_s + λ·L_c  (with the weights and features normalized to S)   (15)

wherein: λ is the balance coefficient of the normalized joint expression.
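A minimal PyTorch sketch of the joint supervision of formulas (12)-(15), assuming the intra-class cosine term takes the form 1 − cos θ and that normalization to the scale S is applied to both features and class weights; the names and the scale value are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def joint_loss(feats, labels, W, s: float = 16.0, lam: float = 0.5):
    """Eqs. (12)-(15): Softmax loss plus lambda-weighted intra-class cosine loss,
    with features and class-weight columns normalized (features scaled to s).
    feats: (m, d) sample feature vectors; W: (d, n) class weight matrix."""
    f = s * F.normalize(feats, dim=1)       # normalize features to magnitude s
    w = F.normalize(W, dim=0)               # normalize each class weight column
    logits = f @ w                          # Euclidean similarity -> scaled cosine
    l_softmax = F.cross_entropy(logits, labels)                    # eq. (12)
    cos_y = (F.normalize(feats, dim=1) * w[:, labels].t()).sum(dim=1)
    l_cos = (1.0 - cos_y).mean()            # eq. (13): pull samples toward class weight
    return l_softmax + lam * l_cos          # eqs. (14)-(15)

# toy usage
feats = torch.randn(8, 120, requires_grad=True)
W = torch.randn(120, 10, requires_grad=True)
labels = torch.randint(0, 10, (8,))
joint_loss(feats, labels, W).backward()
```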
Step three: video compression aware target tracking algorithm. The algorithm firstly highlights the edge texture of the target image by utilizing image sharpening, and then normalizes the face image by utilizing a rectangular filter and acquires the feature vector. And then compressing the Haar-like features of the target sample and the background sample by using dynamic compressed sensing, establishing a target model by using the compressed Haar-like feature vector, and training an Adaboost algorithm Bayesian cascade classifier. And finally, identifying the target image and the background image by using a naive Bayes classifier, and realizing the dynamic tracking of face recognition.
The dynamic signal processed by dynamic compressed sensing is time-varying. Let X_t be the sparse signal projected by the sparse matrix; the state space equations of the dynamic compressed sensing model take the form:

X_t = f_t(X_{t−1}) + v_t   (16)

Y_t = A_t·X_t + ω_t   (17)

wherein: Y_t is the observation in the observation equation; f_t is the state transfer function in the state space equation; v_t, ω_t are the process noise and observation noise respectively, usually defaulted to Gaussian white noise with mean 0.
According to the idea of the dynamic video compressed sensing theory, the information contained in the original signal is represented by a small amount of sampling observation signals, so that the dimension of the signal is reduced. And carrying out compression projection on the characteristic space vector X of the high-dimensional original signal to a low-dimensional space by using the random measurement matrix P to obtain a low-dimensional compression characteristic space vector.
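A small sketch of this compressive projection, assuming a very sparse random measurement matrix of the Achlioptas/Li type (the patent does not specify the construction of P, so this choice and all sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_measurement_matrix(n_low: int, n_high: int, s: float = None) -> np.ndarray:
    """Very sparse random projection matrix P: entries sqrt(s)*{+1, 0, -1}
    with probabilities {1/(2s), 1 - 1/s, 1/(2s)}."""
    s = s if s is not None else n_high / 4.0
    probs = [1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]
    vals = rng.choice([1.0, 0.0, -1.0], size=(n_low, n_high), p=probs)
    return np.sqrt(s) * vals

n_high, n_low = 10_000, 50      # high-dim Haar-like features -> 50-dim
P = sparse_measurement_matrix(n_low, n_high)
X = rng.random(n_high)          # feature space vector of one sample window
V = P @ X                       # low-dimensional compressed feature space vector
```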
Haar-like feature computation, i.e. computing sub-image feature values of all sample windows by scanning a large number of sample windows, the feature values being rectangular gray scale pixel differences in the detected image, but the process generates a large number of operations. Meanwhile, in order to keep the image scale unchanged, the calculated amount is further increased, so that the face detection speed and the training efficiency of the classifier are reduced. The mathematical expression of the Haar-like characteristic value is as follows:
feature = Σ_{i=1}^{N} ω_i · RectSum(γ_i)   (18)

wherein: ω_i represents the weight; N represents the number of rectangular feature values; RectSum(γ_i) represents the pixel sum of rectangle γ_i in the sample image.
In order to improve the operation rate of the feature vector value in the compression projection, the sum of rectangular areas and the square sum thereof can be rapidly calculated by adopting an integral graph algorithm, so that the operation amount is reduced, and the operation rate is improved. The integral graph operation formula is as follows:
J(m, n) = Σ_{m′<m, n′<n} H(m′, n′)   (19)

wherein: J(m, n) represents the integral value of the image at pixel (m, n); H(m′, n′) represents the gray value of the image at pixel (m′, n′).
The integral image of the original picture is calculated once; in the subsequent tracking process the integral operation is completed through cumulative row sums, with the operational formula:

S(m, n) = S(m, n−1) + H(m, n)
J(m, n) = J(m−1, n) + S(m, n)   (20)

wherein: at initialization S(m, −1) = 0 and J(−1, n) = 0; S(m, n), the cumulative row sum, is calculated first, and then J(m, n).
A dedicated integral-image lookup table is established for each image; during the calculation stage of the integration process, convolution is completed in linear time by direct lookup in the pre-built table, so the time of the convolution operation is unrelated to the window size of the rectangular area.
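A NumPy sketch of the integral image of formulas (19)-(20) and the resulting constant-time rectangle sum, with one row/column of zero padding standing in for the initialization conditions; names are illustrative:

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Eq. (20): S(m,n) = S(m,n-1) + H(m,n); J(m,n) = J(m-1,n) + S(m,n).
    Implemented with cumulative sums and zero padding so the recurrences
    start cleanly at the image border."""
    J = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    J[1:, 1:] = img.cumsum(axis=1).cumsum(axis=0)
    return J

def rect_sum(J: np.ndarray, top: int, left: int, h: int, w: int) -> int:
    """RectSum: pixel sum over any rectangle in O(1) with four lookups."""
    return int(J[top + h, left + w] - J[top, left + w]
               - J[top + h, left] + J[top, left])

img = np.arange(16).reshape(4, 4)
J = integral_image(img)
assert rect_sum(J, 1, 1, 2, 2) == img[1:3, 1:3].sum()   # holds by construction
```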
The cascade classifier is constructed using Bayesian classification theory, which has a solid mathematical basis and stable classification efficiency. Because the classifier model has few parameters to estimate, it is robust to missing data. In this method, the features of the target samples and background samples are used to construct the training data vectors of the classifier model; since the attribute correlation is small, the naive Bayes model keeps the algorithm simple, reduces the classification error rate, and improves overall performance.
The mathematical expression of the naive Bayes classifier model is:

P(C_i | X_A) = P(X_A | C_i)·P(C_i) / P(X_A)   (21)

wherein: C_i is a category of the data attribute; X_A is a test sample.
The projection of a high-dimensional random vector obeys a Gaussian distribution. Let the low-dimensional feature vector of a sample X_A be μ, with each element independently distributed. The feature vectors are classified by naive Bayes assuming equal prior probabilities for the positive and negative samples. Let a be the sample label, with positive samples denoted a = 1 and negative samples a = 0; the Bayesian classifier response is obtained as:
H(μ) = Σ_{j=1}^{n} log( p(μ_j | a = 1)·P(a = 1) / ( p(μ_j | a = 0)·P(a = 0) ) ) = Σ_{j=1}^{n} log( p(μ_j | a = 1) / p(μ_j | a = 0) )   (22)

wherein: H(μ) represents the Bayesian classifier response when the low-dimensional feature vector is μ; a is the sample label, with a ∈ {0, 1}; P(a = 1) = P(a = 0) are the positive and negative sample prior probabilities respectively; μ_j is the j-th feature of sample image X_A.
On the basis of the naive Bayes classifier model, the Adaboost algorithm from the Boosting family is introduced to combine multiple naive Bayes classifier models. The Adaboost algorithm has strong adaptive capability; its core idea is to decompose a strong classifier into a linear combination of several weak classifiers, each Bayesian weak classifier being learned and trained on different training set samples, thereby improving the recognition accuracy of the Adaboost Bayesian cascade classifier.
In the Adaboost algorithm, the weak classifier expression is:
h(x, f, p, θ) = 1 if p·f(x) < p·θ, and 0 otherwise   (23)

wherein: h(x, f, p, θ) is the weak classifier; x is the sub-window image; f(x) is the feature function of the sub-window; p is the polarity indicating the direction of the inequality; θ is the threshold of the f(x) function; the classification process is the process of obtaining this threshold.
The weak classifier builds the strong classifier as follows:
the image sample is X A Then n training set samples are represented as { (X) 1 ,a 1 ),(X 2 ,a 2 ),...,(X n ,a n ) X is }, then i A is a sample image i Represented as positive and negative samples, and a i E {0,1}. Initializing operation, let t=1, weight ω for each sample i Initializing, and then carrying out weight normalization processing, wherein the mathematical expression is as follows:
Figure BDA0003157912390000122
training the weak classifier by training, and calculating the weighted error rate of the corresponding feature on the weak classifier, wherein the mathematical expression is as follows:
Figure BDA0003157912390000123
by comparison, ε t The weak classifier with the minimum error rate is the optimal weak classifier. And pass through a weak classifier epsilon t Updating the distribution of samples, wherein the mathematical expression is as follows:
Figure BDA0003157912390000124
if test sample x i If the classification is correct, then there is ε i =0; if the classification is wrong, epsilon exists i =1,
Figure BDA0003157912390000125
Combining the above operations yields the cascaded strong classifier:

H(x) = 1 if Σ_{t=1}^{T} α_t·h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and 0 otherwise   (27)

wherein α_t = log(1/β_t).

The conditional probabilities in the classifier response H(μ) are assumed Gaussian:

p(μ_j | a = 1) ~ N(μ_j^1, σ_j^1),  p(μ_j | a = 0) ~ N(μ_j^0, σ_j^0)

wherein μ_j^1 and σ_j^1 (respectively μ_j^0 and σ_j^0) are the mean and standard deviation of the j-th feature over the positive (respectively negative) samples.
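A compact sketch of the Adaboost training loop of formulas (23)-(27), assuming decision-stump weak classifiers over scalar features with candidate thresholds at a few quantiles (a simplification of scanning all thresholds; all names are illustrative):

```python
import numpy as np

def train_adaboost(F, a, T=10):
    """Eqs. (23)-(27) on feature matrix F (n_samples, n_features), labels a in {0,1}.
    Each weak classifier is a stump h = [p*f(x) < p*theta], eq. (23)."""
    n, d = F.shape
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(T):
        w = w / w.sum()                                   # eq. (24)
        best = None
        for j in range(d):                                # pick the optimal weak classifier
            for theta in np.quantile(F[:, j], [0.25, 0.5, 0.75]):
                for p in (1, -1):
                    h = (p * F[:, j] < p * theta).astype(float)
                    eps = np.sum(w * np.abs(h - a))       # eq. (25)
                    if best is None or eps < best[0]:
                        best = (eps, j, theta, p, h)
        eps_t, j, theta, p, h = best
        beta = eps_t / (1.0 - eps_t + 1e-12)
        w = w * beta ** (1.0 - np.abs(h - a))             # eq. (26): shrink correct samples
        stumps.append((j, theta, p, np.log(1.0 / (beta + 1e-12))))
    return stumps

def strong_classify(stumps, x):
    """Eq. (27): 1 iff sum_t alpha_t * h_t(x) >= 0.5 * sum_t alpha_t."""
    votes = sum(alpha * float(p * x[j] < p * theta) for j, theta, p, alpha in stumps)
    return int(votes >= 0.5 * sum(alpha for *_, alpha in stumps))

# toy usage: 200 samples of 50-dim compressed features
rng = np.random.default_rng(4)
F = rng.normal(size=(200, 50)); a = (F[:, 0] > 0).astype(float)
model = train_adaboost(F, a)
print(strong_classify(model, F[0]))
```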
Several positive samples and negative background samples are drawn around the target in frame i+1, the features of these positive and negative samples are extracted, and the mean μ^1 and standard deviation σ^1 of the positive samples are obtained by calculation. The target model is then updated by classification updating, realizing dynamic tracking of the target. The classification update expression is:

μ_j^1 ← λ·μ_j^1 + (1 − λ)·μ^1

σ_j^1 ← sqrt( λ·(σ_j^1)² + (1 − λ)·(σ^1)² + λ·(1 − λ)·(μ_j^1 − μ^1)² )   (28)

wherein: λ is the learning factor, with λ > 0; μ_j(i) denotes the j-th feature of the sample image in frame i.
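A sketch of the classification update of formula (28), treating λ as a learning factor assumed here to lie in (0, 1); the sampling sizes and names are illustrative:

```python
import numpy as np

def update_gaussian_model(mu_old, sig_old, pos_feats, lam=0.85):
    """Eq. (28): running update of the positive-class Gaussian parameters from
    features of positive samples drawn around the tracked target in the new frame."""
    mu_new = pos_feats.mean(axis=0)
    sig_new = pos_feats.std(axis=0)
    mu = lam * mu_old + (1.0 - lam) * mu_new
    sig = np.sqrt(lam * sig_old ** 2 + (1.0 - lam) * sig_new ** 2
                  + lam * (1.0 - lam) * (mu_old - mu_new) ** 2)
    return mu, sig

# toy usage: 45 positive windows, 50-dim compressed features
rng = np.random.default_rng(3)
mu1, sig1 = np.zeros(50), np.ones(50)
pos = rng.normal(0.3, 1.0, (45, 50))
mu1, sig1 = update_gaussian_model(mu1, sig1, pos)
```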
Simulation experiment analysis shows that by compressing the Haar-like features of the human face, establishing the target model with the compressed Haar-like feature vectors, and training the Adaboost Bayesian cascade classifier, the resulting video compressed sensing target tracking algorithm still recognizes the target face image to a certain degree under interference from environmental factors in the video such as edges, movement and uneven illumination. After gait recognition, face recognition and target tracking are completed, the target's path track is analyzed, and frame sequence pictures containing the target and close contacts are derived from the monitoring video and fed back to the central processing unit, from which the target information and the information of close contacts are further analyzed. This research method for the station intelligent epidemic situation prevention and control system saves human resources and achieves a good station epidemic prevention effect.
FIGS. 3-4 are structure diagrams of the video compressed sensing target tracking algorithm of the present invention: FIG. 3 shows the updating process of the classifier at time t; FIG. 4 shows the target tracking process at time t+1.
Step four: the gait recognition algorithm, the face recognition algorithm and the video compressed sensing target tracking algorithm are fused, and the specific process is as follows: primarily screening the station monitoring video target characters through gait recognition, and recording and storing; performing a face recognition process according to the character images after primary screening to further confirm target information; and (3) performing a target positioning process according to the output face characteristic information, deriving a travel track containing target information by adopting a target tracking algorithm of video compression sensing, recording a frame sequence image containing a target, and finally feeding back to a central processing unit, so that the target person and the information of a person in close contact with the target person can be conveniently searched. The purpose of intelligent epidemic situation prevention and control of the station is achieved.
The intelligent epidemic prevention and control system and method for the station have the advantages that the target information is determined through gait recognition and face recognition, the confirmed target characteristic information is utilized, the track of the target is analyzed through a target tracking algorithm based on video compressed sensing, the target track information and the information of people in close contact with the target track information are obtained through feedback, the purpose of preventing and controlling the station epidemic is achieved, and intelligent scientific epidemic prevention and control is achieved.
According to the invention, gait recognition optimization algorithm, face recognition optimization algorithm and compressed sensing target track tracking algorithm are comprehensively considered, and the gait recognition is adopted to perform preliminary analysis on video target information so as to obtain related information. And performing face recognition on the matrix video frame image after gait recognition through face recognition, and further acquiring face characteristic information for confirming the identity. The face characteristic information is applied to a target tracking algorithm of video compression sensing to track target information, and the frame sequence position of an image of the target in the video is found out, so that the image containing the target and a contact target is derived, the target information and the contact target information are further confirmed through feedback search, the manual screening search is replaced, and the intelligent epidemic prevention and control function of the station is achieved.
The invention discloses a method for a station intelligent epidemic situation prevention and control system, providing a new and more efficient research method for station intelligent epidemic situation prevention and control. The in-depth study of the algorithms illustrates the superiority and effectiveness of the research method; it thus fills, to a great extent, the blank in research on station intelligent epidemic prevention and control theory, greatly improves the efficiency of searching target tracks, and manages the information of closely contacted personnel more comprehensively. It is of great significance for artificial intelligence fields such as intelligent epidemic prevention and control and target positioning and recognition.
The above examples are implemented on the premise of the technical scheme of the present invention, and detailed implementation manners and specific operation processes are given, but the protection scope of the present invention is not limited to the above examples. The methods used in the above examples are conventional methods unless otherwise specified.

Claims (4)

1. The station intelligent epidemic situation prevention and control method is realized based on a station intelligent epidemic situation prevention and control system, wherein the system comprises a camera, a matrix video controller and a server; the system acquires video through a camera, and performs preliminary analysis on video target information by gait recognition in a server to acquire related information; the matrix video controller forms the image into a matrix video frame image, and the server carries out face recognition on the matrix video frame image after gait recognition through face recognition to further acquire face characteristic information for confirming identity; the face characteristic information is applied to a target tracking algorithm of video compression sensing to track target information, and the frame sequence position of an image of the target in the video is found out, so that an image containing the target and a contact target is derived, and the target information and the contact target information are further confirmed through feedback searching; the method comprises the following steps:
step one: the gait recognition optimization algorithm takes gait information based on a video sequence as the recognition object, performs frame cutting on the dynamic video image sequence to obtain static picture information, and then performs moving object detection using a background subtraction method to obtain pictures containing only person information; the obtained binarized person information still contains noise generated for various reasons, which needs to be removed by a morphological processing algorithm; in the feature extraction process, gait energy information is extracted to obtain a gait energy image together with the opening angles of the two legs at different moments while a person walks; the two kinds of information are weighted and feature-fused at the input layer of the neural network and used as the network's input variables, and in order to improve the robustness of the neural network and the recognition accuracy of the network, a GA genetic algorithm is added to optimize the weights and thresholds of the network, so that a better gait recognition and classification effect is achieved;
step two: the face recognition algorithm adopts a double-symmetrical LeNet parallel connection network structure, the synchronous model adopts two paths of parallel networks to respectively process images, and can independently acquire high-level feature vectors and combine the high-level feature vectors at an output layer; the global features and the local features of the input image are extracted respectively by adopting a DCT-LBP combined processing method, so that feature expression is better carried out, and the performance of a face detection and recognition system is improved; when the image information reaches the output layer after a series of processing, carrying out comparison and classification on the face image information and the information in the database by adopting Softmax regression classification to obtain correct and complete character information; the cosine correction is added in the regression classification, so that redundancy can be reduced, the generalization capability is enhanced, the overfitting is reduced, and the face recognition accuracy is increased;
step three: the video compression perception target tracking algorithm firstly utilizes image sharpening to highlight target image edge textures, and then utilizes a rectangular filter to normalize a face image and obtain feature vectors; then compressing the Haar-like features of the target sample and the background sample by utilizing dynamic compressed sensing, establishing a target model by utilizing compressed Haar-like feature vectors, and training an Adaboost algorithm Bayesian cascade classifier; finally, a naive Bayes classifier is utilized to identify the target image and the background image, so that the dynamic tracking of face recognition is realized;
step four: the gait recognition algorithm, the face recognition algorithm and the video compressed sensing target tracking algorithm are fused, and the specific process is as follows: 1) Primarily screening the station monitoring video target characters through gait recognition, and recording and storing; 2) Performing a face recognition process according to the character images after primary screening to further confirm target information; 3) According to the output face characteristic information, a target positioning process is carried out, a video compression sensing target tracking algorithm is adopted to derive a travel track containing target information, a frame sequence image containing a target is recorded, and finally the frame sequence image is fed back to a central processing unit, so that the target person and the information of a person in close contact with the target person can be conveniently searched; the purpose of intelligent epidemic situation prevention and control of the station is achieved.
2. The method for controlling intelligent epidemic situation of station according to claim 1, wherein in the first step, the gait recognition optimization algorithm comprises the following steps:
1) Feature extraction:
G(x, y) = (1/N) Σ_{t=1}^{N} B_t(x, y)   (1)

wherein: N is the number of frames contained in one cycle of the extracted binarized gait sequence; (x, y) are the coordinate values in the image; B_t(x, y) is the pixel value of the (x, y) point in the t-th frame of the image; G(x, y) is the calculated energy map;
X_c = (1/N) Σ_{i=1}^{N} x_i ,  Y_c = (1/N) Σ_{i=1}^{N} y_i   (2)

θ_i = arctan( (x_i − X_c) / (y_i − Y_c) )   (3)

wherein: (X_c, Y_c) are the centroid coordinates obtained after calculation; N is the number of contour pixel points; (x_i, y_i) are the coordinate values of the i-th contour pixel; θ_i is the calculation result of the included angle θ for the i-th pixel point;
2) GA-BP classification and identification:
training the network by utilizing a genetic algorithm, solving the learning problem of the neural network, and narrowing the threshold searching range; then, the neural network is utilized to carry out accurate solution, thereby achieving the purposes of global optimization and high speed and high efficiency; the neural network recognition rate is used as a target parameter and optimized through a genetic algorithm; and using the weight and the threshold value of the individual representative network, taking the prediction error of the BP neural network initialized by the individual as the adaptation value of the individual, and searching the optimal individual through selection, crossing and mutation operations.
3. The station intelligent epidemic prevention and control method according to claim 1, wherein the face recognition algorithm in step two comprises the following steps:
1) DCT-LBP joint processing

S = a·DCT + b·LBP (4)

wherein: a is the weighting coefficient of the DCT, b is the weighting coefficient of the LBP, and a + b = 1; S is the weighted image;
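A small sketch of the weighted fusion in equation (4), assuming SciPy's DCT and scikit-image's LBP as the two feature maps; the per-map rescaling before fusion is an added assumption so that the two maps share a common scale.

import numpy as np
from scipy.fft import dctn
from skimage.feature import local_binary_pattern

def dct_lbp_fuse(gray, a=0.5, b=0.5):
    """Eq. (4): S = a*DCT + b*LBP with a + b = 1 (the weights here are illustrative)."""
    assert abs(a + b - 1.0) < 1e-9
    g = gray.astype(np.float64)
    dct_map = dctn(g, norm="ortho")                                # global (frequency) features
    lbp_map = local_binary_pattern(g, P=8, R=1, method="uniform")  # local texture features
    rescale = lambda m: (m - m.min()) / (m.max() - m.min() + 1e-12)
    return a * rescale(dct_map) + b * rescale(lbp_map)             # weighted image S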
2) Convolution operation

Y_{m,n} = f( ∑_{j=1}^{J} ∑_{i=1}^{I} W_{j,i} · X_{m+j, n+i} + b ) (5)

wherein: X represents the two-dimensional input vector, the dimensions of which are (m, n); Y represents the feature map of size m×n; f represents the activation function; W represents a convolution kernel of size J×I; b represents the bias;
3) Pooled sampling

x_j^l = f( β_j^l · down(x_j^{l−1}) + b_j^l ) (6)

wherein: x_j^l denotes the j-th feature map of the l-th pooling layer; f represents the activation function; down(·) is the down-sampling function; β_j^l and b_j^l respectively represent the multiplicative bias and the additive bias of the feature map x_j^l;
4) Full connection

y_i^l = f( ∑_{j=1}^{N} w_{ij}^l · y_j^{l−1} + b_i^l ) (7)

wherein: f represents the activation function; N is the number of neurons of the (l−1)-th layer; l represents the current layer number; w_{ij}^l represents the connection parameter between the j-th unit of the (l−1)-th layer and the i-th unit of the l-th layer; b_i^l is the bias term of the i-th unit of the l-th layer; y_i^l represents the output value of the i-th unit of the l-th layer;
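For illustration, a single-channel NumPy sketch of the three layer types in equations (5)–(7) follows; real LeNet-style networks stack multiple feature maps per layer, which is omitted here for brevity.

import numpy as np

def conv2d(X, W, b, f=np.tanh):
    """Eq. (5): valid 2-D convolution of input X with kernel W (JxI), then activation f."""
    J, I = W.shape
    out = np.empty((X.shape[0] - J + 1, X.shape[1] - I + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = np.sum(W * X[m:m + J, n:n + I]) + b
    return f(out)

def pool(Xprev, beta, b_add, f=np.tanh):
    """Eq. (6): 2x2 mean down-sampling, scaled by the multiplicative bias beta,
    shifted by the additive bias, then passed through the activation."""
    H, W = Xprev.shape
    d = Xprev[:H - H % 2, :W - W % 2].reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))
    return f(beta * d + b_add)

def dense(y_prev, W, b, f=np.tanh):
    """Eq. (7): fully connected layer, y_i = f(sum_j w_ij * y_j + b_i)."""
    return f(W @ y_prev + b)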
5) Softmax regression classification

During network training, so that the same cosine-similarity evaluation criterion applies at test time, the Euclidean distance used for sample similarity is converted into the cosine distance, and the weights and features are normalized and scaled by the value S, which is learned automatically, so that separating the discriminative features on the hypersphere achieves a better effect; the joint expression of the loss function is then:

L = L_S + λ·L_C (8)

wherein: L_S is the Softmax classification loss; L_C is the cosine correction term; λ is the balance coefficient of the normalized joint expression.
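One plausible reading of equation (8) is sketched below in PyTorch: weights and features are L2-normalized so the logits become scaled cosine similarities, and a λ-weighted cosine term pulls each feature toward its class weight on the hypersphere. The fixed scale s (the claim learns the scale S automatically), the value of λ, and the exact form of the correction term are assumptions.

import torch
import torch.nn.functional as F

class CosineSoftmaxLoss(torch.nn.Module):
    """Softmax on L2-normalized weights and features (logits = s * cos(theta)),
    plus a lambda-weighted cosine correction: an assumed form of L = L_S + lam * L_C."""
    def __init__(self, feat_dim, n_classes, s=16.0, lam=0.1):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(n_classes, feat_dim))
        self.s, self.lam = s, lam

    def forward(self, feats, labels):
        cos = F.normalize(feats, dim=1) @ F.normalize(self.weight, dim=1).t()
        loss_softmax = F.cross_entropy(self.s * cos, labels)            # L_S
        own = cos[torch.arange(len(labels), device=cos.device), labels]
        loss_cosine = (1.0 - own).mean()                                # L_C
        return loss_softmax + self.lam * loss_cosine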
4. The station intelligent epidemic prevention and control method according to claim 1, wherein the video compressed sensing target tracking algorithm in step three comprises the following steps:
the dynamic signal processed by dynamic compressed sensing is time-varying; let X_t be the sparse signal obtained by projection through the sparse matrix, then the state-space equations of the dynamic compressed sensing model are expressed as:

X_t = f_t(X_{t−1}) + v_t (9)

Y_t = A_t·X_t + ω_t (10)

wherein: Y_t denotes the observation; A_t denotes the measurement matrix; f_t denotes the state transfer function in the state-space equation; v_t and ω_t denote the process noise and the observation noise, respectively;
according to the idea of the dynamic video compressed sensing theory, the information contained in the original signal is represented by a small number of sampled observations, thereby reducing the dimensionality of the signal; the high-dimensional feature-space vector X of the original signal is compressed and projected into a low-dimensional space by a random measurement matrix P, yielding a low-dimensional compressed feature-space vector;
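The construction of the random measurement matrix P is not specified in the claim; the sketch below assumes the sparse {+1, 0, −1} construction commonly used in the compressive tracking literature, which keeps the projection cheap to evaluate.

import numpy as np

def sparse_measurement_matrix(n_low, n_high, s=3, seed=0):
    """Sparse random measurement matrix P with entries in {+1, 0, -1}; roughly a
    1/s fraction of entries are nonzero, so P @ x is cheap to compute."""
    rng = np.random.default_rng(seed)
    p = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
    return np.sqrt(s) * rng.choice([1.0, 0.0, -1.0], size=(n_low, n_high), p=p)

def compress(P, x):
    """Project the high-dimensional feature vector X into the low-dimensional space."""
    return P @ x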
haar-like feature calculation, namely calculating sub-image feature values of all sample windows by scanning a large number of sample windows, wherein the feature values are rectangular gray scale pixel differences in the detected image, but a large number of operations are generated in the process; meanwhile, in order to keep the image scale unchanged, the calculated amount is further increased, so that the face detection speed and the training efficiency of the classifier are reduced; the mathematical expression of the Haar-like characteristic value is as follows:
feature = ∑_{i=1}^{N} ω_i · RectSum(γ_i) (11)

wherein: ω_i represents the weight matrix; N represents the number of rectangular features; RectSum(γ_i) represents the sum of the pixel values over the rectangle γ_i of the sample image;
to improve the computation rate of the feature vector values in the compressed projection, an integral image algorithm is adopted to rapidly calculate the sums of rectangular regions and their squared sums, which reduces the amount of computation and improves speed; the integral image formula is as follows:

J(m, n) = ∑_{m′<m, n′<n} H(m′, n′) (12)

wherein: J(m, n) represents the integral value of the image at pixel (m, n); H(m′, n′) represents the gray value of the image at pixel (m′, n′);
the mathematical expression of the naive Bayes classifier model is:

P(C_i | X_A) = P(X_A | C_i)·P(C_i) / P(X_A) (13)

wherein: C_i denotes a category of the data attribute; X_A denotes a test sample; the sample is assigned to the category C_i with the largest posterior probability;
in the Adaboost algorithm, the weak classifier expression is:

h(x, f, p, θ) = 1 if p·f(x) < p·θ, and 0 otherwise (14)

wherein: h(x, f, p, θ) denotes the weak classifier; x denotes a sub-window image; f(x) denotes the feature function of the sub-window; θ denotes the threshold of the f(x) function; p denotes the polarity, which sets the direction of the inequality; the classification process is the process of obtaining the threshold of the function.
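Finally, a sketch of the two classifier building blocks: the Viola-Jones-style weak classifier of equation (14), and a naive Bayes score over compressed features; the Gaussian likelihoods and equal class priors are assumptions in the usual compressive-tracking style, corresponding to the posterior comparison in equation (13).

import numpy as np

def weak_classifier(fx, p, theta):
    """Eq. (14): h = 1 when p * f(x) < p * theta, otherwise 0 (p is the polarity)."""
    return 1 if p * fx < p * theta else 0

def naive_bayes_score(v, mu_pos, sig_pos, mu_neg, sig_neg):
    """Log ratio of Gaussian naive Bayes likelihoods over the compressed feature
    vector v, with equal class priors assumed; a positive score classifies the
    candidate window as target rather than background (cf. eq. (13))."""
    log_gauss = lambda v, mu, sig: -0.5 * ((v - mu) / sig) ** 2 - np.log(sig)
    return float(np.sum(log_gauss(v, mu_pos, sig_pos) - log_gauss(v, mu_neg, sig_neg)))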
CN202110783584.4A 2021-07-12 2021-07-12 Station intelligent epidemic situation prevention and control system and method Active CN113591607B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110783584.4A CN113591607B (en) 2021-07-12 2021-07-12 Station intelligent epidemic situation prevention and control system and method

Publications (2)

Publication Number Publication Date
CN113591607A CN113591607A (en) 2021-11-02
CN113591607B true CN113591607B (en) 2023-07-04

Family

ID=78246854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110783584.4A Active CN113591607B (en) 2021-07-12 2021-07-12 Station intelligent epidemic situation prevention and control system and method

Country Status (1)

Country Link
CN (1) CN113591607B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116778376B (en) * 2023-05-11 2024-03-22 中国科学院自动化研究所 Content security detection model training method, detection method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11017211B1 (en) * 2012-09-07 2021-05-25 Stone Lock Global, Inc. Methods and apparatus for biometric verification
US9269012B2 (en) * 2013-08-22 2016-02-23 Amazon Technologies, Inc. Multi-tracker object tracking
US10902243B2 (en) * 2016-10-25 2021-01-26 Deep North, Inc. Vision based target tracking that distinguishes facial feature targets
KR20210025020A (en) * 2018-07-02 2021-03-08 스토워스 인스티튜트 포 메디컬 리서치 Face image recognition using pseudo images

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310466A (en) * 2013-06-28 2013-09-18 安科智慧城市技术(中国)有限公司 Single target tracking method and achievement device thereof
CN103942577A (en) * 2014-04-29 2014-07-23 上海复控华龙微系统技术有限公司 Identity identification method based on self-established sample library and composite characters in video monitoring
CN106570471A (en) * 2016-10-26 2017-04-19 武汉科技大学 Scale adaptive multi-attitude face tracking method based on compressive tracking algorithm
CN107463917A (en) * 2017-08-16 2017-12-12 重庆邮电大学 A kind of face feature extraction method merged based on improved LTP with the two-way PCA of two dimension
CN108563999A (en) * 2018-03-19 2018-09-21 特斯联(北京)科技有限公司 A kind of piece identity's recognition methods and device towards low quality video image
CN108573217A (en) * 2018-03-21 2018-09-25 南京邮电大学 A kind of compression tracking of combination partial structurtes information
CN111709285A (en) * 2020-05-09 2020-09-25 五邑大学 Epidemic situation protection monitoring method and device based on unmanned aerial vehicle and storage medium
CN112907810A (en) * 2021-04-02 2021-06-04 吉林大学 Face recognition temperature measurement campus access control system based on embedded GPU

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A preliminary study of gait-based age estimation techniques; Binu Muraleedharan Nair et al.; Procedia Computer Science; 381-386 *
Research on the application of compressed sensing based on the WM-CoSaMP reconstruction algorithm in gait recognition; Su Weijun; Li Mingxing; Yu Chongchong; Wang Honghong; Application Research of Computers (01); 291-294 *
Research on a multi-feature gait recognition algorithm based on weighted fusion; He Yanli; China Master's Theses Full-text Database, Information Science and Technology Series; I138-3569 *
Research and system implementation of a real-time target tracking algorithm based on compressed sensing theory; Liu Yang; Jin Xiaokang; Wang Meng; 任D梵; Software (08); 20-26 *
Song Qiang; Zhang Ying. A face recognition video compressed sensing tracking algorithm. Journal of University of Science and Technology Liaoning. 2021, 371-378. *
Song Qiang; Zhang Ying. A face recognition algorithm based on convolutional neural networks. Journal of University of Science and Technology Liaoning. 2020, 363-367+376. *

Also Published As

Publication number Publication date
CN113591607A (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN111860670B (en) Domain adaptive model training method, image detection method, device, equipment and medium
Mansanet et al. Local deep neural networks for gender recognition
CN111709311B (en) Pedestrian re-identification method based on multi-scale convolution feature fusion
US8855363B2 (en) Efficient method for tracking people
Jia et al. Visual tracking via coarse and fine structural local sparse appearance models
US20140307917A1 (en) Robust feature fusion for multi-view object tracking
CN112070058A (en) Face and face composite emotional expression recognition method and system
CN113361495A (en) Face image similarity calculation method, device, equipment and storage medium
CN109801305B (en) SAR image change detection method based on deep capsule network
CN115527269B (en) Intelligent human body posture image recognition method and system
Liu et al. Gait recognition using deep learning
Song et al. Feature extraction and target recognition of moving image sequences
CN113591607B (en) Station intelligent epidemic situation prevention and control system and method
Kumar et al. Predictive analytics on gender classification using machine learning
Akilan Video foreground localization from traditional methods to deep learning
Kher et al. Soft Computing Techniques for Various Image Processing Applications: A Survey
Fritz et al. Rapid object recognition from discriminative regions of interest
Briceño et al. Robust identification of persons by lips contour using shape transformation
CN110163106A (en) Integral type is tatooed detection and recognition methods and system
Fu et al. Research on Video Object Tracking Based on Improved Camshift Algorithm
CN115019365B (en) Hierarchical face recognition method based on model applicability measurement
Harshitha et al. An Advanced Ensemble Directionality Pattern (EDP) based Block Ensemble Neural Network (BENN) Classification Model for Face Recognition System
Nagarajan et al. Face Recognition Using Genetic Algorithm-A Unique Approach With Pre-Trained Models For Low Power Microcontrollers
Balaji et al. Machine learning approach for object detection-a survey approach
Nagalakshmi et al. Gender Classification using Machine Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant