CN109214467B - Transformer operator dressing robust identification method considering classifier output sensitivity - Google Patents
Info
- Publication number
- Publication number: CN109214467B (application CN201811213574.1A)
- Authority
- CN
- China
- Prior art keywords
- training sample
- classifier
- basis function
- sensitivity
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a transformer operator dressing robust identification method considering classifier output sensitivity, relating to the field of image identification and analysis, and comprising the following steps: acquiring a human body image, and extracting feature vectors of each unit based on the HOG (Histogram of Oriented Gradient) and HOC (Histogram of Oriented Color) operators; solving the basis-function centers of the radial basis function neural network with respect to the feature vectors by the K-means clustering method; selecting a Gaussian function as the basis function of the radial basis function neural network, and calculating the variances; randomly sampling points in the neighborhood of each training sample by the Monte Carlo method, and solving the sensitivity of each training sample; establishing a loss function considering the classifier output sensitivity from the training-sample sensitivities, and iteratively solving the hidden-to-output weights by the least square-Newton method to obtain the radial basis function neural network classifier on the training samples; and performing dressing recognition on substation operators with the trained classifier, and outputting the recognition result. The robustness of the classifier in practical recognition applications is enhanced.
Description
Technical Field
The invention relates to the field of image recognition and analysis methods, in particular to a robust recognition method for dressing of a transformer operator considering classifier output sensitivity.
Background
With the continuous improvement of video monitoring technology, intelligent video monitoring systems are widely applied to power systems. The intelligent analysis, detection and identification technology of video images and the like are continuously developed, and powerful technical support is provided for the development of unattended intelligent substations. After the intelligent monitoring technology is applied to the transformer substation, the intelligent analysis of the behaviors of the constructors can be realized, the safety alarm can be performed according to the requirements, and the safety behaviors of the constructors and the efficiency and accuracy of safety supervision can be effectively improved.
At present, human body identification and behavior analysis based on video systems are well developed and widely applied. For dressing analysis of operators in substations, however, the operating environment is complex: occlusion, uneven illumination, and changes in scale and viewing angle often occur, and operator dressing differs greatly in wear, style, and build, differences that are hard to quantify. Existing methods generally require collecting a large amount of varied sample data to learn a model, and the model must be updated continuously so that new variations can be recognized effectively. Acquiring and labeling large amounts of data is very costly, and iterative model updating also challenges practical deployment.
In summary, the existing worker dressing analysis methods need further improvement, particularly in recognition robustness.
Disclosure of Invention
The invention addresses the fluctuating identification accuracy of existing substation operator dressing analysis methods in complex, changing environments, and aims to provide an operator dressing analysis method that remains robust to environmental changes in substation application test environments.
The invention provides a transformer operator dressing robust identification method considering classifier output sensitivity, which comprises the following steps of:
(1) acquiring a human body image, dividing the image into three units (helmet, upper garment, lower garment) in the ratio 1:4:4, and extracting for each unit the feature vectors hog_i (i = 1, 2, 3) and hoc_i (i = 1, 2, 3) based on the HOG (Histogram of Oriented Gradient) and HOC (Histogram of Oriented Color) operators;
(2) solving, by the K-means clustering method, the basis-function centers c_j (j = 1, 2, ..., h) of the radial basis function neural network with respect to the training-sample feature vectors hog_i (i = 1, 2, 3) and hoc_i (i = 1, 2, 3), where h is the number of cluster centers selected from the training samples;
(3) selecting a Gaussian function as the basis function of the radial basis function neural network, and calculating the variances σ_j (j = 1, 2, ..., h);
(4) randomly sampling points in the neighborhood of each training sample by the Monte Carlo method, and solving the sensitivity sen_p (p = 1, 2, ..., m) of each training sample, where m is the total number of training samples;
(5) establishing a loss function considering the output sensitivity of the classifier according to the sensitivity of the training sample, and iteratively solving the weight from the hidden layer to the output layer by using a least square-Newton method to obtain a radial basis function neural network classifier on the training sample;
(6) performing dressing recognition on substation operators with the trained classifier, and outputting the recognition result.
In the robust substation operator dressing identification method considering classifier output sensitivity, a human body image is acquired and divided proportionally into helmet, upper-garment and lower-garment units, and feature vectors based on the HOG and HOC operators are extracted for each unit. The basis-function centers of the radial basis function neural network with respect to the training-sample feature vectors are obtained by the K-means clustering method; a Gaussian function is selected as the basis function and the variances are computed. Points are randomly sampled in the neighborhood of the training samples by the Monte Carlo method to obtain each sample's sensitivity, a loss function considering the classifier output sensitivity is established from these sensitivities, and the hidden-to-output weights are solved iteratively by the least square-Newton method, giving the radial basis function neural network classifier on the training samples. Finally, dressing recognition is performed on substation operators with the trained classifier and the recognition result is output. By accounting for the output fluctuation of the classifier in the neighborhood of the training samples when establishing the classification model, the invention improves the generalization ability of substation operator dressing analysis in complex operating environments and enhances the robustness of the classifier in practical recognition applications.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of the robust substation operator dressing identification method considering classifier output sensitivity according to an embodiment of the present invention;
FIG. 2 is a graph of the analysis results of a radial basis function neural network classifier with consideration of classifier output sensitivity provided by an embodiment of the present invention on four typical positive-class and negative-class test samples.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
In one embodiment, as shown in fig. 1, a robust transformer worker dressing identification method considering classifier output sensitivity according to an embodiment of the present invention includes the following steps:
Step 1: acquiring a human body image, dividing the image into three units (helmet, upper garment, lower garment) in the ratio 1:4:4, and extracting for each unit the feature vectors hog_i (i = 1, 2, 3) and hoc_i (i = 1, 2, 3) based on the HOG and HOC operators;
where i = 1 denotes the helmet unit, i = 2 the upper-garment unit, and i = 3 the lower-garment unit; HOG denotes the gradient-orientation histogram, and HOC denotes the color histogram.
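As a rough illustration of the kind of descriptors step 1 produces, the sketch below computes a single gradient-orientation histogram and a per-channel color histogram in plain Python. The cell layout, bin counts, and normalization are illustrative simplifications, not the patent's exact HOG and HOC operators.

```python
import math

def hog_histogram(gray, n_bins=9):
    """Simplified HOG-style descriptor: one unsigned-orientation
    histogram over the whole unit, weighted by gradient magnitude."""
    h, w = len(gray), len(gray[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]   # central differences
            gy = gray[y + 1][x] - gray[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang / 180.0 * n_bins) % n_bins] += mag
    s = sum(hist) or 1.0
    return [v / s for v in hist]                    # L1-normalized

def color_histogram(pixels, n_bins=8):
    """Simplified HOC-style descriptor: per-channel intensity histogram
    over (r, g, b) pixels with values in 0..255."""
    hist = [0.0] * (3 * n_bins)
    for r, g, b in pixels:
        for c, v in enumerate((r, g, b)):
            hist[c * n_bins + min(v * n_bins // 256, n_bins - 1)] += 1.0
    s = sum(hist) or 1.0
    return [v / s for v in hist]
```

In the patent's setup, one hog_i and one hoc_i vector would be computed per unit (helmet, upper garment, lower garment) and concatenated into the training-sample feature vector.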
Step 2: method for solving feature vector hog of radial basis function neural network relative to training sample by adopting K-means clustering methodi(i=1,2,3)、hociCenter c of basis function of (i ═ 1,2,3)j(j ═ 1,2,. eta., h), h is the number of cluster centers selected in the training samples;
the step 2 specifically comprises:
(2-1) network initialization: randomly select the feature vectors hog_i (i = 1, 2, 3), hoc_i (i = 1, 2, 3) of h training samples as the initial values of the cluster centers d_j (j = 1, 2, ..., h), where d_j is the j-th cluster center;
(2-2) group the input training-sample set by the nearest-neighbor rule: assign each input sample to the cluster set whose center d_j is nearest to x_p in Euclidean distance, where x_p is the feature vector of the p-th training sample;
(2-3) adjust the cluster centers: compute the mean of the training-sample feature vectors in each cluster set and take each mean as the new value of that cluster's center d_j. If the values of the cluster centers d_j no longer change, the resulting d_j are the basis-function centers c_j (j = 1, 2, ..., h) of the radial basis function neural network; otherwise, return to (2-2) for the next round of solving the centers c_j;
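Steps (2-1) to (2-3) can be sketched as a standard K-means loop; the sample representation (lists of floats), seed, and iteration cap below are illustrative assumptions:

```python
import math
import random

def kmeans(samples, h, iters=100, seed=0):
    """Find h cluster centers: pick h random samples as initial centers (2-1),
    assign each sample to the nearest center by Euclidean distance (2-2),
    then move each center to its cluster mean until nothing changes (2-3)."""
    rng = random.Random(seed)
    centers = [list(s) for s in rng.sample(samples, h)]          # (2-1)
    for _ in range(iters):
        clusters = [[] for _ in range(h)]
        for x in samples:                                        # (2-2) nearest-neighbor rule
            j = min(range(h), key=lambda j: math.dist(x, centers[j]))
            clusters[j].append(x)
        new_centers = []
        for j, cl in enumerate(clusters):                        # (2-3) recompute means
            if cl:
                new_centers.append([sum(d) / len(cl) for d in zip(*cl)])
            else:
                new_centers.append(centers[j])                   # keep empty cluster's center
        if new_centers == centers:                               # converged
            break
        centers = new_centers
    return centers
```

The returned centers play the role of the basis-function centers c_j in step 2.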
Step 3: selecting a Gaussian function as the basis function of the radial basis function neural network, and calculating the variances σ_j (j = 1, 2, ..., h);
Specifically, when a Gaussian function is selected as the basis function of the radial basis function neural network, the variance is calculated as
σ_j = c_max / √(2h), j = 1, 2, ..., h,
where c_max is the maximum distance between the selected basis-function centers of the radial basis function neural network;
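A sketch of the width computation and the resulting Gaussian basis function, assuming the common heuristic σ_j = c_max / √(2h), which matches the description of c_max as the maximum distance between the chosen centers (the patent's formula image is not rendered in this text):

```python
import math

def rbf_widths(centers):
    """All widths set by the heuristic sigma_j = c_max / sqrt(2h),
    with c_max the largest pairwise distance between centers."""
    h = len(centers)
    c_max = max(math.dist(a, b) for a in centers for b in centers)
    return [c_max / math.sqrt(2 * h)] * h

def gaussian_basis(x, center, sigma):
    """phi_j(x) = exp(-||x - c_j||^2 / (2 sigma_j^2))"""
    return math.exp(-math.dist(x, center) ** 2 / (2 * sigma ** 2))
```

With this choice every hidden unit shares one width, which keeps the basis neither too peaked nor too flat relative to the spread of the centers.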
Step 4: randomly sampling points in the neighborhood of each training sample by the Monte Carlo method, and solving the sensitivity sen_p (p = 1, 2, ..., m) of each sample, where m is the total number of training samples;
the step 4 specifically comprises the following steps:
(4-1) assume that the differences between data points in the neighborhood of a training sample and that sample's feature vector follow a Gaussian distribution with zero mean and standard deviation q, written Δx ~ N(0, q²), Δx = {Δx_p | p = 1, 2, ..., m}, where Δx is the set of differences between the feature vector of a point randomly sampled in each training sample's neighborhood and that sample's feature vector, Δx_p is the difference for the p-th training sample, and q is the standard deviation of the differences between the p-th training sample's feature vector and points in its neighborhood;
(4-2) perturb each training sample at random according to the Gaussian distribution, sample several points in its neighborhood, compute the output difference for each sampled point, and average all the differences to obtain the sensitivity.
The sensitivity is defined as
sen_p = (1/n) Σ_{k=1..n} || f_θ(x_p) − f_θ(x_p + (Δx_p)_k) ||²,
where x = {x_p | p = 1, 2, ..., m}, Δx = {Δx_p | p = 1, 2, ..., m}, and −q ≤ Δx_p ≤ q; x is the set of training-sample feature vectors, x_p the feature vector of the p-th training sample, θ the weights of the radial basis function neural network, and f_θ the mapping from input to output of the radial basis function neural network with weight θ; n is the number of points sampled in the neighborhood of the p-th training sample, and (Δx_p)_k is the difference between the feature vector of the k-th sampled point in that neighborhood and the feature vector of the training sample;
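A Monte Carlo estimate of sen_p along the lines of step (4-2) might look like the following; the clipping of the Gaussian perturbation to [−q, q] per component and the scalar-valued f_θ are assumptions for the sketch:

```python
import math
import random

def sensitivity(f, x_p, q, n=100, seed=0):
    """Monte Carlo estimate of sen_p: draw n perturbations
    Delta x ~ N(0, q^2) clipped to [-q, q] componentwise, and average
    the squared output differences over the sampled neighborhood points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        dx = [max(-q, min(q, rng.gauss(0.0, q))) for _ in x_p]
        fx = f(x_p)
        fdx = f([xi + di for xi, di in zip(x_p, dx)])
        total += (fx - fdx) ** 2
    return total / n
```

A sample whose output swings strongly under small input perturbations gets a large sen_p, and the loss in step 5 penalizes exactly that.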
and 5: establishing a loss function considering the output sensitivity of the classifier according to the sensitivity of the training sample, and iteratively solving the weight from the hidden layer to the output layer of the radial basis function neural network by using a least square-Newton method to obtain the radial basis function classifier on the training sample;
Specifically, the loss function considering the classifier output sensitivity is defined as
L(θ) = Σ_{p=1..m} ( || y_p − f_θ(x_p) ||² + γ · sen_p ),
where y_p is the true label of the p-th training sample, f_θ(x_p) is the output of the radial basis function neural network model for the p-th training sample, || y_p − f_θ(x_p) || defines the difference between the true label and the model output and forms the classification-error term, γ is a hyper-parameter balancing sensitivity against classification error, and sen_p is the sensitivity of the p-th training sample;
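Evaluating this loss for a given set of network outputs can be sketched as below; the squared-error form and the plain sum over samples are assumptions, since the patent's formula image only indicates a classification-error term plus γ times the sensitivity:

```python
def sensitivity_loss(outputs, labels, sens, gamma):
    """L(theta) = sum_p ( (y_p - f_theta(x_p))^2 + gamma * sen_p ):
    a classification-error term plus the sensitivity penalty weighted
    by the hyper-parameter gamma."""
    return sum((y - o) ** 2 + gamma * s
               for o, y, s in zip(outputs, labels, sens))
```

With gamma = 0 the loss reduces to the ordinary error criterion of a conventional RBF network, which is exactly the degenerate case discussed for Table 2.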
step 6: dressing recognition is carried out on substation operation personnel according to the trained classifier, and a recognition result is output;
specifically, a trained radial basis function neural network classifier is used for judging whether the dressing image of the operator meets the dressing requirement or not, and the recognition result is output.
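A minimal sketch of the trained classifier's decision in step 6: Gaussian hidden-layer activations, a weighted sum at the output, and a threshold mapping the score to a dressing verdict. The threshold value and the single-output form are illustrative assumptions:

```python
import math

def rbf_predict(x, centers, sigmas, weights, threshold=0.5):
    """Forward pass of a trained RBF classifier: each hidden unit fires
    exp(-||x - c_j||^2 / (2 sigma_j^2)), the output is the weighted sum,
    and the score is thresholded into a dressing verdict."""
    score = sum(w * math.exp(-math.dist(x, c) ** 2 / (2 * s ** 2))
                for w, c, s in zip(weights, centers, sigmas))
    return "correct dressing" if score >= threshold else "incorrect dressing"
```

In deployment, x would be the concatenated hog_i/hoc_i feature vector of a detected operator, and the verdict would drive the safety alarm.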
In the robust substation operator dressing identification method considering classifier output sensitivity described above, a human body image is first acquired and divided proportionally into helmet, upper-garment and lower-garment units, and the HOG- and HOC-based feature vectors of each unit are extracted. The basis-function centers of the radial basis function neural network with respect to the sample feature vectors are then obtained by the K-means clustering method, a Gaussian function is selected as the basis function, and the variances are computed. Points are randomly sampled in the neighborhood of the training samples by the Monte Carlo method and the sensitivity of each sample is solved; a loss function considering the classifier output sensitivity is established from these sensitivities, and the hidden-to-output weights are solved iteratively by the least square-Newton method to obtain the radial basis function neural network classifier on the training samples. Finally, dressing recognition is performed on substation operators with the trained classifier and the recognition result is output. By accounting for the output fluctuation of the classifier in the neighborhood of the training-sample points when establishing the classification model, the method improves the generalization ability of substation operator dressing analysis in complex operating environments and enhances robustness in practical recognition applications.
In one embodiment, a system test is performed using, as an example, operators at a substation.
In this example, wearing a yellow safety helmet with a blue work top and blue work bottoms is defined as correct dressing, the positive class; wearing a safety helmet of another color or no helmet, or tops or bottoms that are not blue work clothes, is defined as incorrect dressing, the negative class. The sample data comprise two main types: real substation data and simulated acquisition data. The real substation data come from real substation scenes; the data volume is small, but they reflect the actual application scenario most faithfully. The simulated acquisition data were collected in a non-operational environment based on operator dressing provided by a substation in Guangdong; their environment is uniform and the variation in dressing color and appearance is small, so they are mainly used for experimental comparison. All samples are scaled to the same height, and both the real substation data and the simulated acquisition data are split 1:1 into training and test data, which are used to train the radial basis function neural network classifier considering classifier output sensitivity and the conventional radial basis function neural network classifier. The analysis results of the classifier considering classifier output sensitivity on four typical positive-class and negative-class test samples are shown in fig. 2.
To compare the recognition performance of the radial basis function neural network considering classifier output sensitivity (S-RBFNN) and the conventional radial basis function neural network (RBFNN) on positive-class and negative-class samples, this embodiment evaluates the dressing-analysis recognition accuracy of the system with three indices: accuracy (A), recall (R), and false-alarm rate (FA), defined as
A = (TP + TN) / (P + N), R = TP / P, FA = FP / N,
where P and N are the numbers of positive-class and negative-class samples respectively, TP is the number of samples whose prediction and label are both positive, TN the number whose prediction and label are both negative, and FP the number predicted positive but labeled negative.
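The three indices follow directly from the confusion counts defined in words above; the example numbers below are hypothetical, not taken from the patent's tables:

```python
def metrics(tp, tn, fp, p, n):
    """Accuracy A, recall R, and false-alarm rate FA:
    A = (TP + TN) / (P + N),  R = TP / P,  FA = FP / N."""
    return (tp + tn) / (p + n), tp / p, fp / n

# Hypothetical evaluation: 50 positive and 50 negative test samples.
A, R, FA = metrics(tp=45, tn=40, fp=10, p=50, n=50)
```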
These three indices are used to evaluate the recognition performance of the radial basis function neural network based on classifier output sensitivity (S-RBFNN) and the conventional radial basis function neural network (RBFNN) on the real substation data set and the simulated acquisition data set respectively; the results are shown in Tables 1 and 2:
TABLE 1: Accuracy (%) of each method for operator dressing analysis on the real substation data set
In Table 1, RBFNN denotes the conventional radial basis function neural network performing recognition analysis of operator dressing images on the real substation data set, and S-RBFNN denotes the radial basis function neural network based on classifier output sensitivity performing the same analysis.
The results show that, compared with the conventional radial basis function neural network, accuracy of operator dressing analysis on the real substation data set improves by more than 2 percentage points, indicating that the sensitivity-based network recognizes better overall in complex environments. Its recall improves by more than 5 percentage points over the conventional network. This shows that when an operator's dressing differs in appearance or color because of wear, illumination differences, or shooting angle, the conventional radial basis function neural network is likely to misidentify correct dressing as incorrect, whereas the network based on classifier output sensitivity, having accounted for sensitivity to sample appearance and color during learning, still maintains good recognition of correct dressing in such cases.
Table 1 demonstrates the effectiveness of the radial basis neural network based on classifier output sensitivity proposed herein, as well as its robustness to minor changes in operator dressing appearance, color, etc.
TABLE 2: Accuracy (%) of each method for operator dressing analysis on the simulated acquisition data set
Table 2 shows the results of the different methods on the simulated acquisition data set. The recognition accuracy of the radial basis function neural network considering classifier output sensitivity is within 1 percentage point of that of the conventional radial basis function neural network. This reflects the fact that the simulated data have a single scene, uniform dressing of target personnel, and no obvious color differences, so the portion of the loss that the sensitivity term diverts from pure error minimization has a slightly negative effect on this data set.
Table 2 also shows that in application environments with a single scene and small target variation, the network considering classifier output sensitivity should use smaller γ (the hyper-parameter balancing sensitivity and classification error) and q (the standard deviation of the differences between each training sample and the data points in its neighborhood); in the most extreme case both are set to 0, and the radial basis function neural network based on classifier output sensitivity degenerates to the conventional radial basis function neural network.
Combining the experimental results of Tables 1 and 2, and considering that practical application scenarios are usually complex, the radial basis function neural network considering classifier output sensitivity maintains better robustness in practice, which makes the method promising for practical application.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (3)
1. A transformer substation operator dressing robust identification method considering classifier output sensitivity is characterized by comprising the following steps:
(1) acquiring a human body image, dividing the image into three units (helmet, upper garment, lower garment) in the ratio 1:4:4, and extracting for each unit the feature vectors hog_i (i = 1, 2, 3) and hoc_i (i = 1, 2, 3) based on the HOG and HOC operators, where HOG denotes the gradient-orientation histogram and HOC the color histogram;
(2) solving, by the K-means clustering method, the basis-function centers c_j (j = 1, 2, ..., h) of the radial basis function neural network for the feature vectors hog_i (i = 1, 2, 3) and hoc_i (i = 1, 2, 3), where h is the number of cluster centers selected from the training samples;
(3) selecting a Gaussian function as the basis function of the radial basis function neural network, and calculating the variances σ_j (j = 1, 2, ..., h);
(4) randomly sampling points in the neighborhood of each training sample by the Monte Carlo method, and solving the sensitivity sen_p (p = 1, 2, ..., m) of each training sample, where m is the total number of training samples;
(5) establishing a loss function considering the output sensitivity of the classifier according to the sensitivity of the training sample, and iteratively solving the weight from the hidden layer to the output layer of the radial basis function neural network by using a least square-Newton method to obtain the radial basis function classifier on the training sample;
(6) performing dressing recognition on substation operators with the trained classifier, and outputting the recognition result.
2. The transformer operator dressing robust identification method considering classifier output sensitivity of claim 1, wherein the sensitivity sen_p of each sample is defined as: sen_p = E(|| f_θ(x) − f_θ(x + Δx) ||²), where x = {x_p | p = 1, 2, ..., m}, Δx = {Δx_p | p = 1, 2, ..., m}, and −q ≤ Δx_p ≤ q; x is the set of feature vectors of the training samples, x_p the feature vector of the p-th training sample, Δx the set of differences between the feature vector of a point randomly sampled in each training sample's neighborhood and that sample's feature vector, Δx_p the difference for the p-th training sample, q the standard deviation of the differences between the p-th training sample's feature vector and points in its neighborhood, θ the weights of the radial basis function neural network, and f_θ the mapping from input to output of the radial basis function neural network with weight θ.
3. The transformer operator dressing robust identification method considering classifier output sensitivity of claim 1, wherein the loss function considering classifier output sensitivity is defined as: L(θ) = Σ_{p=1..m} ( || y_p − f_θ(x_p) ||² + γ · sen_p ), where y_p is the true label of the p-th training sample, f_θ(x_p) is the output of the radial basis function neural network model for the p-th training sample, || y_p − f_θ(x_p) || defines the difference between the true label and the model output and is the classification-error term, γ is a hyper-parameter balancing sensitivity and classification error, and sen_p is the sensitivity of the p-th training sample.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811213574.1A CN109214467B (en) | 2018-10-17 | 2018-10-17 | Transformer operator dressing robust identification method considering classifier output sensitivity |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109214467A CN109214467A (en) | 2019-01-15 |
CN109214467B true CN109214467B (en) | 2021-08-10 |
Family
ID=64980625
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811213574.1A Active CN109214467B (en) | 2018-10-17 | 2018-10-17 | Transformer operator dressing robust identification method considering classifier output sensitivity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109214467B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11227215B2 (en) * | 2019-03-08 | 2022-01-18 | International Business Machines Corporation | Quantifying vulnerabilities of deep learning computing systems to adversarial perturbations |
CN110427808A (en) * | 2019-06-21 | 2019-11-08 | 武汉倍特威视系统有限公司 | Police uniform recognition methods based on video stream data |
CN114082169B (en) * | 2021-11-22 | 2023-03-28 | 江苏科技大学 | Disabled hand soft body rehabilitation robot motor imagery identification method based on electroencephalogram signals |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102609686A (en) * | 2012-01-19 | 2012-07-25 | 宁波大学 | Pedestrian detection method |
CN102831409A (en) * | 2012-08-30 | 2012-12-19 | 苏州大学 | Method and system for automatically tracking moving pedestrian video based on particle filtering |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10572979B2 (en) * | 2017-04-06 | 2020-02-25 | Pixar | Denoising Monte Carlo renderings using machine learning with importance sampling |
2018-10-17 | CN CN201811213574.1A patent/CN109214467B/en | Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102609686A (en) * | 2012-01-19 | 2012-07-25 | 宁波大学 | Pedestrian detection method |
CN102831409A (en) * | 2012-08-30 | 2012-12-19 | 苏州大学 | Method and system for automatically tracking moving pedestrian video based on particle filtering |
Non-Patent Citations (2)
Title |
---|
Monte Carlo simulation study on phase function; Fu Yongji et al.; Proceedings of SPIE; 2006-02-28; pp. 1-7 *
Research and Application of Identity Recognition Technology Based on Convolutional Neural Networks (in Chinese); Wang Yang; China Master's Theses Full-text Database, Information Science and Technology Series; 2018-04-15 (No. 04); pp. I138-2304 *
Also Published As
Publication number | Publication date |
---|---|
CN109214467A (en) | 2019-01-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhao et al. | SVM based forest fire detection using static and dynamic features | |
CN105787472B (en) | An anomaly detection method based on spatio-temporal Laplacian eigenmap learning | |
CN109214467B (en) | Transformer operator dressing robust identification method considering classifier output sensitivity | |
CN104809463B (en) | A high-precision fire flame detection method based on dense scale-invariant feature transform dictionary learning | |
CN105825511A (en) | Image background definition detection method based on deep learning | |
CN104063719A (en) | Method and device for pedestrian detection based on deep convolutional network | |
CN105488809A (en) | Indoor scene semantic segmentation method based on RGBD descriptors | |
CN104077605A (en) | Pedestrian search and recognition method based on color topological structure | |
CN105138982A (en) | Crowd abnormity detection and evaluation method based on multi-characteristic cluster and classification | |
CN108734184A (en) | A method and device for analyzing sensitive images | |
CN105825233B (en) | A pedestrian detection method based on an online-learning random fern classifier | |
CN103617413B (en) | Method for identifying object in image | |
CN106228109A (en) | An action recognition method based on skeleton motion trajectories | |
Javad Shafiee et al. | Embedded motion detection via neural response mixture background modeling | |
CN109214430B (en) | Pedestrian re-identification method based on feature space topological distribution | |
CN110458022A (en) | An autonomously learning object detection method based on domain adaptation | |
CN103714340A (en) | Self-adaptation feature extracting method based on image partitioning | |
CN105975914B (en) | Method for discriminating kinship among husband, wife, and child based on linear combination features | |
Jin et al. | Vision-based forest fire detection using machine learning | |
Li et al. | A Deep Learning–Based Diagnosis Model Driven by Tuyere Images Big Data for Iron‐Making Blast Furnaces | |
CN104537392A (en) | Object detection method based on distinguishing semantic component learning | |
CN106682582A (en) | Compressed sensing appearance model-based face tracking method and system | |
CN108960013B (en) | Pedestrian re-identification method and device | |
CN110443207B (en) | Target progressive detection and identification method based on hierarchical feature tensor | |
CN109614893B (en) | Intelligent abnormal behavior track identification method and device based on situation reasoning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||