CN101493890B - Dynamic visual attention region extraction method based on features - Google Patents

Dynamic visual attention region extraction method based on features

Info

Publication number
CN101493890B
Authority
CN
China
Prior art keywords
feature
small block
basis function
picture
RGB
Prior art date
Legal status
Expired - Fee Related
Application number
CN2009100466886A
Other languages
Chinese (zh)
Other versions
CN101493890A (en)
Inventor
侯小笛
祁航
张丽清
祝文骏
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN2009100466886A
Publication of CN101493890A
Application granted
Publication of CN101493890B


Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a feature-based dynamic visual attention region extraction method in the technical field of machine vision. The method comprises the following steps: first, independent component analysis is applied to a large number of natural images to perform sparse decomposition, yielding a group of filter basis functions and a corresponding group of reconstruction basis functions; the input image is divided into m × m RGB blocks, which are projected onto this basis to obtain the features of the image. Second, the efficient coding principle is used to measure an incremental coding length index for each feature. Third, according to these incremental coding length indices, the saliency of each block is obtained by redistributing the energy of the features, finally producing a saliency map. The method eliminates the 'time slice' and realizes continuous sampling, so that data from different frames jointly guide the saliency computation; this solves the problem that the saliency of each frame would otherwise require independent processing, and thus achieves dynamic behavior.

Description

Feature-based dynamic visual attention region extraction method
Technical field
The present invention relates to a method in the technical field of image processing; specifically, it relates to a feature-based dynamic visual attention region extraction method.
Background art
With the continuous development of artificial intelligence technology, machine vision is applied more and more widely in everyday life. It uses computers to imitate the human visual function, but it is not merely a simple extension of the human eye; more importantly, it takes over part of the function of the human brain: extracting information from images of objective things, processing and understanding it, and finally using it for actual detection, measurement, and control. Because machine vision is fast, information-rich, and versatile, it is widely applied in quality inspection, identity verification, object detection and recognition, robotics, autonomous vehicles, and so on.
At present, engineering can already build sensors that surpass the human eye in every respect (including viewing angle, sensitivity, spectral range, and dynamic characteristics), so the exploration of 'looking' has reached a certain level; a machine vision system, however, needs not only to 'look' but also to 'perceive'. The human selective attention mechanism guarantees the efficiency with which the eye acquires information, so it has attracted wide attention and research, and various visual attention region extraction techniques have been proposed and widely applied. For example, extraction of visual attention regions based on the selective attention mechanism can find the regions of interest in an image and then search preferentially within them, improving the efficiency of object detection and recognition; the regions of interest thus found can be used for efficient picture compression (low compression ratio in the regions of interest, high elsewhere), image scaling (a larger scaling ratio for the regions of interest than for other regions), and so on. Because visual attention region extraction has a huge advantage in the efficiency of information acquisition, it appears frequently in machine vision processing.
A search of the prior art literature shows that visual attention region extraction goes back to the saliency map proposed by Koch and Ullman in 1985; the technique was later refined by Itti and Koch into a complete saliency-map framework. See: LAURENT I, CHRISTOF K, ERNST N. A model of saliency-based visual attention for rapid scene analysis [J]. IEEE Transactions on PAMI, 1998, 20(11): 1254-1259. This method is a space-based extraction technique. The picture is first split into several parallel channels such as color, orientation, intensity, and texture; information is extracted from each channel separately, forming feature maps that preserve the topology of the picture while recording the strength of the feature responses. Next, each feature map is filtered by 'Mexican hat' (Difference of Gaussians) functions at a series of scales; such a function is obtained by taking the difference of two Gaussians of different scales. It is very sensitive to detecting change, responds weakly to uniform signals, and has a general biological interpretation. A winner-take-all competition network then compares the different attention regions, finally generating a global map of the saliency of every point, called the saliency map. Although this method and the later space-based analysis techniques perform well in many scenes, nearly all of them unavoidably face the following problems: 1) they can attend only to a specific subset of visual cues; 2) the distribution of attention is discontinuous in time. For example, when a continuous image sequence is observed, the system cannot take multiple frames into account, so the saliency map must be re-analyzed independently at every moment, and both the continuity and the reliability of the system degrade substantially. Moreover, when the viewing angle or the position of an object changes, the prediction of the new saliency map is very likely to be offset from the previous frame, because there is no mechanism for tracking features. In addition, a series of visual attention behaviors, such as inhibition of return and gaze shifting, cannot be well realized in space-based analysis techniques.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by providing a feature-based dynamic visual attention region extraction method. The method defines saliency on the features themselves rather than on the spatial distribution differences of the features; it can eliminate the 'time slice' and sample continuously, so that data from different frames (times) jointly guide the saliency computation. This solves the problem that the saliency of each frame (time) must otherwise be processed independently, and thus achieves dynamic behavior.
The present invention is achieved by the following technical solution, comprising the following steps:
The first step: apply the independent component analysis method to a large number of natural images to perform sparse decomposition, obtaining a group of filter basis functions and a corresponding group of reconstruction basis functions; divide the input image into m × m RGB blocks and project them onto this basis to obtain the features of the image.
The second step: use the efficient coding principle, namely the principle that when a system codes efficiently its entropy is maximal, to measure an incremental coding length index for each feature.
The third step: according to these incremental coding length indices, compute the saliency of each block by redistributing the energy of the features, thereby finally obtaining the saliency map.
The first step is specifically as follows (a code sketch is given after this step):
1. Divide the training pictures into RGB color blocks of m × m pixels and vectorize each block. Sample natural pictures to obtain a large number of m × m RGB color blocks and use them as training samples. The value of m can be 8, 16, or 32; m is the side length of each RGB color block.
2. Train the basis functions (A, W) by the standard independent component analysis (ICA) method. The number of basis functions is m × m × 3 = 3m^2, i.e.

W = [w_1, w_2, \ldots, w_{3m^2}]

where w_i is the i-th filter basis function (A has the same size as W, 1 ≤ i ≤ 3m^2). A and W are the basis functions trained by the ICA method; their values can take any range, determined by the input.
3. For any picture X, divide it into n RGB blocks of m × m, forming the sampling matrix X = [x_1, x_2, \ldots, x_n], where x_k is the vectorized representation of the k-th image block (1 ≤ k ≤ n). Apply to x_k the linear transform

s_k = W x_k = [s_{k,1}, s_{k,2}, \ldots, s_{k,3m^2}]

where W is the trained filter basis. Then s_k is the vector of basis-function coefficients, i.e. the features of picture block x_k, and s_{k,i}, the coefficient of the i-th basis function, is the value of the i-th feature. Process all x_k in the same way to obtain the features of X, S = [s_1, s_2, \ldots, s_n]. n is the number of RGB blocks into which the input image X is divided; its value is determined by the size of X and the value of m.
After the first step finishes, 3m^2 features S have been constructed for the input image X; a sketch of this step follows, after which the second step is carried out.
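As an illustration of this first step, the following minimal sketch trains the basis pair (A, W) with FastICA and projects the blocks of an input image onto W. It assumes Python with NumPy and a recent scikit-learn, and every function name in it is illustrative rather than taken from the patent; FastICA's unmixing and mixing matrices stand in for the filter basis W and the reconstruction basis A.

```python
# Minimal sketch of the first step (assumed stack: NumPy + scikit-learn >= 1.1).
import numpy as np
from sklearn.decomposition import FastICA

def extract_patches(img, m):
    """Cut an H x W x 3 RGB image into non-overlapping m x m blocks and
    vectorize each block; returns the (n, 3*m*m) sampling matrix X."""
    H, Wd, _ = img.shape
    rows = [img[r:r + m, c:c + m, :].reshape(-1)
            for r in range(0, H - H % m, m)
            for c in range(0, Wd - Wd % m, m)]
    return np.asarray(rows, dtype=np.float64)

def train_basis(training_patches):
    """Standard ICA on vectorized natural-image blocks; returns (A, W)."""
    ica = FastICA(n_components=training_patches.shape[1],
                  whiten="unit-variance", max_iter=500)
    ica.fit(training_patches)
    return ica.mixing_, ica.components_   # reconstruction A, filter W

def image_features(img, W, m):
    """s_k = W x_k for every block; row k of S holds the 3m^2 features."""
    X = extract_patches(img, m)
    S = X @ W.T
    return X, S
```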
The second step is specifically as follows (a code sketch is given after it):
1. For each feature, compute the activity ratio p_i:

p_i = \frac{\sum_k s_{k,i}^2}{\sum_j \sum_k s_{k,j}^2} \qquad (2.1)

This quantity represents the average energy firing level of the feature.
2. Consider the variation of the entropy with the activity ratio p_i of feature i, i.e. the incremental coding length index of feature i. Let p = \{p_1, p_2, \ldots, p_{3m^2}\} be the probability distribution of a random variable. Suppose the feature activity distribution at a particular moment is p; activating feature i brings a small disturbance ε to p_i, so the new distribution \hat{p} becomes:

\hat{p}_j = \begin{cases} \dfrac{p_j + \epsilon}{1 + \epsilon}, & \text{if } j = i \\ \dfrac{p_j}{1 + \epsilon}, & \text{if } j \neq i \end{cases} \qquad (2.2)
Therefore, the incremental coding length of feature i is:

ICL(p_i) = \frac{\partial H(p)}{\partial p_i} = -H(p) - p_i - \log p_i - p_i \log p_i \qquad (2.3)
By the principle of predictive coding, the present invention links energy, features, and saliency. The incremental coding length (ICL) measures each feature's rate of change of the perceptual entropy. This index is used to guide the distribution of energy, so that the system as a whole realizes predictive coding: common information elicits as little response as possible from the system, while rare information usually triggers a strong response.
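As an illustration of this second step, a minimal sketch under the same assumptions as before (Python/NumPy, illustrative names); the ICL line implements formula (2.3) as reconstructed above, with a small epsilon guarding the logarithm:

```python
# Minimal sketch of the second step: activity ratios (2.1), ICL indices (2.3).
import numpy as np

def activity_ratio(S):
    """p_i = sum_k s_{k,i}^2 / sum_j sum_k s_{k,j}^2  -- formula (2.1)."""
    energy = (S ** 2).sum(axis=0)     # per-feature energy over all blocks
    return energy / energy.sum()

def incremental_coding_length(p, eps=1e-12):
    """ICL(p_i) = -H(p) - p_i - log p_i - p_i log p_i  -- formula (2.3)."""
    logp = np.log(p + eps)            # eps avoids log(0) for silent features
    H = -(p * logp).sum()             # entropy of the distribution p
    return -H - p - logp - p * logp
```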
The third step is specifically as follows (a code sketch is given after it):
1. According to the computed incremental coding length index of each feature, divide out the salient feature set SF:

SF = \{\, i \mid ICL(p_i) > 0 \,\} \qquad (3.1)
The partition \{SF, \overline{SF}\} uniquely determines the features that cause the entropy of the whole system to increase. This partition has an explicit mathematical meaning: a feature belongs to SF only when it is rare under the current feature distribution, that is to say, only when a new observation of the feature causes the entropy of the overall feature distribution p to increase.
2. According to the predictive coding principle, redistribute energy among the features. For a feature i in the salient feature set, assign the weight d_i (i ∈ SF):

d_i = \frac{ICL(p_i)}{\sum_{k \in SF} ICL(p_k)}, \quad \text{if } i \in SF \qquad (3.2)
For a non-salient feature, its weight is defined as d_k = 0 (k ∉ SF).
3. For picture block x_k, its saliency is defined as m_k:

m_k = \sum_{i \in SF} d_i\, w_i^{T} x_k \qquad (3.3)
4. Once the saliency of every picture block is available, generate the saliency map M of the whole picture through the reconstruction basis A:

M = \sum_{k \in SF} A_k m_k \qquad (3.4)

where A_k denotes the k-th column vector of the reconstruction basis A.
Formula (3.3) shows that the saliency of a picture block is not constant but changes over time. Moreover, because sampling in the method of the present invention is a continuous process and the feature weights change continuously as sampling accumulates, a change in the samples can naturally be interpreted as the influence of context on the attention weights of the features. A so-called 'salient feature' is salient only with respect to the feature distribution of the current context.
The beneficial effects of the invention are: (1) Because the filter basis is trained in advance, no basis functions need to be retrained when a new input image is processed, so processing is fast and efficient and can be done in real time. (2) Because saliency is analyzed on the features themselves rather than on their spatial distribution differences, the structural restriction of picture space is removed. In processing, continuous sampling eliminates the 'time slice', so data from different frames (times) jointly guide the saliency computation; this solves the problem that the saliency of each frame (time) must otherwise be processed independently, and achieves dynamic behavior.
Description of drawings
Fig. 1. Saliency maps of static pictures;
wherein (a), (d), (g) are the input pictures; (b), (e), (h) are the saliency maps generated by the present invention; (c), (f), (i) are the labeled eye movement data.
Fig. 2. Saliency maps of video (dynamic vision).
Embodiment
An embodiment of the invention is described in detail below with reference to the drawings. The embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation modes and concrete operating procedures are given, but the protection scope of the present invention is not limited to the following embodiment.
1. Feature construction
(1) The RGB color block size used is 8 × 8. A large number of natural pictures are sampled to obtain 120000 8 × 8 RGB color blocks, which serve as the training data for the basis functions.
(2) The ICA method is used to train the basis functions (A, W). Since 8 × 8 RGB color blocks are used as training samples, i.e. m = 8, the number of basis functions is 3 × 8^2 = 192.
(3) For an input color picture, say of size 800 × 640, it is divided into 8000 8 × 8 RGB color blocks, i.e. n = 8000, forming the sampling matrix X = [x_1, x_2, \ldots, x_8000], where x_k is the vectorized representation of the k-th image block. The linear transform s_k = W x_k = [s_{k,1}, s_{k,2}, \ldots, s_{k,192}] is applied, where W is the trained filter basis. Then s_k is the vector of basis-function coefficients, i.e. the features of picture block x_k, and s_{k,i}, the coefficient of the i-th basis function, is the i-th feature.
2. Measuring the incremental coding length (ICL) index
(1) For each feature, compute its activity ratio p_i according to formula (2.1).
(2) From the activity ratio of each feature, measure its incremental coding length index according to formula (2.3).
3. Generating the saliency map
(1) According to the incremental coding length index of each feature obtained in step 2, divide out the salient feature set SF using formula (3.1).
(2) Using formula (3.2), redistribute the energy of the features in the salient feature set.
(3) For each picture block x_k, compute its saliency m_k according to formula (3.3).
(4) With the saliency of each block of the input picture available, obtain the saliency map M of the input picture using formula (3.4).
Example 1: saliency maps of still pictures
8 × 8 RGB blocks are used to train the basis functions (A, W); their dimension is 192.
An input picture of size 800 × 640 is divided into 8000 8 × 8 RGB color blocks, i.e. n = 8000, forming the sampling matrix X = [x_1, x_2, \ldots, x_8000]. The basis-function coefficients, i.e. the features of X, are computed by the formula S = WX.
The activity ratio p_i of each feature is obtained by formula (2.1), and the incremental coding length index of each feature is measured from p according to formula (2.3).
The salient feature set SF is divided out from the incremental coding length indices using formula (3.1), and the energy of the features in the salient feature set is redistributed using formula (3.2). Then for each picture block x_k, its saliency m_k is computed according to formula (3.3), and finally the saliency map M of the input picture is generated using formula (3.4), as sketched below.
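Putting the earlier sketches together, Example 1 runs end to end roughly as follows; `load_natural_patches` and `load_rgb_image` are hypothetical helpers standing in for the 120000-block training set and the 800 × 640 input picture:

```python
# End-to-end sketch of Example 1 (illustrative names throughout).
m = 8
A, W = train_basis(load_natural_patches())     # 192 basis functions (3 * 8^2)
img = load_rgb_image("input.png")              # assumed shape (640, 800, 3)
X, S = image_features(img, W, m)               # n = 8000 blocks of 8 x 8
p = activity_ratio(S)                          # formula (2.1)
icl = incremental_coding_length(p)             # formula (2.3)
M = saliency_map(S, icl, grid_shape=(640 // m, 800 // m))
```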
When the image blocks of a still picture are sampled in sequence, the feature distribution characteristics of the picture can be estimated and the saliency map constructed. The generated saliency maps can be further compared with human eye movement data to verify the correctness of the model. In Fig. 1, (a), (d), (g) are the input pictures; (b), (e), (h) are the saliency maps generated by the present invention; (c), (f), (i) are the labeled eye movement data. In the embodiment, the eye movement data provided in 'BRUCE N, TSOTSOS J. Saliency Based on Information Maximization [J]. Advances in Neural Information Processing Systems, 2006, 18: 155-162' are used as a benchmark; the model is compared with traditional models, and the results show that the present invention obtains the best performance.
Example 2: saliency maps in video
Compared with earlier methods of the same class, a big advantage of the method of the present invention is that it is continuous: the incremental coding length is a continuously updated process. The variation of the distribution of feature activity ratios can take place over the spatial domain, but also over the time domain. If the temporal change is modeled as a Laplacian decay and p^t denotes the distribution at frame t, then p^t can be regarded as the accumulated sum of the preceding feature responses:
p^t = \frac{1}{Z} \sum_{\tau=0}^{t-1} \exp\!\left( \frac{\tau - t}{\lambda} \right) \hat{p}^{\tau}

where λ is the half-life and Z = \int \hat{p}^t(x)\, dx is the normalization function.
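A minimal sketch of this temporal accumulation, assuming the per-frame activity distributions are kept in a list (illustrative names):

```python
# Minimal sketch of the temporal update: the distribution at frame t is the
# exponentially decayed, renormalized sum of the preceding distributions.
import numpy as np

def accumulated_activity(p_history, lam):
    """p^t = (1/Z) * sum_{tau=0}^{t-1} exp((tau - t)/lam) * p_hat^tau."""
    t = len(p_history)
    decay = np.exp((np.arange(t) - t) / lam)   # half-life lam
    p = np.sum([w * ph for w, ph in zip(decay, p_history)], axis=0)
    return p / p.sum()                         # normalization Z
```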
When visual attention extraction is performed on video, the problems of target motion and of viewpoint motion are usually encountered. Under the feature-based attention model framework, however, these problems are easily solved, because the features always move together with the positions of the objects in the visual field.
The signal-to-noise ratio (SNR) of the analyzed image is defined as follows (a sketch is given at the end of this example):

SNR(t) = \frac{\sum_{i \in F} m_i^t}{\sum_{j \notin F} m_j^t}
where F is the manually labeled 'foreground'. After 250 frames are manually labeled, the saliency of each frame is computed; except for the feature activity ratio p, the procedure is identical to generating the saliency map of a still picture. The generated saliency maps are then compared with the manual labels and the SNR values are analyzed. Fig. 2 shows the result: the first row contains screenshots of the video, the second row shows the SNR of the present invention, and the last row shows the SNR of the Itti model. As can be seen from the figure, the average SNR of the present invention is 0.4803, far better than the 0.1680 of the mainstream Itti model.
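The SNR analysis can be sketched in the same style, assuming NumPy arrays and a boolean foreground mask defined on the same block grid as the saliency values (illustrative names):

```python
# Minimal sketch of the SNR analysis: saliency mass inside the manually
# labeled foreground F divided by the mass outside it.
def snr(saliency, foreground):
    """SNR(t) = sum_{i in F} m_i^t / sum_{j not in F} m_j^t."""
    return saliency[foreground].sum() / saliency[~foreground].sum()
```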

Claims (1)

1. A feature-based dynamic visual attention region extraction method, characterized in that it comprises the following steps:
The first step: apply the independent component analysis method to a large number of natural images to perform sparse decomposition, obtaining a group of filter basis functions and a corresponding group of reconstruction basis functions; divide the input image into m × m RGB blocks and project them onto the group of filter basis functions to obtain the features of the image, specifically as follows:
1. divide the training pictures into RGB color blocks of m × m pixels and vectorize each block; sample natural pictures to obtain a large number of m × m RGB color blocks and use them as training samples; the value of m is 8, 16, or 32, and m is the side length of each RGB color block;
2. train the basis functions (A, W) by the standard independent component analysis method; the number of basis functions is m × m × 3 = 3m^2, i.e.

W = [w_1, w_2, \ldots, w_{3m^2}]

where w_i is the i-th filter basis function, the number and size of the basis functions A are the same as those of W, 1 ≤ i ≤ 3m^2, and A, W are the basis functions trained by the ICA method;
3. for any picture X, divide it into n RGB blocks of m × m to form the sampling matrix X = [x_1, x_2, \ldots, x_n], where x_k is the vectorized representation of the k-th image block, 1 ≤ k ≤ n; apply to x_k the linear transform

s_k = W x_k = [s_{k,1}, s_{k,2}, \ldots, s_{k,3m^2}]

where W is the trained filter basis; then s_k is the vector of basis-function coefficients, i.e. the features of picture block x_k, and s_{k,i}, the coefficient of the i-th basis function, is the value of the i-th feature; process all x_k in the same way to obtain the features of X, S = [s_1, s_2, \ldots, s_n]; n is the number of RGB blocks into which the input image X is divided, and its value is determined by the size of X and the value of m;
The second step: measure the incremental coding length index for each feature, specifically as follows:
1. for each feature i, calculate its activity ratio p_i:

p_i = \frac{\sum_k s_{k,i}^2}{\sum_j \sum_k s_{k,j}^2}

and let p = \{p_1, p_2, \ldots, p_{3m^2}\}; then p is the probability density distribution of a random variable, and its entropy is H(p);
2. calculate the incremental coding length ICL(p_i) of the i-th feature:

ICL(p_i) = \frac{\partial H(p)}{\partial p_i} = -H(p) - p_i - \log p_i - p_i \log p_i ;
The third step: according to these incremental coding length indices, compute the saliency of each block by redistributing energy among the features, finally obtaining the saliency map, specifically as follows:
1. according to the obtained incremental coding length index of each feature, divide out the salient feature set SF:

SF = \{\, i \mid ICL(p_i) > 0 \,\}
2. according to the predictive coding principle, redistribute energy among the features; for a feature i in the salient feature set, assign the weight d_i, i ∈ SF:

d_i = \frac{ICL(p_i)}{\sum_{k \in SF} ICL(p_k)}, \quad \text{if } i \in SF
and for a non-salient feature, define its weight d_k = 0, k ∉ SF;
3. then for picture block x_k, its saliency is defined as m_k:

m_k = \sum_{i \in SF} d_i\, w_i^{T} x_k

where d_i is the weight assigned to the i-th feature in sub-step 2;
4. once the saliency of every picture block is available, generate the saliency map M of the whole picture through the reconstruction basis A:

M = \sum_{k \in SF} A_k m_k

where A_k denotes the k-th column vector of the reconstruction basis A.
CN2009100466886A 2009-02-26 2009-02-26 Dynamic visual attention region extraction method based on features Expired - Fee Related CN101493890B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100466886A CN101493890B (en) 2009-02-26 2009-02-26 Dynamic visual attention region extraction method based on features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009100466886A CN101493890B (en) 2009-02-26 2009-02-26 Dynamic visual attention region extraction method based on features

Publications (2)

Publication Number Publication Date
CN101493890A CN101493890A (en) 2009-07-29
CN101493890B true CN101493890B (en) 2011-05-11

Family

ID=40924482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100466886A Expired - Fee Related CN101493890B (en) 2009-02-26 2009-02-26 Dynamic visual attention region extraction method based on features

Country Status (1)

Country Link
CN (1) CN101493890B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101493890B (en) * 2009-02-26 2011-05-11 上海交通大学 Dynamic visual attention region extraction method based on features
JP5785955B2 (en) * 2010-01-15 2015-09-30 トムソン ライセンシングThomson Licensing Video coding with compressed sensing
CN101840518A (en) * 2010-04-02 2010-09-22 中国科学院自动化研究所 Biological vision mechanism-based object training and identifying method
CN106454371B (en) 2010-04-13 2020-03-20 Ge视频压缩有限责任公司 Decoder, array reconstruction method, encoder, encoding method, and storage medium
CN106067984B (en) 2010-04-13 2020-03-03 Ge视频压缩有限责任公司 Cross-plane prediction
TWI678916B (en) 2010-04-13 2019-12-01 美商Ge影像壓縮有限公司 Sample region merging
ES2904650T3 (en) 2010-04-13 2022-04-05 Ge Video Compression Llc Video encoding using multitree image subdivisions
CN101866484B (en) * 2010-06-08 2012-07-04 华中科技大学 Method for computing significance degree of pixels in image
TWI478099B (en) * 2011-07-27 2015-03-21 Univ Nat Taiwan Learning-based visual attention prediction system and mathod thereof
CN102568016B (en) * 2012-01-03 2013-12-25 西安电子科技大学 Compressive sensing image target reconstruction method based on visual attention
CN104778704B (en) * 2015-04-20 2017-07-21 北京航空航天大学 Image attention method for detecting area based on random pan figure sparse signal reconfiguring
CN105426399A (en) * 2015-10-29 2016-03-23 天津大学 Eye movement based interactive image retrieval method for extracting image area of interest


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101493890A (en) * 2009-02-26 2009-07-29 上海交通大学 Dynamic vision caution region extracting method based on characteristic

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LAURENT I, CHRISTOF K, ERNST N. A model of saliency-based visual attention for rapid scene analysis [J]. IEEE Transactions on PAMI, 1998, 20(11): 1254-1259. *

Also Published As

Publication number Publication date
CN101493890A (en) 2009-07-29


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110511

Termination date: 20140226