CN104992183A - Method for automatic detection of a salient object in a natural scene - Google Patents

Method for automatic detection of a salient object in a natural scene

Info

Publication number
CN104992183A
CN104992183A (application CN201510377186.7A)
Authority
CN
China
Prior art keywords
pixel
target
fixation
pixels
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510377186.7A
Other languages
Chinese (zh)
Other versions
CN104992183B (en)
Inventor
潘晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Jiliang University
Original Assignee
China Jiliang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Jiliang University filed Critical China Jiliang University
Priority to CN201510377186.7A priority Critical patent/CN104992183B/en
Publication of CN104992183A publication Critical patent/CN104992183A/en
Application granted granted Critical
Publication of CN104992183B publication Critical patent/CN104992183B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Abstract

The invention discloses a method for the automatic detection of a salient object in a natural scene. The method comprises the following steps: 1) perform saliency detection on a target image to obtain a pixel saliency map; 2) sort the salient points in the pixel saliency map by saliency; 3) select the first N salient points as fixation points to form a fixation area; 4) randomly sample pixels inside the fixation area and an equal number of pixels outside it, taking the pixels inside the fixation area as positive samples and the pixels outside it as negative samples; 5) using a support vector machine training strategy, classify all pixels of the target image and take the pixel region classified as positive as the first detection result. Steps 3) to 5) are repeated to obtain further detection results, and once the detection results are stable the region is recorded; steps 2) to 5) are then repeated until no further detections are found in the image. By ordering fixation points and using a support vector machine model to simulate human vision, the invention enables a machine to detect salient objects in a target scene automatically.

Description

Method for automatic detection of salient objects in natural scenes
Technical field
The present invention relates to the technical field of human vision simulation, and in particular to a method for the automatic detection of salient objects in natural scenes.
Background technology
With the development of information technology, computer vision has been widely applied to fields such as low-level feature detection and description, pattern recognition, artificial intelligence reasoning, and machine learning. However, traditional computer vision methods are usually task-driven: many conditions must be constrained and a corresponding algorithm designed for the specific task, so the methods lack generality. They must also cope with high-dimensional nonlinear feature spaces, very large data volumes, and real-time processing requirements, which poses great challenges for research and application.
The human visual system works efficiently and reliably in a wide range of environments. Its advantages include: attention mechanisms, saliency detection, and the associated selectivity and purposefulness of visual processing; the ability to exploit prior knowledge from low-level vision onwards, so that bottom-up data-driven processing and top-down knowledge-guided processing cooperate; and contextual information that plays an important role at every level of visual processing, allowing information from multiple modalities in the environment to be used jointly. However, since the mechanisms of human visual perception are still not fully understood, building machine vision with human visual characteristics remains difficult. A machine vision system that simulates human vision would have an important impact on every practical application area of computer vision.
Summary of the invention
In view of this, the technical problem to be solved by the present invention is to provide a method for the automatic detection of salient objects in natural scenes that simulates human vision: by imitating the behaviour of active human vision and rapidly fixating on the target scene, the machine detects salient objects in the target scene automatically.
The technical solution of the present invention is a method for the automatic detection of salient objects in natural scenes, comprising the following steps:
1) perform saliency detection on the target image by the spectral residual method to obtain a corresponding pixel saliency map, the pixel positions of which are consistent with those of the target image;
2) sort the salient points in the pixel saliency map by saliency;
3) select the top N salient points as fixation points and take the minimum rectangle enclosing these fixation points as the fixation area;
4) randomly sample pixels inside the fixation area and randomly sample an equal number of pixels outside it; the sampled pixels inside the fixation area serve as positive samples and those outside as negative samples;
5) using a support vector machine (SVM) ensemble training strategy, train several binary-classification SVM models, classify all pixels of the target image with these models, and take the pixel region assigned to the positive class after voting as the first detection result.
Then select the top N+M salient points as fixation points, form the fixation area as in step 3), and obtain a corresponding second detection result through steps 4) and 5).
Compare the degree of overlap between the first and second detection results: a large overlap indicates strong visual perception of the target; a small overlap indicates that sufficient visual perception of the target has not yet formed, in which case the above process is repeated until sufficient perception strength is reached. The final detection result is the superposition of all detection results obtained in this process.
After the final detection result is obtained, the corresponding region is cleared in both the target image and the pixel saliency map, the salient points in the updated saliency map are re-sorted by saliency, and steps 3), 4) and 5) are repeated to obtain new detection results, until all targets in the target image have been detected.
Compared with the prior art, the method of the present invention has the following advantages: saliency detection by the spectral residual method produces a pixel saliency map quickly; sorting pixels by saliency gives a coarse localisation of the highly salient fixation area; sampling a small number of pixels inside and outside this area yields positive and negative training data for an SVM (support vector machine) model, and classifying the pixels with this model then gives a more accurate, highly salient region as the first detection result. On the basis of the first detection result, the highly salient fixation area can be enlarged appropriately, a new detection result formed through SVM learning and classification, and compared with the first result to judge whether the fixated target region is stable. Following the fixation process of human vision, the present invention simulates human vision through fixation-point ordering and SVM models, and thereby achieves automatic machine detection of salient objects in a target scene.
As an improvement, the spectral residual method uses a hypercomplex (quaternion) Fourier transform: the red, green and blue components of a colour image are taken as the three imaginary parts of a quaternion and transformed together; only the amplitude-spectrum residual and the phase-spectrum information are retained, and the pixel saliency map is obtained by the inverse Fourier transform. This design addresses the limitation that the prior art can only process grayscale images, and adapts the original spectral residual method to colour images.
As an improvement, the random sampling is restricted to pixels whose gradient magnitude is greater than the average gradient of their region. This is because the information entropy produced by the higher-gradient pixels of an image exceeds the entropy of the original image as a whole, indicating that high-gradient pixels are representative of the fixated target region; sampling them helps to remove redundant image information.
Brief description of the drawings
Fig. 1 is a flow chart of the method for automatic detection of salient objects in natural scenes according to the present invention.
Detailed description of the embodiments
The invention is further described below with reference to specific embodiments, but the invention is not restricted to these embodiments.
The present invention covers any substitution, modification, or equivalent method or scheme made within its spirit and scope. To give the public a thorough understanding of the present invention, specific details are described in the following preferred embodiments; a person skilled in the art can, however, fully understand the present invention without these details. In addition, for clarity of illustration, the accompanying drawings are not drawn strictly to scale, which is noted here.
As shown in Fig. 1, the method for the automatic detection of salient objects in natural scenes according to the present invention comprises the following steps:
1) perform saliency detection on the target image by the spectral residual method to obtain a corresponding pixel saliency map, the pixel positions of which are consistent with those of the target image;
2) sort the salient points in the pixel saliency map by saliency;
3) select the top N salient points as fixation points and take the minimum rectangle enclosing these fixation points as the fixation area;
4) randomly sample pixels inside the fixation area and randomly sample an equal number of pixels outside it; the sampled pixels inside the fixation area serve as positive samples and those outside as negative samples;
5) using a support vector machine (SVM) ensemble training strategy, train several binary-classification SVM models, classify all pixels of the target image with these models, and take the pixel region assigned to the positive class after voting as the first detection result.
Then select the top N+M salient points as fixation points, form the fixation area as in step 3), and obtain a corresponding second detection result through steps 4) and 5).
Compare the degree of overlap between the first and second detection results: a large overlap indicates strong visual perception of the target; a small overlap indicates that sufficient visual perception of the target has not yet formed, in which case the above process is repeated until sufficient perception strength is reached. The final detection result is the superposition of all detection results obtained in this process.
After the final detection result is obtained, the corresponding region is cleared in both the target image and the pixel saliency map, the salient points in the updated saliency map are re-sorted by saliency, and steps 3), 4) and 5) are repeated to obtain new detection results, until all targets in the target image have been detected.
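The iterative procedure of steps 2) to 5) above can be summarised in a short sketch. The code below is a minimal illustration under stated assumptions, not the patented implementation: the helper callables `saliency_fn`, `sample_fn` and `classify_fn` are hypothetical stand-ins for the spectral residual, sampling and SVM pieces detailed later in this description, and the values of N, M and the overlap threshold are arbitrary choices.

```python
import numpy as np

def detect_salient_objects(image, saliency_fn, sample_fn, classify_fn,
                           n_points=200, m_step=50, overlap_thresh=0.9):
    """Illustrative loop: grow the fixation area until successive detections
    overlap strongly, record the target, clear its region and repeat."""
    saliency = saliency_fn(image)                       # step 1): pixel saliency map
    results = np.zeros(saliency.shape, dtype=bool)
    while saliency.max() > 0:
        order = np.argsort(saliency, axis=None)[::-1]   # step 2): sort by saliency
        prev, union, n = None, np.zeros(saliency.shape, dtype=bool), n_points
        while n < order.size:
            ys, xs = np.unravel_index(order[:n], saliency.shape)
            box = (ys.min(), ys.max(), xs.min(), xs.max())   # step 3): fixation area
            pos, neg = sample_fn(image, box)            # step 4): positive/negative samples
            mask = classify_fn(image, pos, neg)         # step 5): SVM pixel classification
            union |= mask                               # final result: superposition of all results
            if prev is not None:
                overlap = (mask & prev).sum() / max((mask | prev).sum(), 1)
                if overlap > overlap_thresh:            # detection has stabilised
                    break
            prev, n = mask, n + m_step                  # enlarge to the top N+M points
        cleared = union & (saliency > 0)
        if not cleared.any():                           # nothing left to clear: stop
            break
        results |= union
        saliency[union] = 0                             # clear detected target, then repeat
    return results
```

Concrete sketches of the saliency, sampling and classification pieces are given in the corresponding sections below.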
A natural scene here is equivalent to the scene fixated by human vision: regardless of the size of the scene, the extent of its image on the retina is fixed, and the same holds for a natural scene presented to a machine vision system.
Saliency detection of the target image by the spectral residual method can be implemented as follows. For a given image I(x) to be perceived (where x denotes the pixel coordinate vector), a two-dimensional discrete Fourier transform F[I(x)] is first applied, transforming the image from the spatial domain to the frequency domain and giving the amplitude A(f) and phase P(f):
A(f) = |F[I(x)]|   (1)
P(f) = φ(F[I(x)])   (2)
The amplitude is then log-transformed to obtain the log spectrum L(f):
L(f) = log(A(f))   (3)
In these formulas, F denotes the two-dimensional discrete Fourier transform, |·| the amplitude operation, and φ(·) the phase operation. Because the log spectrum is approximately locally linear, it is smoothed with a local averaging filter h_n(f) to obtain the general shape of the log spectrum:
V(f) = L(f) * h_n(f)   (4)
where h_n(f) is an n × n matrix (n = 3 in the experiments of this embodiment), defined as a mean filter whose entries are all equal:
h_n(f) = (1/n^2) · 1_(n×n)   (5)
The spectral residual R(f) then describes the abrupt regions of the image:
R(f) = L(f) - V(f)   (6)
The saliency image is obtained in the spatial domain by the inverse Fourier transform:
S(x) = |F^(-1)[exp{R(f) + jP(f)}]|^2   (7)
Each value of the saliency map represents the saliency of the corresponding position. Considering the local grouping effect of human vision, and in order to eliminate a few isolated salient points and obtain a better visual result, S(x) can be smoothed once more with the averaging filter after it is obtained, giving the final saliency map Z(x):
Z(x) = S(x) * h_n(f)   (8)
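A compact NumPy sketch of equations (1)-(8) is given below for a single-channel (grayscale) image; the n × n mean filter plays the role of h_n(f), with n = 3 as in this embodiment. It is a minimal illustration of the spectral residual computation, not the exact code of the invention, and the small constant added before the logarithm is an implementation convenience.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_residual_saliency(gray, n=3):
    """Spectral residual saliency of a grayscale image, following eqs. (1)-(8)."""
    F = np.fft.fft2(gray.astype(float))
    A = np.abs(F)                                        # (1) amplitude spectrum
    P = np.angle(F)                                      # (2) phase spectrum
    L = np.log(A + 1e-8)                                 # (3) log amplitude spectrum
    V = uniform_filter(L, size=n)                        # (4)-(5) local average with h_n
    R = L - V                                            # (6) spectral residual
    S = np.abs(np.fft.ifft2(np.exp(R + 1j * P))) ** 2    # (7) back to the spatial domain
    Z = uniform_filter(S, size=n)                        # (8) smooth isolated salient points
    return Z / Z.max()
```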
The training data, classification model and results involved in Fig. 1 correspond to the implementation of the support vector machine (SVM) training strategy, which proceeds as follows.
Let the training set of l samples be {(x_k, y_k)}, k = 1, ..., l, where x_k is the input (feature) vector and y_k ∈ {-1, +1} is the positive/negative class label. The SVM first learns a model from the training set, with the goal of finding the optimal separating hyperplane in feature space so that test data are classified as correctly as possible. In the general case where the training set is not linearly separable, a Gaussian radial basis kernel function is first selected:
K(x, x_i) = exp{-q‖x - x_i‖^2}   (9)
which maps the training data x_i into a high-dimensional linear feature space in which the optimal separating hyperplane is constructed; q is the radial basis kernel parameter. The discriminant function of the classifier is then
y(x) = sign[ Σ_{x_i ∈ SV} α_i* y_i K(x_i, x) + b* ]   (10)
In training, with the kernel parameter q and similar settings fixed, a quadratic programming solver is used to obtain b*, the α_i* and the support vectors (SV) in formula (10), which together constitute the trained SVM model. In testing, an unknown data point x is substituted into formula (10) to obtain its predicted class.
The SVM uses the kernel trick to avoid the curse of dimensionality faced by traditional learning algorithms. Based on the structural risk minimisation principle, its classification performance is determined by only a small number of support vectors (SV), which gives it good generalisation ability. In practical problems it is therefore advantageous to select a small number of samples using prior knowledge and build the classifier through SVM learning. This overcomes the drawback of traditional learning algorithms based on empirical risk minimisation, whose performance is only guaranteed in theory as the number of samples tends to infinity; by solving a quadratic programming problem it also avoids the empirical nature of network construction and the tendency to fall into local minima of traditional neural network algorithms, and it is suitable for detecting complex image targets that are difficult to describe quantitatively.
In practical SVM applications, some key parameters must be tuned for different types of image to obtain good classification performance. To reduce the adverse effect of SVM parameter settings, this method collects positive and negative samples several times to form slightly different training sets, and sets up several slightly different SVM models with differentiated parameters. Each slightly different SVM model is trained and tested in parallel on one of the slightly perturbed training sets, and the test results are finally combined by voting. This strategy increases the robustness of the classification model and greatly reduces the adverse effect of improper SVM parameter settings.
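The sketch below illustrates this ensemble-and-vote strategy with scikit-learn, assuming the positive and negative pixel samples have already been drawn as NumPy feature arrays. The number of models, the perturbation of the kernel parameter q (`gamma` in scikit-learn) and the value of C are illustrative assumptions, not values fixed by the invention.

```python
import numpy as np
from sklearn.svm import SVC

def train_svm_ensemble(pos_feats, neg_feats, n_models=5, seed=None):
    """Train several RBF-kernel SVMs, eqs. (9)-(10), on slightly perturbed
    training sets with slightly perturbed kernel parameters."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        # resample positive/negative sets so each model sees a slightly different training set
        pos = pos_feats[rng.integers(0, len(pos_feats), len(pos_feats))]
        neg = neg_feats[rng.integers(0, len(neg_feats), len(neg_feats))]
        X = np.vstack([pos, neg])
        y = np.hstack([np.ones(len(pos)), -np.ones(len(neg))])
        gamma = max(0.5 * (1.0 + 0.1 * rng.standard_normal()), 1e-3)  # jittered q
        models.append(SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y))
    return models

def classify_pixels(models, feats):
    """Vote-based classification of all pixel feature vectors."""
    votes = np.sum([m.predict(feats) for m in models], axis=0)
    return votes > 0          # True where the majority assigns the positive class
```

The boolean mask returned by `classify_pixels`, reshaped to the image size, plays the role of the detection result in the earlier loop.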
The spectral residual method can also use a hypercomplex (quaternion) Fourier transform: the red, green and blue components of a colour image are taken as the three imaginary parts of a quaternion and transformed together; only the amplitude-spectrum residual and the phase-spectrum information are retained, and the pixel saliency map is obtained by the inverse Fourier transform. This design addresses the limitation that the prior art can only process black-and-white images, and adapts the traditional spectral residual method to colour images.
A quaternion consists of four parts and is written as
q = a + bi + cj + dk   (11)
where a, b, c, d are real numbers and i, j, k are imaginary units with the properties i^2 = j^2 = k^2 = ijk = -1, ij = -ji = k, ki = -ik = j, jk = -kj = i.
The RGB model of a colour image can be described as a pure quaternion with no real part:
f = R(m, n)i + G(m, n)j + B(m, n)k   (12)
where R(m, n), G(m, n) and B(m, n) are the R, G and B components of the image respectively. Setting q = f gives a = 0, b = R(m, n), c = G(m, n), d = B(m, n). The quaternion Fourier transform of the constructed colour vector can then be computed according to formula (13):
F_R(v, u) = (real(fft2(a)) + μ·imag(fft2(a)))
          + i·(real(fft2(b)) + μ·imag(fft2(b)))
          + j·(real(fft2(c)) + μ·imag(fft2(c)))
          + k·(real(fft2(d)) + μ·imag(fft2(d)))   (13)
where fft2(·) denotes the ordinary two-dimensional Fourier transform, real(·) takes the real part, imag(·) takes the imaginary part, and μ is a unit pure-imaginary quaternion (unit imaginary vector). Only the amplitude-spectrum residual R(f) and the phase spectrum P(f) of F_R(v, u) need to be retained:
R(f) = log|F_R(v, u)| - h * log|F_R(v, u)|   (14)
P(f) = φ(F_R(v, u))   (15)
where h is a local averaging operator. Let
A = e^(R(f) + jP(f))   (16)
The quaternion inverse Fourier transform can then be obtained by combining ordinary two-dimensional inverse fast Fourier transforms (ifft2), as in formula (17):
F^(-R)(v, u) = (real(ifft2(A)) + μ·imag(ifft2(A)))
             + i·(real(ifft2(B)) + μ·imag(ifft2(B)))
             + j·(real(ifft2(C)) + μ·imag(ifft2(C)))
             + k·(real(ifft2(D)) + μ·imag(ifft2(D)))   (17)
where B = fft2(b), C = fft2(c), D = fft2(d).
real(F^(-R)(v, u)) is the desired saliency map. Because the colour pixels are kept whole throughout the processing, the colour distortion caused by transforming or exchanging vector components is avoided.
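The sketch below is a simplified colour variant inspired by, but not identical to, the hypercomplex transform of formulas (11)-(17): instead of a true quaternion FFT, the three channel spectra share a single amplitude residual computed from their joint magnitude, while each channel keeps its own phase. It is given only to make the colour extension concrete; treating the joint magnitude as a stand-in for the quaternion modulus is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def color_spectral_residual(rgb, n=3):
    """Simplified colour spectral residual: a shared amplitude residual over the
    joint channel magnitude with per-channel phases, approximating the hypercomplex idea."""
    F = [np.fft.fft2(rgb[..., c].astype(float)) for c in range(3)]
    A = np.sqrt(sum(np.abs(Fc) ** 2 for Fc in F)) + 1e-8   # joint magnitude (approximation)
    R = np.log(A) - uniform_filter(np.log(A), size=n)      # amplitude-spectrum residual, cf. (14)
    S = np.zeros(A.shape)
    for Fc in F:
        # keep each channel's own phase, combine with the shared residual
        S += np.abs(np.fft.ifft2(np.exp(R + 1j * np.angle(Fc)))) ** 2
    return uniform_filter(S / S.max(), size=n)
```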
The random sampling is restricted to pixels whose gradient magnitude is greater than the average gradient of their region. Studies of image information entropy show that the entropy produced by the subset of pixels with higher gradients exceeds the entropy formed by all pixels of the source image, a consequence of image information redundancy. To obtain the most informative and representative pixel samples within the fixation area while avoiding heavy computation, an effective strategy is therefore to randomly sample only those pixels inside the fixation area whose gradient magnitude exceeds the average gradient of that area, while outside the fixation area plain random sampling over all pixels is still used.
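A short sketch of this differentiated sampling rule follows: inside the fixation rectangle only pixels whose gradient magnitude exceeds the regional mean are eligible, while outside the rectangle plain random sampling over all pixels is used. Using the raw colour values as the per-pixel feature and the sample count of 500 are illustrative assumptions, not requirements of the invention.

```python
import numpy as np

def sample_pixels(image, box, n_samples=500, seed=None):
    """Positive samples from high-gradient pixels inside the fixation box,
    an equal number of negative samples from anywhere outside it."""
    rng = np.random.default_rng(seed)
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    gy, gx = np.gradient(gray)
    grad = np.hypot(gy, gx)                              # gradient magnitude per pixel
    y0, y1, x0, x1 = box
    inside = np.zeros(gray.shape, dtype=bool)
    inside[y0:y1 + 1, x0:x1 + 1] = True
    eligible = inside & (grad > grad[inside].mean())     # above-average gradient inside the box
    pos_idx = np.flatnonzero(eligible)
    neg_idx = np.flatnonzero(~inside)
    pos = rng.choice(pos_idx, size=min(n_samples, pos_idx.size), replace=False)
    neg = rng.choice(neg_idx, size=min(n_samples, neg_idx.size), replace=False)
    flat = image.reshape(-1, image.shape[2]) if image.ndim == 3 else image.reshape(-1, 1)
    return flat[pos].astype(float), flat[neg].astype(float)
```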
Only preferred embodiments of the present invention have been described above, and they should not be construed as limiting the claims. The present invention is not limited to the above embodiments, and its specific structure may vary. In short, all changes made within the protection scope of the independent claims of the present invention fall within the protection scope of the present invention.

Claims (3)

1. A method for the automatic detection of salient objects in natural scenes, characterised by comprising the following steps:
1) performing saliency detection on a target image by the spectral residual method to obtain a corresponding pixel saliency map, the pixel positions of which are consistent with those of the target image;
2) sorting the salient points in the pixel saliency map by saliency;
3) selecting the top N salient points as fixation points and taking the minimum rectangle enclosing these fixation points as the fixation area;
4) randomly sampling pixels inside the fixation area and randomly sampling an equal number of pixels outside it, the sampled pixels inside the fixation area serving as positive samples and those outside as negative samples;
5) using a support vector machine training strategy, training a binary-classification SVM model, classifying all pixels of the target image with this model, and taking the pixel region classified as positive as the first detection result;
selecting the top N+M salient points as fixation points, forming the fixation area as in step 3), and obtaining a corresponding second detection result through steps 4) and 5);
comparing the degree of overlap between the first and second detection results, a large overlap indicating strong visual perception of the target and a small overlap indicating that sufficient visual perception of the target has not yet formed, in which case the above process is repeated until sufficient perception strength is reached, the final detection result being the superposition of all detection results obtained in this process;
after the final detection result is obtained, clearing the corresponding region in both the target image and the pixel saliency map, re-sorting the salient points in the updated saliency map by saliency, and repeating steps 3), 4) and 5) to obtain new detection results, until all targets in the target image have been detected.
2. The method for the automatic detection of salient objects in natural scenes according to claim 1, characterised in that the spectral residual method uses a hypercomplex (quaternion) Fourier transform: the red, green and blue components of the colour image are taken as the three imaginary parts of a quaternion and transformed together; only the amplitude-spectrum residual and the phase-spectrum information are retained, and the pixel saliency map is obtained by the inverse Fourier transform.
3. The method for the automatic detection of salient objects in natural scenes according to claim 1 or 2, characterised in that the random sampling is restricted to pixels whose gradient magnitude is greater than the average gradient of their region.
CN201510377186.7A 2015-06-25 2015-06-25 Method for automatic detection of salient objects in natural scenes Active CN104992183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510377186.7A CN104992183B (en) 2015-06-25 2015-06-25 Method for automatic detection of salient objects in natural scenes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510377186.7A CN104992183B (en) 2015-06-25 2015-06-25 Method for automatic detection of salient objects in natural scenes

Publications (2)

Publication Number Publication Date
CN104992183A true CN104992183A (en) 2015-10-21
CN104992183B CN104992183B (en) 2018-08-28

Family

ID=54303996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510377186.7A Active CN104992183B (en) 2015-06-25 2015-06-25 Method for automatic detection of salient objects in natural scenes

Country Status (1)

Country Link
CN (1) CN104992183B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956592A (en) * 2016-05-10 2016-09-21 西北工业大学 Aircraft target detection method based on image significance and SVM
CN106815604A (en) * 2017-01-16 2017-06-09 大连理工大学 Method for viewing points detecting based on fusion of multi-layer information
CN107992875A (en) * 2017-12-25 2018-05-04 北京航空航天大学 A kind of well-marked target detection method based on image bandpass filtering
CN108897786A (en) * 2018-06-08 2018-11-27 Oppo广东移动通信有限公司 Recommended method, device, storage medium and the mobile terminal of application program
CN109190473A (en) * 2018-07-29 2019-01-11 国网上海市电力公司 The application of a kind of " machine vision understanding " in remote monitoriong of electric power
CN110415240A (en) * 2019-08-01 2019-11-05 国信优易数据有限公司 Sample image generation method and device, circuit board defect detection method and device
CN111481166A (en) * 2017-05-04 2020-08-04 深圳硅基智能科技有限公司 Automatic identification system based on eye ground screening

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101980248A (en) * 2010-11-09 2011-02-23 西安电子科技大学 Improved visual attention model-based method of natural scene object detection
US7940985B2 (en) * 2007-06-06 2011-05-10 Microsoft Corporation Salient object detection
CN102945378A (en) * 2012-10-23 2013-02-27 西北工业大学 Method for detecting potential target regions of remote sensing image on basis of monitoring method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7940985B2 (en) * 2007-06-06 2011-05-10 Microsoft Corporation Salient object detection
CN101980248A (en) * 2010-11-09 2011-02-23 西安电子科技大学 Improved visual attention model-based method of natural scene object detection
CN102945378A (en) * 2012-10-23 2013-02-27 西北工业大学 Method for detecting potential target regions of remote sensing image on basis of monitoring method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Xiaodi Hou: "Saliency Detection: A Spectral Residual Approach", Computer Vision and Pattern Recognition (CVPR), 2007 IEEE Conference on *
侯庆岑 (Hou Qingcen): "Research on automatic image segmentation technology simulating human vision" (模拟人类视觉的自动图像分割技术研究), China Master's Theses Full-text Database, Information Science and Technology *
潘晨 (Pan Chen) et al.: "Color image segmentation based on spatial and temporal differentiated sampling" (基于空间和时间差别采样的色彩图像分割), Computer Engineering (计算机工程) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956592A (en) * 2016-05-10 2016-09-21 西北工业大学 Aircraft target detection method based on image significance and SVM
CN105956592B (en) * 2016-05-10 2019-03-29 西北工业大学 A kind of Aircraft Targets detection method based on saliency and SVM
CN106815604A (en) * 2017-01-16 2017-06-09 大连理工大学 Method for viewing points detecting based on fusion of multi-layer information
CN106815604B (en) * 2017-01-16 2019-09-27 大连理工大学 Method for viewing points detecting based on fusion of multi-layer information
CN111481166A (en) * 2017-05-04 2020-08-04 深圳硅基智能科技有限公司 Automatic identification system based on eye ground screening
CN111481166B (en) * 2017-05-04 2021-11-26 深圳硅基智能科技有限公司 Automatic identification system based on eye ground screening
CN107992875A (en) * 2017-12-25 2018-05-04 北京航空航天大学 A kind of well-marked target detection method based on image bandpass filtering
CN107992875B (en) * 2017-12-25 2018-10-26 北京航空航天大学 A kind of well-marked target detection method based on image bandpass filtering
CN108897786A (en) * 2018-06-08 2018-11-27 Oppo广东移动通信有限公司 Recommended method, device, storage medium and the mobile terminal of application program
CN108897786B (en) * 2018-06-08 2021-06-08 Oppo广东移动通信有限公司 Recommendation method and device of application program, storage medium and mobile terminal
CN109190473A (en) * 2018-07-29 2019-01-11 国网上海市电力公司 The application of a kind of " machine vision understanding " in remote monitoriong of electric power
CN110415240A (en) * 2019-08-01 2019-11-05 国信优易数据有限公司 Sample image generation method and device, circuit board defect detection method and device

Also Published As

Publication number Publication date
CN104992183B (en) 2018-08-28

Similar Documents

Publication Publication Date Title
CN104992183A (en) Method for automatic detection of substantial object in natural scene
CN110210486B (en) Sketch annotation information-based generation countermeasure transfer learning method
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN104166859B (en) Based on SSAE and FSALS SVM Classification of Polarimetric SAR Image
CN113192040A (en) Fabric flaw detection method based on YOLO v4 improved algorithm
CN107545263B (en) Object detection method and device
CN107491734B (en) Semi-supervised polarimetric SAR image classification method based on multi-core fusion and space Wishart LapSVM
CN107194872A (en) Remote sensed image super-resolution reconstruction method based on perception of content deep learning network
CN111079685A (en) 3D target detection method
CN108108751A (en) A kind of scene recognition method based on convolution multiple features and depth random forest
CN112164054A (en) Knowledge distillation-based image target detection method and detector and training method thereof
CN104282008A (en) Method for performing texture segmentation on image and device thereof
CN112613350A (en) High-resolution optical remote sensing image airplane target detection method based on deep neural network
CN105894013A (en) Method for classifying polarized SAR image based on CNN and SMM
CN107392863A (en) SAR image change detection based on affine matrix fusion Spectral Clustering
CN114463637A (en) Winter wheat remote sensing identification analysis method and system based on deep learning
CN104933691A (en) Image fusion method based on phase spectrum visual saliency detection
Müller et al. Simulating optical properties to access novel metrological parameter ranges and the impact of different model approximations
CN102737232B (en) Cleavage cell recognition method
CN106446965A (en) Spacecraft visible light image classification method
CN109711420A (en) The detection and recognition methods of alveolar hydalid target based on human visual attention mechanism
CN104933435B (en) Machine vision construction method based on simulation human vision
CN105005788A (en) Target perception method based on emulation of human low level vision
CN105023016B (en) Target apperception method based on compressed sensing classification
Veeravasarapu et al. Model-driven simulations for computer vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant