CN106127754B - CME detection method based on fusion feature and space-time expending decision rule - Google Patents

CME detection method based on fusion feature and space-time expending decision rule

Info

Publication number
CN106127754B
CN106127754B CN201610450612.XA CN201610450612A CN 106127754 B
Authority
CN
China
Prior art keywords
cme
region
image
cutting cube
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610450612.XA
Other languages
Chinese (zh)
Other versions
CN106127754A (en)
Inventor
张玲
尹建芹
姚海
冯志全
周劲
蔺永政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan
Priority to CN201610450612.XA
Publication of CN106127754A
Application granted
Publication of CN106127754B
Expired - Fee Related
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides a CME detection method based on fused features and a spatio-temporal continuity decision rule. The method combines the spatio-temporal continuity of a CME event across two consecutive frames and uses an ELM-based classifier, driven by the gray-level statistical features and texture features of the image, to recognize coronagraph observation images and detect whether a CME is present. The beneficial effects of the present invention are: a multi-feature-fusion CME detection model is established, which avoids the error of detecting CMEs from a single feature, avoids the misjudgment caused by confusing the noise regions in coronagraph images with the regions where CMEs occur, and achieves high detection accuracy and high detection efficiency.

Description

CME detection method based on fusion feature and space-time expending decision rule
Technical field
The present invention relates to a CME detection method based on fused features and a spatio-temporal continuity decision rule.
Background technique
In recent years, several automatic identification and early-warning systems have appeared abroad, while domestic research started relatively late. In China, Zeng Zhaoxian proposed a coronal mass ejection recognition method based on spectral mutation analysis. Tian Hongmei et al. used the AdaBoost algorithm to learn statistical features related to CMEs in difference images and to extract and detect CME occurrences accordingly. Internationally, Robbrecht et al. mainly process the images with the Hough transform and then compute characteristic parameters of the CME such as position angle, angular width, and speed. O. Olmedo detects CMEs by processing single LASCO difference images and further tracks the detected CMEs. N. Goussies introduced a region-competition mechanism to process the images, detect CME events, and track them; Byrne combined LASCO images with corresponding observation charts to detect CMEs. In addition, optical-flow algorithms can detect CME events whose shape and intensity develop relatively stably.
Chinese patent application CN201310391553.X proposes an intelligent coronal mass ejection observation method that uses difference images to detect CME occurrences and raise alarms. CN201410443408.6 proposes a method based on gray-level statistical features and AdaBoost that can successfully detect CME events with bright features; it improves detection results by integrating the classification results of multiple weak classifiers built on gray-level statistics. CN201510312081.3 proposes designing weak classifiers by fusing gray features, texture features, and HOG features, and integrating the weak classifiers with a decision-tree method, which can detect bright CME events that carry texture information. The specification of publication CN104597523A proposes a method for detecting CME-associated phenomena from the texture features of CME coronagraph observation images: texture features are extracted from images over a period of time, their labels are calibrated and learned, and CME-associated phenomena are then detected from the learned features.
Summary of the invention
To overcome the above technical deficiencies, the present invention provides a CME detection method based on fused features and a spatio-temporal continuity decision rule that is fast in detection and high in detection accuracy.
The present invention is achieved by the following measures:
The CME detection method based on fused features and a spatio-temporal continuity decision rule proposed by the present invention comprises the following steps:
Step 1: choosing an image sequence containing various CMEs within a certain period, manually calibrating whether each image in the sequence contains a CME, extracting the CME regions from the calibrated images in which a CME occurs and cutting them into CME block samples; finding the CME-free regions and noisy regions in the calibrated images without CMEs and cutting them into non-CME block samples; combining the CME block samples and non-CME block samples into a training sample set.
Step 2: using a machine-learning method to learn and fuse the various features of different types of CMEs, establishing a classifier to detect whether a CME phenomenon occurs in an image, and then extracting the fused features from the sample set.
Step 3: using the fused features extracted in Step 2, training on the training sample set obtained in Step 1 to obtain a detection model that can distinguish CME regions from non-CME regions.
Step 4: scanning a single image with a sliding window; during scanning, using the detection model obtained in Step 3 to detect whether the region covered by the sliding window is a candidate CME region, then combining the spatio-temporal continuity of CMEs to further determine whether the candidate region is a noise region or a CME region; fitting the blocks of all detected CME phenomena according to the determined CME regions to obtain the CME contour.
When CME regions are extracted in Step 1 from the calibrated images in which a CME occurs, the CME regions include CMEs with bright features and CMEs without them, as well as CMEs without texture features and CMEs with rich texture features.
In Step 2, the machine-learning method learns and fuses the various features of different types of CMEs, including CMEs that vary in region, intensity, and texture complexity.
In Step 3, the ELM learning algorithm trains on the training sample set obtained in Step 1 and decides whether a block contains a CME; the initialization input data are the gray features and texture features of the obtained blocks, and the output is whether a CME is present.
The beneficial effects of the present invention are: a multi-feature-fusion CME detection model is established, which avoids the error of detecting CMEs from a single feature, avoids the misjudgment caused by confusing the noise regions in coronagraph images with the regions where CMEs occur, and achieves high detection accuracy and high detection efficiency.
Detailed description of the invention
In Fig. 1, (a) shows a CME image with both luminance and texture information; (b) shows an image in which the CME is weak and the brightness is low; (c) shows an image in which the CME region is very small; (d) shows an image in which noise closely resembles the CME; (e-h) show the brightest block and the block with the richest texture in each image, marked with rectangles.
Fig. 2 shows a coronagraph observation difference image under polar coordinates traversed by three sliding-window families of sizes 50*50, 15*30, and 5*5, respectively.
Fig. 3 (a-f) illustrates, in order, the process of selecting CME sample blocks from a single CME image.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings:
The CME detection method of the present invention based on fused features and a spatio-temporal continuity decision rule combines the spatio-temporal continuity of a CME event across two consecutive frames and uses an ELM-based classifier, driven by the gray-level statistical features and texture features of the image, to recognize coronagraph observation images and detect whether a CME is present. The detection results are supplied to space-weather forecasting departments as reference data.
One, establishing the multi-feature-fusion CME detection model
The whole image is scanned with a sliding window, and during scanning the CME detection model detects whether a CME phenomenon is present in the window. Previous research used a single feature (texture or gray level) to detect CMEs, but real CME images are diverse, and noise resembling a CME further lowers the accuracy of single-feature detection. The present invention therefore establishes the detection model from multiple features of CME images. The design of the model includes the following aspects:
1) Selection of the sample set;
An image sequence containing various CMEs within a certain period is chosen, and each image in the sequence is manually calibrated as to whether a CME occurs. For the calibrated CME images, the CME regions (including CMEs with bright features and CMEs without them, as well as CMEs without texture features and CMEs with rich texture features) are extracted and cut into blocks. For the calibrated images without CMEs, the CME-free regions and noisy regions are found and cut into blocks. The CME blocks and non-CME blocks are combined as the training sample set.
2) Extraction of the fused features;
Owing to the diversity of CMEs, their features are also diverse, and each feature reflects only one aspect of a CME. Detection based on a single feature can only find the CMEs that possess that feature and can hardly achieve good results on CMEs with diverse characteristics. The present invention uses a machine-learning method to learn the various features of different types of CMEs (CMEs that vary in region, intensity, and texture complexity) and fuses them to establish a classifier that detects whether a CME phenomenon occurs in an image.
3) Establishment of the detection model;
Based on the fused features described in 2), the sample set from 1) is trained with an ELM (extreme learning machine) to obtain a detection model that distinguishes CME regions from non-CME regions. The extreme learning machine, proposed by Huang Guangbin, is an algorithm for solving single-hidden-layer neural networks. It is a novel fast learning algorithm: while guaranteeing a certain learning accuracy, ELM converges quickly, and with randomly initialized input weights and biases the hidden-to-output weights can be obtained from the classification results of the samples. Whether a block contains a CME can thus be decided by the ELM algorithm: the initialization input data are the gray features and texture features of the obtained blocks, and the output is whether a CME is present.
4) Establishment of the CME decision rule under spatio-temporal constraints;
The noise regions in coronagraph observation images are somewhat similar to the regions where CMEs occur: both are bright and texture-rich, which easily causes misjudgment. A CME continuity decision rule is therefore formulated from a characteristic of the CME phenomenon itself, namely the spatio-temporal continuity of CMEs.
Two, detection and quantitative description of CMEs;
A single image is scanned with a sliding window. During scanning, the established CME block detection model detects whether the region covered by the window is a candidate CME region. Combining the spatio-temporal continuity of CMEs, each candidate region is further determined to be a noise region or a CME region. The blocks of all detected CME phenomena are then fitted according to the detected CME regions to obtain the CME contour.
Three, CME feature vector designs
From the perspective of human vision, a CME region usually has a bright leading edge, followed by a dark cavity and a bright core. Therefore, to detect a CME, the computer imitates human vision to find the candidate regions with bright features and rich texture and then judges whether they are CME regions. First, a coordinate transformation is applied to the LASCO coronal-mass-ejection difference image: the image is converted from rectangular coordinates to polar coordinates. Then the image is pre-processed with a median filter to remove its fine noise.
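The median-filter pre-processing described above can be sketched as follows; this is a minimal 3x3 median filter written with NumPy (the preceding polar-coordinate resampling is not shown, and the 3x3 window size is an assumption, since the patent does not state the filter size):

```python
import numpy as np

def median3(img):
    """3x3 median filter: replace each pixel with the median of its 3x3
    neighbourhood (edges padded by replication), removing fine noise."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the nine shifted views of the padded image, then take the
    # per-pixel median across them.
    shifted = np.stack([padded[r:r + h, c:c + w]
                        for r in range(3) for c in range(3)])
    return np.median(shifted, axis=0)
```

A lone bright pixel (salt noise) is removed by this filter, while larger connected structures such as a CME front survive.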
(1) Feature analysis of CMEs
Different types of CMEs also differ in outward appearance. Fig. 1 (a-d) shows four images of different CME types. To find the candidate CME regions, i.e., the regions with bright features and rich texture features, the image is first cut into 50*50 blocks. The luminance of a block is measured by the average gray value of all its pixels, and its texture by its entropy. The algorithm finds the brightest block and the block with the richest texture in the image and marks them with a dashed rectangle and a solid rectangle respectively, as shown in Fig. 1 (e-h). The CME in Fig. 1 (a) has both luminance and texture information, so its blocks can be detected from either feature. The CME in Fig. 1 (b) is weak and its brightness is low, so it cannot be located by brightness, but the region has texture information, and the high-entropy block found is a CME block. The CME region in Fig. 1 (c) is very small, occupying only a thin strip; its block does not have high entropy, but it is bright, so the CME region can be located by luminance. In the image of Fig. 1 (d), the noise closely resembles the CME, and neither the high-entropy region nor the bright region is a CME region. In view of the above, neither brightness nor texture information alone can find all CME regions; fusing the two features to detect CMEs is a feasible scheme.
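The brightest-block and highest-entropy-block search described above can be sketched as follows. This is a simplified reading: non-overlapping 50*50 blocks, luminance measured as the mean gray value, and texture measured as the Shannon entropy of the gray histogram:

```python
import numpy as np

def block_brightness(block):
    """Luminance of a block: the average gray value of all its pixels."""
    return float(np.mean(block))

def block_entropy(block, levels=256):
    """Texture of a block, measured by the Shannon entropy of its gray
    histogram (higher entropy = richer texture)."""
    hist, _ = np.histogram(block, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def find_candidate_blocks(image, size=(50, 50)):
    """Cut the image into non-overlapping blocks and return the top-left
    corners of the brightest block and of the highest-entropy block."""
    h, w = size
    best_bright, best_entropy = None, None
    max_b, max_e = -1.0, -1.0
    for r in range(0, image.shape[0] - h + 1, h):
        for c in range(0, image.shape[1] - w + 1, w):
            blk = image[r:r + h, c:c + w]
            b, e = block_brightness(blk), block_entropy(blk)
            if b > max_b:
                max_b, best_bright = b, (r, c)
            if e > max_e:
                max_e, best_entropy = e, (r, c)
    return best_bright, best_entropy
```

As the text notes, a uniform bright region wins on brightness but not entropy, while a textured region wins on entropy but not brightness, which is why the two scores are kept separate.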
(2) Feature-vector design fusing gray-level and texture information
Gray-level and texture information are fused into a feature vector X, whose components are the luminance features and the texture features.
a) Selection of luminance features. In general, most CMEs have bright features, so the number of pixels whose gray value exceeds the average gray value of the background pixels (denoted m) can be used as a reference feature to detect CMEs. In the present invention, 50 gray-level statistics are designed as 50 components of the vector, denoted [g1, g2, ..., g50], where gi is the number of pixels in the block whose gray value is m+i, as shown in formula (1).
b) Selection of texture features. Besides bright features, CME regions also have texture features. The 13 Haralick texture features in the 4 directions (0°, 45°, 90°, 135°) of the image block are taken as the remaining components of X, denoted [t1, t2, ..., t52].
Therefore, the feature vector can be designed as X = [t1, t2, ..., t52, g1, g2, ..., g50]^T.
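The 102-dimensional feature vector can be sketched as follows. Note the assumptions: formula (1) is not reproduced in the text, so `gi` is read literally as "the count of pixels whose value equals m+i" with integer gray levels and integer m; and the 52 Haralick texture values (13 features times 4 directions, as could be computed with a GLCM library) are passed in precomputed rather than implemented here:

```python
import numpy as np

def gray_statistics(block, m, n_bins=50):
    """Gray-level statistical features [g1, ..., g50]: g_i is the number of
    pixels in the block whose gray value equals m + i (our reading of
    formula (1); assumes integer gray levels and integer m)."""
    block = np.asarray(block)
    return np.array([int(np.count_nonzero(block == m + i))
                     for i in range(1, n_bins + 1)])

def feature_vector(block, m, texture_features):
    """Fused feature vector X = [t1..t52, g1..g50]^T. The 52 texture values
    (13 Haralick features in 4 directions) are supplied by the caller."""
    t = np.asarray(texture_features, dtype=float)
    g = gray_statistics(block, m).astype(float)
    return np.concatenate([t, g])
```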
CME detection algorithm based on ELM
This section describes how the feature vector is used with the ELM-based classifier to detect whether a CME event occurs in an image. The ELM algorithm is a fast training algorithm for single-hidden-layer neural networks and can be decomposed into three layers: input, hidden, and output. Applied to CME detection, the feature vector of a CME block is designed as the input layer of the ELM, and whether the block contains a CME is designed as the output layer. The CME detection algorithm is described as follows:
(1) Sample selection: a sample set is established.
(2) The ELM-based classifier is trained with the sample set obtained in (1).
(3) The ELM-based classifier detects whether an image contains a CME.
(4) The spatio-temporal continuity decision rule further improves detection accuracy.
a) Sample selection process
The more types of CME samples collected, the more accurately the trained classifier can detect CMEs. In this process the sample set used to train the classifier is established. Choosing the size of the sample block is also an important step. An example of the selection: the coronagraph observation difference image under polar coordinates is traversed by three sliding-window families of sizes 50*50, 15*30, and 5*5 (Fig. 2) to find its candidate CME regions, i.e., the block with bright features and the block with rich texture information. The bright block found is marked with a white rectangle, and the highest-entropy block with a black rectangle. The figure shows that if the window is too large, e.g. 50*50, neither the bright block nor the highest-entropy block found is a CME block; if the window is too small, e.g. 5*5, the candidate CME block found is not a CME block either; at the window size 15*30, both candidate blocks found are CME blocks. The window size is therefore chosen as 15*30.
To enable the classifier to detect different types of CMEs, CME and non-CME samples are chosen to be as diverse as possible. The image documents of March 2007 are chosen, and CME and non-CME blocks of various kinds are taken from them. First, whether each image contains a CME is manually calibrated; then the CME and non-CME sample blocks are cut. (1) Selection of CME samples. Fig. 3 shows the process of selecting CME sample blocks from a single CME image. First, the image in which a CME occurs is traversed with a sliding window of size H*W larger than 15*30 to find the large candidate CME blocks, namely the bright blocks (Fig. 3.b) and the high-entropy blocks (Fig. 3.d). The two classes of H*W candidate CME blocks are then cut according to the size 15*30, the union of the two sets of 15*30 blocks is taken, and the blocks without CMEs are removed; the remaining blocks serve as the CME block samples. The 15*30 CME blocks found in this way from all CME images constitute the sample set, denoted {SW1, SW2, ..., SWm}. (2) Selection of non-CME samples. The process is as follows: 20 images are chosen from the sample images without CMEs and cut at size 15*30; after cutting, 80 blocks are selected from the blocks of each image and combined as part of the non-CME sample blocks. In addition, noisy images without CMEs are picked out manually and cut according to the size 15*30; the manually picked noisy non-CME blocks form the other part of the samples, denoted {SN1, SN2, ..., SNn}. The CME blocks and non-CME blocks are combined into the training sample set, denoted DS = {SW1, SW2, ..., SWm, SN1, SN2, ..., SNn}.
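The union-and-filter step for building the CME sample set can be sketched as follows. Block positions are represented here as top-left corners, and `has_cme` stands in for the manual calibration; both are conveniences of this sketch, not part of the patent:

```python
def cme_sample_blocks(bright_blocks, entropy_blocks, has_cme):
    """Take the union of the 15x30 candidate blocks cut from the bright
    regions and from the high-entropy regions, then keep only those whose
    manual calibration (has_cme) says a CME is present."""
    candidates = set(bright_blocks) | set(entropy_blocks)
    return sorted(b for b in candidates if has_cme(b))
```

Taking the union (rather than intersecting) keeps blocks detected by either cue, which matches the earlier observation that some CMEs are only bright and others only textured.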
b) Establishment of the ELM-based classifier.
The ELM algorithm comprises three layers: input, hidden, and output. Let L be the number of hidden-layer nodes. The input and output layers of the ELM-based classifier are designed as follows:
Input layer: the feature-vector set X = {X1, X2, ..., XN} of the samples in the sample set, where Xj, the feature vector of the j-th sample, can be expressed as Xj = [tj1, tj2, ..., tj52, gj1, gj2, ..., gj50]^T.
Output layer: O = {O1, O2, ..., ON}, where O is the set of class labels of all samples in the sample set DS: if DSi is a CME sample, Oi = 1, otherwise Oi = 0.
The output weight β of the hidden nodes is computed as follows:
i. Randomly generate the weight matrix W and the bias vector b, where W = {W1, W2, ..., WL}, Wi = [ωi,1, ωi,2, ..., ωi,102]^T (the dimension of Wi is 102, matching the feature vector), and b = {b1, b2, ..., bL}.
ii. Compute β. The single-hidden-layer network can be expressed as formula (2):
Σ_{i=1}^{L} βi g(Wi · Xj + bi) = oj, j = 1, ..., N  (2)
which is abbreviated in vector form as formula (3):
Hβ = T  (3)
where g(x) is the activation function and T is the desired output; ti can be computed by formula (4). From formulas (2)-(4), the approximation of β follows as formula (5):
β̂ = H⁺T  (5)
where β̂ is the approximation of β and H⁺ is the Moore-Penrose generalized inverse matrix of H.
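Formula (5) is the core of ELM training. A minimal NumPy sketch, assuming a sigmoid activation for g(x) and using `numpy.linalg.pinv` for the generalized inverse H⁺ (the choice of activation and of `pinv` are assumptions of this sketch):

```python
import numpy as np

def elm_train(X, T, L, rng=None):
    """Train a single-hidden-layer ELM: random input weights W and biases b,
    hidden-layer output H = g(X W^T + b), then beta = H^+ T (formula (5))."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]                      # d = 102 in the patent's design
    W = rng.standard_normal((L, d))     # L hidden nodes, never retrained
    b = rng.standard_normal(L)
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))   # sigmoid activation g(x)
    beta = np.linalg.pinv(H) @ T               # beta-hat = H^+ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: hidden activations, then the learned output weights."""
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta
```

Because only β is solved (in closed form) while W and b stay random, training reduces to one pseudoinverse, which is why the text can describe ELM as fast-converging.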
c) CME detection process of the ELM-based classifier.
The detection process is as follows: the image is traversed by a sliding window of size 15*30 with step s; during traversal, the feature vector of the image in the window is computed, and the classifier obtained in b) is used to detect a CME in the window. If a CME occurs in any window during traversal, the whole image contains a CME phenomenon.
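The traversal just described can be sketched as follows; `classify` stands in for the trained ELM classifier applied to a window's feature vector (an assumption of this sketch):

```python
import numpy as np

def detect_cme(image, classify, win=(15, 30), step=5):
    """Traverse the image with a 15x30 sliding window at step `step`;
    report a CME as soon as any window is classified positive."""
    h, w = win
    for r in range(0, image.shape[0] - h + 1, step):
        for c in range(0, image.shape[1] - w + 1, step):
            if classify(image[r:r + h, c:c + w]):
                return True
    return False
```

The early return mirrors the text: a single positive window is enough to declare a CME in the whole image.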
d) Expansion of the sample set
When the images of March 2007 are detected with the ELM-based classifier, misclassified coronagraph observation images appear in the results. There are two causes of misclassification: first, the sample set contains no samples similar to the misclassified images; second, the influence of spot-like and streak-like noise. The solution is to cut the misclassified samples manually: the misclassified blocks are found and added to the training sample set. Training the classifier with the expanded sample set and re-testing on the same sample set shows that detection accuracy increases. The cycle of expanding the samples and testing on the sample set is repeated until the accuracy no longer improves. Because ELM converges quickly, improving detection accuracy by training the classifier on expanded samples is feasible.
Improving the detection algorithm with the decision rule based on spatio-temporal continuity
During image detection it is found that, while CMEs are detected, noise regions are also mistaken for CME regions, because noise and CMEs look very similar. The solution is to use the spatio-temporal continuity of CMEs in the video image sequence to resolve the misjudgment. The method is as follows: a CME event has a process from emergence through development to extinction. On the polar-coordinate image, this appears as the CME region extending from left to right. In the initial phase the CME region is certainly at the far left of the image, and it extends rightward in subsequent image frames. It follows that if a detected CME region is an isolated region not adjacent to the occulter, and neither the similar position in the preceding image frames nor the region to its left contains a CME, then this isolated candidate CME region is a noise region. Based on this process, the decision rule based on spatio-temporal continuity is designed as formula (6):
In formula (6), Ri,j,t denotes the final detection result for the image block centered at (i, j) in the image at time t: Ri,j,t is 1 if the block contains a CME and 0 otherwise. s is the step size of the sliding-window traversal. Oi,j,t denotes the classification result, before the decision rule, of the block centered at (i, j) detected by the classifier at time t: Oi,j,t is 1 if the classifier judges the block a candidate CME region and 0 otherwise. Applying the decision rule based on spatio-temporal continuity resolves the misjudgment of noise blocks in most images.
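Since formula (6) itself is not reproduced in the text, the rule can only be sketched from its verbal description: a candidate block is kept if a candidate also exists to its left in the same frame, at a nearby position in the previous frame, or at the far left (where a CME is born); otherwise it is rejected as noise. The exact neighbourhood and the occulter-adjacency check are assumptions of this sketch:

```python
import numpy as np

def refine(O, s=1):
    """Spatio-temporal continuity rule (one reading of formula (6)).
    O[t, i, j] is the classifier's candidate flag for the block at (i, j)
    in frame t; returns R with isolated candidates suppressed as noise.
    The occulter-adjacency exception is omitted here."""
    O = np.asarray(O)
    R = np.zeros_like(O)
    T, H, W = O.shape
    for t in range(T):
        for i in range(H):
            for j in range(W):
                if not O[t, i, j]:
                    continue
                left = j >= s and O[t, i, j - s]          # candidate to the left
                prev = t > 0 and O[t - 1,                 # candidate nearby in
                                   max(i - s, 0):i + s + 1,  # the previous frame
                                   max(j - s, 0):j + s + 1].any()
                newborn = j == 0                          # far left: a CME may be born here
                R[t, i, j] = 1 if (left or prev or newborn) else 0
    return R
```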
The above is only a preferred embodiment of this patent. It should be noted that, for those of ordinary skill in the art, several improvements and substitutions can be made without departing from the principle of this patent, and these improvements and substitutions should also be regarded as falling within the protection scope of this patent.

Claims (3)

1. A CME detection method based on fused features and a spatio-temporal continuity decision rule, characterized by comprising the following steps:
Step 1: choosing an image sequence containing various CMEs within a certain period, manually calibrating whether each image in the sequence contains a CME, extracting the CME regions from the calibrated images in which a CME occurs and cutting them into CME block samples; finding the CME-free regions and noisy regions in the calibrated images without CMEs and cutting them into non-CME block samples; combining the CME block samples and non-CME block samples into a training sample set;
traversing the images in which a CME occurs with a sliding window of size H*W larger than 15*30 to find the candidate bright blocks and high-entropy blocks, then cutting the H*W candidate bright blocks and high-entropy blocks according to the size 15*30, taking the union of the resulting 15*30 candidate bright blocks and high-entropy blocks, and removing the blocks without CMEs to obtain the CME block samples;
Step 2: using a machine-learning method to learn and fuse the various features of different types of CMEs, establishing a classifier to detect whether a CME phenomenon occurs in an image, and then extracting the fused features from the sample set; the different types of CMEs including CMEs that vary in region, intensity, and texture complexity;
the CME region extending from left to right, the CME region in its initial phase being at the far left of the image and extending rightward in subsequent image frames, wherein if a detected CME region is an isolated region not adjacent to the occulter, and neither the similar position in the preceding image frames nor the region to its left contains a CME, this isolated candidate CME region is a noise region;
Step 3: using the fused features extracted in Step 2, training on the training sample set obtained in Step 1 to obtain a detection model that distinguishes CME regions from non-CME regions;
Step 4: scanning a single image with a sliding window; during scanning, using the detection model obtained in Step 3 to detect whether the region covered by the sliding window is a candidate CME region, then combining the spatio-temporal continuity of CMEs to further determine whether the candidate region is a noise region or a CME region, and fitting the blocks of all detected CME phenomena according to the determined CME regions to obtain the CME contour.
2. The CME detection method based on fused features and a spatio-temporal continuity decision rule according to claim 1, characterized in that: when CME regions are extracted in Step 1 from the calibrated images in which a CME occurs, the CME regions include CMEs with bright features and CMEs without them, as well as CMEs without texture features and CMEs with rich texture features.
3. The CME detection method based on fused features and a spatio-temporal continuity decision rule according to claim 1, characterized in that: in Step 3, the ELM learning algorithm trains on the training sample set obtained in Step 1 and decides whether a block contains a CME; the initialization input data are the gray features and texture features of the obtained blocks, and the output is whether a CME is present.
CN201610450612.XA 2016-06-21 2016-06-21 CME detection method based on fusion feature and space-time expending decision rule Expired - Fee Related CN106127754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610450612.XA CN106127754B (en) 2016-06-21 2016-06-21 CME detection method based on fusion feature and space-time expending decision rule

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610450612.XA CN106127754B (en) 2016-06-21 2016-06-21 CME detection method based on fusion feature and space-time expending decision rule

Publications (2)

Publication Number Publication Date
CN106127754A CN106127754A (en) 2016-11-16
CN106127754B true CN106127754B (en) 2019-03-08

Family

ID=57470395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610450612.XA Expired - Fee Related CN106127754B (en) 2016-06-21 2016-06-21 CME detection method based on fusion feature and space-time expending decision rule

Country Status (1)

Country Link
CN (1) CN106127754B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109951243A (en) * 2017-12-20 2019-06-28 中国科学院深圳先进技术研究院 A kind of spectrum prediction method, system and electronic equipment
CN109064478B (en) * 2018-07-17 2021-06-11 暨南大学 Astronomical image contour extraction method based on extreme learning machine
CN110533100B (en) * 2019-07-22 2021-11-26 南京大学 Method for CME detection and tracking based on machine learning
CN112150475A (en) * 2020-10-12 2020-12-29 山东省科学院海洋仪器仪表研究所 Suspended particle feature segmentation and extraction method for underwater image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318049A (en) * 2014-10-30 2015-01-28 济南大学 Coronal mass ejection event identification method
CN104597523A (en) * 2014-12-30 2015-05-06 西南交通大学 Detection method of coronal mass ejection multiple associated phenomenon
CN105046259A (en) * 2015-06-09 2015-11-11 济南大学 Coronal mass ejection (CME) detection method based on multi-feature fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Predicting Coronal Mass Ejections Using Machine Learning Methods; M. G. Bobra et al.; The Astrophysical Journal; 2016-04-20; pp. 1-7

Similar Documents

Publication Publication Date Title
CN109583342B (en) Human face living body detection method based on transfer learning
CN103617426B (en) Pedestrian target detection method under interference by natural environment and shelter
CN105872477B (en) video monitoring method and video monitoring system
CN100397410C (en) Method and device for distinguishing face expression based on video frequency
Marin et al. Learning appearance in virtual scenarios for pedestrian detection
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN110837784B (en) Examination room peeping and cheating detection system based on human head characteristics
CN106127754B (en) CME detection method based on fusion feature and space-time expending decision rule
CN105740780B (en) Method and device for detecting living human face
CN100426317C (en) Multiple attitude human face detection and track system and method
CN107067413B (en) A kind of moving target detecting method of time-space domain statistical match local feature
CN106295600A (en) Driver status real-time detection method and device
CN105740779B (en) Method and device for detecting living human face
CN109670396A (en) A kind of interior Falls Among Old People detection method
CN110263712B (en) Coarse and fine pedestrian detection method based on region candidates
CN102722698A (en) Method and system for detecting and tracking multi-pose face
CN108346160A (en) The multiple mobile object tracking combined based on disparity map Background difference and Meanshift
CN106570491A (en) Robot intelligent interaction method and intelligent robot
CN104794449B (en) Gait energy diagram based on human body HOG features obtains and personal identification method
CN108280421B (en) Human behavior recognition method based on multi-feature depth motion map
CN103325122A (en) Pedestrian retrieval method based on bidirectional sequencing
CN108960142B (en) Pedestrian re-identification method based on global feature loss function
CN105930803A (en) Preceding vehicle detection method based on Edge Boxes and preceding vehicle detection device thereof
CN106599785A (en) Method and device for building human body 3D feature identity information database
CN110599463A (en) Tongue image detection and positioning algorithm based on lightweight cascade neural network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190308

Termination date: 20210621