CN103678552A - Remote-sensing image retrieving method and system based on salient regional features - Google Patents
- Publication number
- CN103678552A CN103678552A CN201310652866.6A CN201310652866A CN103678552A CN 103678552 A CN103678552 A CN 103678552A CN 201310652866 A CN201310652866 A CN 201310652866A CN 103678552 A CN103678552 A CN 103678552A
- Authority
- CN
- China
- Prior art keywords
- image
- salient region
- salient
- feature
- binarization
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5838—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
Landscapes
- Engineering & Computer Science (AREA)
- Library & Information Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a remote sensing image retrieval method and system based on salient region features. The method comprises the steps of obtaining a saliency map of each image with a visual attention model, binarizing the saliency map with an adaptive threshold method, performing a mask operation on the original image and its binarized saliency map to obtain the salient region of the image, extracting salient points within the salient region, clustering the salient points according to their features to obtain a feature vector describing the salient region features, and finally performing image retrieval according to a preset similarity measurement criterion. The method and system preserve the efficiency of feature extraction while improving retrieval precision and retrieval results, and conform to the characteristics of human vision.
Description
Technical field
The invention belongs to the technical field of remote sensing image processing and image retrieval, and relates to a remote sensing image retrieval method and system based on salient region features.
Background art
With the development of remote sensing technology, multi-sensor remote sensing imagery has become increasingly easy to acquire. While massive remote sensing image data offers more choices for scientific research, it also raises many problems that urgently need to be solved. On the one hand, the current capacity for processing and analyzing image data is limited, so the utilization efficiency of remote sensing image data is low. On the other hand, remote sensing image data is spatial, diverse and complex, and the development of its organization, management, browsing and querying lags far behind its growth rate, so the imagery required for a specific application often cannot be found quickly. The lack of effective retrieval methods for massive remote sensing image data has become a bottleneck restricting its application, making the study of efficient remote sensing image retrieval methods imperative.
Traditional remote sensing image retrieval methods comprise keyword-based retrieval and content-based retrieval. Keyword-based retrieval describes the images in a database with a series of keywords assigned by manual annotation; at retrieval time the user enters keywords and the system returns the matching images. Although this approach is simple and easy to understand, manual annotation is inefficient and highly subjective. Content-based retrieval works by extracting low-level visual features of the image (spectral, texture and shape features). Although it improves retrieval results to some extent, spectral and texture features are global features that ignore the distinction between image foreground and background and cannot describe the semantic content of the image well; and although shape features relate to specific targets in the image, extracting them usually requires image segmentation, which is itself a hard problem in computer vision. It can be seen that, owing to the semantic gap, low-level visual features cannot effectively reflect the essential content of an image.
According to the theory of human vision, what a person attends to in an image is not the whole image but its salient regions. Researchers have therefore introduced visual attention models into the field of image retrieval, computing the saliency map of an image with a visual attention model and extracting features such as color, texture and edges from it for retrieval. Compared with low-level visual features, this approach better matches the user's query intention and can effectively narrow the semantic gap; however, the saliency map is a blurred gray-level image, and directly extracting color, texture or edge features from it is very difficult.
Summary of the invention
To remedy the deficiencies of the prior art, the invention provides a remote sensing image retrieval method and system based on salient region features that better reflects retrieval requirements. By analyzing the attention characteristics of human vision, the invention extracts salient regions from complex remote sensing imagery and realizes remote sensing image retrieval by extracting salient region features.
The technical scheme of the invention is as follows:
1. A remote sensing image retrieval method based on salient region features, comprising the steps of:
Step 1, obtaining the saliency map of an image with a visual attention model;
Step 2, converting the saliency map obtained in step 1 into a corresponding binarized saliency map;
Step 3, obtaining the salient region of the image from the original image and the binarized saliency map obtained in step 2;
Step 4, extracting the salient points of the salient region obtained in step 3, and clustering the salient points according to their features to obtain a feature vector describing the salient region features;
Step 5, performing image retrieval with the feature vector obtained in step 4 according to a preset similarity measurement criterion.
Step 1 is specifically:
constructing Gaussian pyramids at different scales to obtain the intensity, color and orientation features of the image, and fusing these features to obtain a saliency map of the same size as the original image.
In step 3, the salient region of the image is obtained by performing a "mask" operation on the original image and its corresponding binarized saliency map.
The features in step 4 are bag-of-words features, obtained according to the principle of the Bag of Words algorithm from text retrieval.
Step 5 further comprises the sub-steps of:
5.1 presetting a similarity measurement criterion;
5.2 based on the feature vectors of the salient regions, computing one by one the similarity between the image to be retrieved and each image in the image database according to the similarity measurement criterion;
5.3 outputting the images in the database sorted by similarity.
2. A remote sensing image retrieval system based on salient region features, comprising:
a saliency map acquisition module, used for obtaining the saliency map of an image with a visual attention model;
a binarized saliency map acquisition module, used for converting the saliency map into a corresponding binarized saliency map;
a salient region acquisition module, used for obtaining the salient region of the image from the original image and the binarized saliency map;
a feature vector acquisition module, used for extracting the salient points of the salient region and clustering them according to their features to obtain a feature vector describing the salient region features;
an image retrieval module, used for performing image retrieval with the feature vector according to a preset similarity measurement criterion.
Compared with the prior art, the invention has the following features and beneficial effects:
1. A visual attention model is adopted to obtain the salient region of the image; to compensate for the shortcomings of the visual attention model, the salient region is obtained by performing a "mask" operation on the original image and the binarized saliency map.
2. Features are extracted from the salient region, overcoming the difficulty of extracting features directly from the saliency map. Retrieval based on the extracted salient region features not only conforms to the characteristics of human vision and better reflects retrieval requirements, but also narrows the distance between low-level visual features and high-level semantics, effectively improving the precision and recall of remote sensing image retrieval.
3. Good extensibility: the salient region features used for retrieval include but are not limited to bag-of-words features; any feature that can describe the content of the salient region is applicable.
Description of the drawings
Fig. 1 is the flow chart of the embodiment of the invention;
Fig. 2 shows the average precision of each method for different numbers of returned images in the embodiment of the invention.
Embodiment
A specific implementation of the invention is as follows: compute the saliency map of every image in the image database with the Itti visual attention model, and binarize the saliency map with a threshold determined adaptively by Otsu's method; perform a "mask" operation on the binarized saliency map and the corresponding original image to obtain the salient region of the image; extract the salient points of the salient region with the SIFT operator, and cluster them to obtain a visual-word feature vector (visual words) describing the bag-of-words features of the salient region; retrieve images from the database according to a preset similarity measurement criterion.
The specific embodiment of the invention is described in detail below with reference to Fig. 1, and comprises the steps of:
Step 1, obtaining the saliency map of each image in the remote sensing image database.
First, the retrieval image database is constructed.
The image data adopted in this embodiment comes from aerial imagery of several U.S. cities with a resolution of 30 cm. The imagery is divided into non-overlapping tiles of size 256*256, forming a retrieval database covering 4 classes of ground objects: aircraft, sparse residential areas, buildings and parking lots, with 100 images per class.
Then, the saliency map of each image in the database is obtained.
This embodiment adopts the effective and mature Itti visual attention model to obtain the saliency maps. Gaussian pyramids at different scales are constructed to obtain three kinds of features of the image: intensity, color and orientation, computed by formulas (1)~(3).
I=(r+g+b)/3 (1)
In formula (1), I is the intensity of the image, and r, g, b are the three color components.
The color features are computed as in formula (2):
RG(c,s)=|(R(c)-G(c))Θ(G(s)-R(s))|, BY(c,s)=|(B(c)-Y(c))Θ(Y(s)-B(s))| (2)
In formula (2):
RG and BY denote the color differences between red and green and between blue and yellow respectively, i.e. the color features of the image;
R, G, B, Y denote the red, green, blue and yellow color channels, with R=r-(g+b)/2, G=g-(r+b)/2, B=b-(r+g)/2 and Y=(r+g)/2-|r-g|/2-b, where r, g, b are the three color components;
c and s denote the center scale and the surround scale respectively;
Θ denotes the "center-surround" across-scale operator.
O(c,s,θ)=|O(c,θ)ΘO(s,θ)| (3)
In formula (3):
O(c,s,θ) denotes the orientation feature of the image across scales;
O(c,θ) and O(s,θ) denote the center-scale and surround-scale orientation maps respectively;
θ is the orientation of the Gabor filter;
Θ has the same meaning as in formula (2).
Finally, the intensity, color and orientation features are fused by a multi-feature fusion method to obtain a saliency map of the same size as the original image.
In a specific implementation, the fusion may take the arithmetic mean of the three features, or a weighted mean according to their importance, to obtain the final saliency map.
Fusion by the arithmetic mean of the three features is given by formula (4):
S=(Intensity+Color+Orientation)/3 (4)
In formula (4), S is the final saliency map, Intensity is the intensity feature, Color is the color feature, and Orientation is the orientation feature.
Fusion by a weighted mean according to the importance of the three features is given by formula (5):
S=α·Intensity+β·Color+γ·Orientation (5)
In formula (5), S is the final saliency map; Intensity, Color and Orientation are as in formula (4); α, β, γ are the weights of the intensity, color and orientation features respectively, set according to their importance, with α+β+γ=1.
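As an illustration, the fusion of formulas (4) and (5) can be sketched as follows (a minimal NumPy sketch; the function name `fuse_saliency` and the final rescaling to [0, 1] are assumptions, not requirements of the text):

```python
import numpy as np

def fuse_saliency(intensity, color, orientation, weights=None):
    """Fuse three feature maps into one saliency map.

    weights: (alpha, beta, gamma) summing to 1, as in formula (5);
    None selects the arithmetic mean of formula (4)."""
    maps = [np.asarray(m, dtype=float) for m in (intensity, color, orientation)]
    if weights is None:
        weights = (1 / 3, 1 / 3, 1 / 3)        # formula (4)
    alpha, beta, gamma = weights               # formula (5)
    s = alpha * maps[0] + beta * maps[1] + gamma * maps[2]
    # rescale to [0, 1] so the map can later be thresholded
    s -= s.min()
    if s.max() > 0:
        s /= s.max()
    return s
```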
Step 2, binarizing the saliency map.
This embodiment adopts Otsu's method to determine the binarization threshold of the saliency map adaptively. Otsu's method is a mature algorithm for determining the binarization threshold of an image: the threshold that maximizes the between-class variance of image foreground and background is taken as the binarization threshold. The between-class variance is computed by formula (6):
v=w0·w1·(μ0-μ1)^2 (6)
In formula (6): v is the between-class variance; w0 is the proportion of foreground pixels of the saliency map in the whole image; w1 is the proportion of background pixels of the saliency map in the whole image; μ0 is the average gray level of the foreground pixels; μ1 is the average gray level of the background pixels.
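The threshold search of formula (6) can be sketched as follows (a minimal NumPy sketch over 256 gray levels; the exhaustive loop is one possible implementation, not necessarily the one used in the embodiment):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold t maximizing the between-class
    variance v = w0*w1*(mu0-mu1)^2 of formula (6).
    `gray` holds integer levels in [0, 255]."""
    hist = np.bincount(np.asarray(gray).ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    best_t, best_v = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class proportions
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        v = w0 * w1 * (mu0 - mu1) ** 2         # formula (6)
        if v > best_v:
            best_t, best_v = t, v
    return best_t
```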
Step 3, obtaining the salient region of the image by a "mask" operation.
From the viewpoint of digital image processing, the so-called "mask" operation covers some pixels of an image and retains the required pixels; it requires the two images to have the same size.
A digital image is represented by a matrix whose elements are gray values; the larger the gray value, the brighter the region, and conversely the darker. The binarized saliency map has only two gray values, 0 and 1. When the mask operation is performed on it and the original image, the matrices of the original image and the binarized saliency map are multiplied element by element: the pixels of the original image at positions where the binarized saliency map equals 1 are retained, while the gray values of the remaining pixels become 0. The "mask" operation thus isolates the salient region of the image from the original image.
Let F denote the original image, S the binarized saliency map corresponding to F, and R the result of the "mask" operation; the mask operation is given by formula (7):
R=F*S (7)
where * denotes element-wise multiplication, each element of F is a gray value in 0~255, and each element of S is a logical value 0 or 1.
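The mask operation of formula (7) is just an element-wise product; a minimal NumPy sketch (the function name is an assumption) might look like:

```python
import numpy as np

def mask_salient_region(image, binary_map):
    """Formula (7), R = F * S: retain the pixels of the original image
    where the binarized saliency map is 1 and zero out the rest."""
    image = np.asarray(image)
    binary_map = np.asarray(binary_map)
    if image.shape[:2] != binary_map.shape:
        raise ValueError("image and mask must have the same size")
    if image.ndim == 3:                        # color image: broadcast over channels
        binary_map = binary_map[..., None]
    return image * binary_map
```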
Step 4, extracting the bag-of-words features of the salient region.
In this embodiment the SIFT operator is adopted to extract the bag-of-words features of the salient region. For each salient region, the SIFT operator extracts a K*128 array, i.e. the salient point features of the region, where K is the number of extracted salient points and each row of the K*128 array is the 128-dimensional feature vector of one salient point.
To obtain the visual-word feature vector describing the bag-of-words features, this embodiment adopts the K-means clustering algorithm to cluster the salient points of each salient region, obtaining a 1*k visual-word feature vector V describing the bag-of-words features of the region, where k is the number of cluster centers. The vector V is given by formula (8):
V=[a1 a2 … ak] (8)
In formula (8), each element of V is the number of salient points assigned to the corresponding cluster center.
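Given SIFT descriptors (the K*128 array of the text) and a vocabulary of k cluster centers produced by K-means, the visual-word vector V of formula (8) is a histogram of nearest-center assignments. A minimal NumPy sketch (the function name is an assumption; a real pipeline would obtain the descriptors with a SIFT implementation and the centers with a K-means run):

```python
import numpy as np

def bow_vector(descriptors, vocabulary):
    """Build the 1*k visual-word vector V of formula (8): assign each
    salient-point descriptor (one row of the K*128 array) to its nearest
    cluster center and count assignments per center."""
    d = np.asarray(descriptors, dtype=float)
    c = np.asarray(vocabulary, dtype=float)
    # squared Euclidean distance from every descriptor to every center
    dists = ((d[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    return np.bincount(labels, minlength=len(c))
```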
Step 5, image retrieval based on a similarity criterion.
After step 4, every image in the remote sensing image database is described by a visual-word feature vector, and retrieval is performed according to a preset similarity measurement criterion. Common similarity measures include the city-block distance, Euclidean distance, histogram intersection, quadratic-form distance, cosine distance, correlation coefficient and KL divergence.
This embodiment adopts the relatively simple Euclidean distance as the similarity measurement criterion to measure the similarity between the image to be retrieved and the images in the database; the Euclidean distance is computed by formula (9). The similarity between the image to be retrieved and every image in the database is computed by formula (9), and the results are returned in a certain order, usually sorted in descending order of similarity, so that images nearer the front of the output are more similar to the image to be retrieved.
L(I1,I2)=sqrt(Σi=1..n (ai-bi)^2) (9)
In formula (9): I1 and I2 are the two images whose similarity is computed; L(I1,I2) is the Euclidean distance between I1 and I2; ai and bi are the i-th elements of the visual-word feature vectors of I1 and I2 respectively; n is the dimension of the feature vectors.
Computing the Euclidean distance requires the feature vectors of the two images to have the same dimension. Strictly speaking, the value computed by formula (9) does not represent the similarity between the images but the gap between them; therefore, the smaller L(I1,I2), the higher the similarity between I1 and I2.
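Retrieval by formula (9) then reduces to sorting the database by ascending Euclidean distance to the query vector; a minimal NumPy sketch (the function name is an assumption):

```python
import numpy as np

def rank_by_similarity(query_vec, database_vecs):
    """Formula (9): Euclidean distance between feature vectors.
    A smaller distance means a higher similarity, so database indices
    are returned in ascending-distance order."""
    q = np.asarray(query_vec, dtype=float)
    db = np.asarray(database_vecs, dtype=float)
    dists = np.sqrt(((db - q) ** 2).sum(axis=1))
    return [int(i) for i in np.argsort(dists)]
```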
In summary, after computing the saliency map of an image with the Itti visual attention model, the method of the invention binarizes the saliency map and performs a "mask" operation with the original image to obtain a salient region image that conforms to the characteristics of human vision, and extracts the salient points of the salient region with the SIFT operator. Further, to obtain the visual-word feature vector describing the bag-of-words features of the salient region, the invention clusters the salient points with the K-means algorithm. The method can effectively improve retrieval results and retrieval precision.
The beneficial effects of the invention are verified below by a simulation experiment.
The experiment builds a retrieval image database of 4 classes and 400 remote sensing images in total, each class comprising 100 images of size 256*256. Two evaluation indexes, the average precision and the precision under different numbers of returned images, are adopted to evaluate the retrieval effect.
The precision is the ratio of the number of similar images among the returned images to the total number of returned images. By setting the number of returned images, the precision under different numbers of returned images is obtained, from which the average precision of each method is computed.
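The precision index of the experiment can be sketched in a few lines (pure Python; the class labels and function names are illustrative assumptions):

```python
def precision_at_n(returned_labels, query_label, n):
    """Precision under n returned images: the fraction of the first n
    results whose class label matches the query's class."""
    top = returned_labels[:n]
    return sum(1 for lbl in top if lbl == query_label) / float(n)

def average_precision_over_n(returned_labels, query_label, ns):
    """Average precision over several settings of the number of
    returned images, as used for Table 1."""
    return sum(precision_at_n(returned_labels, query_label, n) for n in ns) / len(ns)
```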
See Fig. 2, where method 1 is the traditional image retrieval method based on color features, method 2 is the traditional image retrieval method based on texture features, method 3 extracts bag-of-words features directly from the original image with the SIFT operator, and method 4 is the method of the invention. As can be seen from Fig. 2, when the number of returned images is small, all the methods maintain a high precision; but as the number of returned images increases, the precision of methods 1~3 declines quickly, while the precision of the method of the invention declines more slowly and remains high even when many images are returned.
The average precisions of methods 1~4 over the four classes of ground objects are shown in Table 1:
Table 1 Average precision statistics
Method | Average precision |
---|---|
Method 1 | 0.4150 |
Method 2 | 0.4810 |
Method 3 | 0.3687 |
Method 4 | 0.5737 |
As can be seen from Table 1, the method of the invention has the best average precision, describes the salient content of all classes of images well, and achieves the best retrieval effect.
The above content is a further description of the invention in conjunction with the preferred embodiment, and it cannot be concluded that the specific implementation of the invention is limited to these descriptions. Those skilled in the art will understand that various modifications in detail may be made without departing from the scope defined by the appended claims, and all such modifications shall be regarded as falling within the protection scope of the invention.
Claims (7)
1. A remote sensing image retrieval method based on salient region features, characterized by comprising the steps of:
Step 1, obtaining the saliency map of an image with a visual attention model;
Step 2, converting the saliency map obtained in step 1 into a corresponding binarized saliency map;
Step 3, obtaining the salient region of the image from the original image and the binarized saliency map obtained in step 2;
Step 4, extracting the salient points of the salient region obtained in step 3, and clustering the salient points according to their features to obtain a feature vector describing the salient region features;
Step 5, performing image retrieval with the feature vector obtained in step 4 according to a preset similarity measurement criterion.
2. The remote sensing image retrieval method based on salient region features as claimed in claim 1, characterized in that:
step 1 is specifically: constructing Gaussian pyramids at different scales to obtain the intensity, color and orientation features of the image, and fusing these features to obtain a saliency map of the same size as the original image.
3. The remote sensing image retrieval method based on salient region features as claimed in claim 1, characterized in that:
step 2 adopts an adaptive threshold method to convert the saliency map into the corresponding binarized saliency map, the binarization threshold being determined by Otsu's method.
4. The remote sensing image retrieval method based on salient region features as claimed in claim 1, characterized in that:
in step 3, the salient region of the image is obtained by performing a "mask" operation on the original image and its corresponding binarized saliency map.
5. The remote sensing image retrieval method based on salient region features as claimed in claim 1, characterized in that:
the features in step 4 are bag-of-words features.
6. The remote sensing image retrieval method based on salient region features as claimed in claim 1, characterized in that:
step 5 further comprises the sub-steps of:
5.1 presetting a similarity measurement criterion;
5.2 based on the feature vectors of the salient regions, computing one by one the similarity between the image to be retrieved and each image in the image database according to the similarity measurement criterion;
5.3 outputting the images in the database sorted by similarity.
7. A remote sensing image retrieval system based on salient region features, characterized by comprising:
a saliency map acquisition module, used for obtaining the saliency map of an image with a visual attention model;
a binarized saliency map acquisition module, used for converting the saliency map into a corresponding binarized saliency map;
a salient region acquisition module, used for obtaining the salient region of the image from the original image and the binarized saliency map;
a feature vector acquisition module, used for extracting the salient points of the salient region and clustering them according to their features to obtain a feature vector describing the salient region features;
an image retrieval module, used for performing image retrieval with the feature vector according to a preset similarity measurement criterion.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310652866.6A CN103678552A (en) | 2013-12-05 | 2013-12-05 | Remote-sensing image retrieving method and system based on salient regional features |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310652866.6A CN103678552A (en) | 2013-12-05 | 2013-12-05 | Remote-sensing image retrieving method and system based on salient regional features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103678552A true CN103678552A (en) | 2014-03-26 |
Family
ID=50316097
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310652866.6A Pending CN103678552A (en) | 2013-12-05 | 2013-12-05 | Remote-sensing image retrieving method and system based on salient regional features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103678552A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104462494A (en) * | 2014-12-22 | 2015-03-25 | 武汉大学 | Remote sensing image retrieval method and system based on non-supervision characteristic learning |
CN105989001A (en) * | 2015-01-27 | 2016-10-05 | 北京大学 | Image searching method and device, and image searching system |
CN107330276A (en) * | 2017-07-03 | 2017-11-07 | 雷柏英 | Neuroimaging figure search method and device |
CN107529510A (en) * | 2017-05-24 | 2018-01-02 | 江苏科技大学 | A kind of portable small-sized boat-carrying Lift-on/Lift-off System with active compensation of undulation function |
CN107555324A (en) * | 2017-06-26 | 2018-01-09 | 江苏科技大学 | A kind of portable small-sized boat-carrying Lift-on/Lift-off System with active compensation of undulation function |
CN110460832A (en) * | 2019-07-31 | 2019-11-15 | 南方医科大学南方医院 | Processing method, system and the storage medium of double vision point video |
CN111062433A (en) * | 2019-12-13 | 2020-04-24 | 华中科技大学鄂州工业技术研究院 | Scenic spot confirmation method and device based on SIFT feature matching |
CN116129191A (en) * | 2023-02-23 | 2023-05-16 | 维璟(北京)科技有限公司 | Multi-target intelligent identification and fine classification method based on remote sensing AI |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100226564A1 (en) * | 2009-03-09 | 2010-09-09 | Xerox Corporation | Framework for image thumbnailing based on visual similarity |
CN102496023A (en) * | 2011-11-23 | 2012-06-13 | 中南大学 | Region of interest extraction method of pixel level |
CN102737248A (en) * | 2012-06-21 | 2012-10-17 | 河南工业大学 | Method and device for extracting characteristic points of lane line under complex road condition |
CN103399863A (en) * | 2013-06-25 | 2013-11-20 | 西安电子科技大学 | Image retrieval method based on edge direction difference characteristic bag |
- 2013-12-05 CN CN201310652866.6A patent/CN103678552A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100226564A1 (en) * | 2009-03-09 | 2010-09-09 | Xerox Corporation | Framework for image thumbnailing based on visual similarity |
CN102496023A (en) * | 2011-11-23 | 2012-06-13 | 中南大学 | Region of interest extraction method of pixel level |
CN102737248A (en) * | 2012-06-21 | 2012-10-17 | 河南工业大学 | Method and device for extracting characteristic points of lane line under complex road condition |
CN103399863A (en) * | 2013-06-25 | 2013-11-20 | 西安电子科技大学 | Image retrieval method based on edge direction difference characteristic bag |
Non-Patent Citations (2)
Title |
---|
OGE MARQUES et al.: "An attention-driven model for grouping similar images with image retrieval applications", EURASIP Journal on Advances in Signal Processing * |
GAO Jingjing: "Research on visual attention models applied to image retrieval", Measurement & Control Technology * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104462494B (en) * | 2014-12-22 | 2018-01-12 | 武汉大学 | A kind of remote sensing image retrieval method and system based on unsupervised feature learning |
CN104462494A (en) * | 2014-12-22 | 2015-03-25 | 武汉大学 | Remote sensing image retrieval method and system based on non-supervision characteristic learning |
CN105989001A (en) * | 2015-01-27 | 2016-10-05 | 北京大学 | Image searching method and device, and image searching system |
CN105989001B (en) * | 2015-01-27 | 2019-09-06 | 北京大学 | Image search method and device, image search system |
CN107529510A (en) * | 2017-05-24 | 2018-01-02 | 江苏科技大学 | A kind of portable small-sized boat-carrying Lift-on/Lift-off System with active compensation of undulation function |
CN107555324A (en) * | 2017-06-26 | 2018-01-09 | 江苏科技大学 | A kind of portable small-sized boat-carrying Lift-on/Lift-off System with active compensation of undulation function |
CN107330276A (en) * | 2017-07-03 | 2017-11-07 | 雷柏英 | Neuroimaging figure search method and device |
CN107330276B (en) * | 2017-07-03 | 2020-01-10 | 深圳大学 | Neural image map retrieval method and device |
CN110460832A (en) * | 2019-07-31 | 2019-11-15 | 南方医科大学南方医院 | Processing method, system and the storage medium of double vision point video |
CN110460832B (en) * | 2019-07-31 | 2021-09-07 | 南方医科大学南方医院 | Processing method, system and storage medium of double-viewpoint video |
CN111062433A (en) * | 2019-12-13 | 2020-04-24 | 华中科技大学鄂州工业技术研究院 | Scenic spot confirmation method and device based on SIFT feature matching |
CN116129191A (en) * | 2023-02-23 | 2023-05-16 | 维璟(北京)科技有限公司 | Multi-target intelligent identification and fine classification method based on remote sensing AI |
CN116129191B (en) * | 2023-02-23 | 2024-01-26 | 维璟(北京)科技有限公司 | Multi-target intelligent identification and fine classification method based on remote sensing AI |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103678552A (en) | Remote-sensing image retrieving method and system based on salient regional features | |
CN104966085B (en) | A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features | |
CN102662949B (en) | Method and system for retrieving specified object based on multi-feature fusion | |
CN109344701A (en) | A kind of dynamic gesture identification method based on Kinect | |
CN101770578B (en) | Image characteristic extraction method | |
CN108520516A (en) | A kind of bridge pavement Crack Detection and dividing method based on semantic segmentation | |
CN103309982B (en) | A kind of Remote Sensing Image Retrieval method of view-based access control model significant point feature | |
WO2021082168A1 (en) | Method for matching specific target object in scene image | |
CN102073748A (en) | Visual keyword based remote sensing image semantic searching method | |
CN104850850A (en) | Binocular stereoscopic vision image feature extraction method combining shape and color | |
CN103514456A (en) | Image classification method and device based on compressed sensing multi-core learning | |
CN103020265B (en) | The method and system of image retrieval | |
CN102254326A (en) | Image segmentation method by using nucleus transmission | |
CN102968637A (en) | Complicated background image and character division method | |
CN104361313A (en) | Gesture recognition method based on multi-kernel learning heterogeneous feature fusion | |
US20220315243A1 (en) | Method for identification and recognition of aircraft take-off and landing runway based on pspnet network | |
CN103366178A (en) | Method and device for carrying out color classification on target image | |
CN104778476A (en) | Image classification method | |
CN103473785A (en) | Rapid multiple target segmentation method based on three-valued image clustering | |
CN103871077A (en) | Extraction method for key frame in road vehicle monitoring video | |
CN105243387A (en) | Open-pit mine typical ground object classification method based on UAV image | |
Zang et al. | Traffic lane detection using fully convolutional neural network | |
CN103324753B (en) | Based on the image search method of symbiotic sparse histogram | |
CN113034506A (en) | Remote sensing image semantic segmentation method and device, computer equipment and storage medium | |
CN106203448A (en) | A kind of scene classification method based on Nonlinear Scale Space Theory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20140326 |