CN116630790A - Classification result optimization method based on edge precision evaluation - Google Patents
- Publication number
- CN116630790A (application CN202310259581.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V20/10 — Scenes; scene-specific elements: terrestrial scenes
- G06V10/26 — Image preprocessing: segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region
- G06V10/44 — Extraction of image or video features: local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
- G06V10/764 — Recognition or understanding using pattern recognition or machine learning: classification, e.g. of video objects
- Y02T10/40 — Engine management systems
Abstract
The invention discloses a classification result optimization method based on edge precision evaluation, which specifically comprises the following steps: S1, judging the classification type of the input image, distinguishing single-classification from multi-classification, and, if it is multi-classification, extracting each class to convert it into single-classification images; S2, performing pixel statistical analysis on the single-classification images and judging the image type; S3, performing ground object edge precision evaluation using the binary image; if the image is a probability image, binarizing it first and then performing the edge precision evaluation; S4, extracting the ground object contour from the probability image value by value; if the image is a binary image, first performing object segmentation and binary-image probabilization to convert it into a probability image; S5, optimizing the classification result based on the edge precision evaluation result; S6, merging the optimization results of the multi-classification images. By analyzing the edge precision evaluation results of remote sensing ground object classification images, the method automatically optimizes ground object classification results and further improves the classification precision of ground object edges.
Description
Technical Field
The invention belongs to the technical field of remote sensing classification, and particularly relates to a classification result optimization method based on edge precision evaluation.
Background
With the development of classification technology in the remote sensing field, more and more precision evaluation methods and classification optimization algorithms have been proposed and applied. Many evaluation indexes of classification accuracy exist, such as the pixel-based Kappa coefficient and the F1 score. Classification optimization methods include traditional mathematical statistics, classical machine learning, and emerging deep learning methods. However, most existing research targets the accuracy evaluation of the overall classification result at a single scale and cannot represent the local and global features of ground object edges simultaneously. Meanwhile, existing research rarely combines precision evaluation methods with classification optimization methods. The invention therefore provides a classification result optimization method based on edge precision evaluation to meet these requirements.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a classification result optimization method based on edge precision evaluation, which solves the above-mentioned problems.
In order to achieve the above purpose, the invention is realized by the following technical scheme: a classification result optimization method based on edge precision evaluation comprises the following steps:
s1, judging the classification type of the input image, distinguishing single-classification from multi-classification, and, if it is multi-classification, extracting each class to convert it into single-classification images;
s2, performing pixel statistical analysis on the single-classification images and judging the image type;
s3, performing ground object edge precision evaluation using the binary image; if the image is a probability image, binarizing it first and then performing the ground object edge precision evaluation;
s4, extracting the ground object contour from the probability image value by value; if the image is a binary image, first performing object segmentation and binary-image probabilization to convert it into a probability image;
s5, optimizing the classification result based on the edge precision evaluation result;
and S6, merging the optimization results of the multi-classification images.
Based on the technical scheme, the invention also provides the following optional technical schemes:
the technical scheme is as follows: in the step S1:
judging the image classification type, and, if the image is a multi-classification image, extracting each class of the multi-classification image into a plurality of single-classification images;
the technical scheme is as follows: in the step S2:
classifying the single-classification image as a probability image or a binary image according to the pixel statistical analysis result;
the technical scheme is as follows: the specific steps of the S3 are as follows:
s301, if the image is a probability image, binarizing the probability image, and then evaluating the edge accuracy of the ground object;
s302, extracting a ground object outline by using a binary image to evaluate the ground object edge precision, wherein the precision evaluation formula is as follows:
wherein N represents the number of pixels of the ground object outline, T represents the sample image, and C represents the classified image;
the technical scheme is as follows: the specific steps of the S4 are as follows:
s401, extracting the ground object contour value by value from the probability image;
s402, if the image is a binary image, performing object segmentation and binary-image probabilization to convert it into a probability image;
s403, the object segmentation extracts the ground object contour and cuts the original image, finally obtaining all ground object objects in the image;
s404, binary-image probabilization is performed on the basis of S403: the ground object is scaled up or down in equal proportion with reference to the sample ground object, the scaling factor being determined by the size ratio between the ground objects of the existing sample image and the classified image. Then, taking the center of gravity of the ground object as the center (probability 0) and the ground object contour as the edge (probability 1), the pixel values inside the ground object are assigned according to the following formula:
wherein D is the distance between the current pixel and the ground object gravity center point pixel, a is the azimuth angle of the current pixel relative to the gravity center point pixel, and D is the distance between the pixel on the contour under the current azimuth angle and the gravity center point pixel;
the technical scheme is as follows: in the step S5:
the classification result is optimized based on the edge precision evaluation result. The main principle is to iteratively search for local optimal thresholds of the image classification, finally find the global optimal threshold, and then optimize the classification result according to that optimal threshold.
The technical scheme is as follows: in the step S6:
if the classified images are multi-classification images, they were converted into single-classification images in S1, so the optimization results of the single-classification images are merged after the optimization is finished.
advantageous effects
The invention provides a classification result optimization method based on edge precision evaluation, which has the following beneficial effects compared with the prior art:
1. through precision evaluation of the classification result, the invention provides a method that combines pixels and objects and optimizes the classification result by searching for an optimal threshold using the ground object edge precision evaluation result. The classification optimization method judges and converts the various classification situations, so that automatic optimization of the classification result is realized; combined with the ground object edge precision evaluation method, the classification precision of ground object edges in the classification result is further improved.
Drawings
FIG. 1 is a flow chart of the accuracy evaluation and classification result optimization of the present invention;
FIG. 2 is a schematic diagram of multi-classification image classification extraction;
FIG. 3 is a schematic diagram of probability image binarization;
FIG. 4 is a schematic diagram of the extraction of the ground object contour value by value;
FIG. 5 is a schematic diagram of binary image probability.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Specific implementations of the invention are described in detail below in connection with specific embodiments.
Referring to figs. 1 to 5, an embodiment of the present invention is provided. The invention discloses a classification result optimization method based on edge precision evaluation, which specifically comprises the following steps:
s1, judging the classification type of the input image, distinguishing single-classification from multi-classification, and, if it is multi-classification, extracting each class to convert it into single-classification images;
s2, performing pixel statistical analysis on the single-classification images and judging the image type;
s3, performing ground object edge precision evaluation using the binary image; if the image is a probability image, binarizing it first and then performing the ground object edge precision evaluation;
s4, extracting the ground object contour from the probability image value by value; if the image is a binary image, first performing object segmentation and binary-image probabilization to convert it into a probability image;
s5, optimizing the classification result based on the edge precision evaluation result;
and S6, merging the optimization results of the multi-classification images.
Specifically, as shown in fig. 2, the invention uses the GDAL library to extract the input multi-classification image in different ways. If the pixels of the multi-classification image are single values, the number of ground object categories is determined by reading each pixel value of the image; each category is then extracted according to its pixel value and saved as a binary image, in which ground object pixels are assigned the value 1 and all other pixels the value 0, completing the class-by-class extraction of the multi-classification image. If each pixel of the multi-classification image is a multi-dimensional array containing the probabilities that the pixel belongs to each ground object category, the values in the array are extracted class by class to generate a probability image for each class.
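The class-by-class extraction described above can be sketched as follows. numpy stands in for the raster I/O that the patent performs with GDAL; the function names and the choice of 0 as the background value are illustrative assumptions:

```python
import numpy as np

def split_classes(label_img):
    """Split a single-valued multi-class image into one binary image per
    ground object class: class pixels are assigned 1, all others 0."""
    masks = {}
    for cls in np.unique(label_img):
        if cls == 0:  # 0 treated as background -- an assumption, not from the patent
            continue
        masks[int(cls)] = (label_img == cls).astype(np.uint8)
    return masks

def split_probability_stack(prob_stack):
    """Split a (classes, H, W) per-class probability array into one
    probability image per class, covering the multi-dimensional-pixel case."""
    return {i: prob_stack[i] for i in range(prob_stack.shape[0])}
```

For example, `split_classes(np.array([[0, 1], [2, 1]]))` yields a binary mask for class 1 and another for class 2.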
Specifically, regarding judging the image type: according to the pixel statistical analysis result, if the number of distinct pixel values in the sample image differs from the number of distinct pixel values in the classification result image, the result is judged to be a probability image; otherwise it is a binary image.
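The type judgment rule above can be sketched directly (the function name is illustrative):

```python
import numpy as np

def judge_image_type(sample_img, result_img):
    """Judge whether the classification result is a binary or a probability
    image by comparing the number of distinct pixel values against the
    sample image, per the rule stated above."""
    n_sample = np.unique(sample_img).size
    n_result = np.unique(result_img).size
    return "binary" if n_sample == n_result else "probability"
```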
Specifically, as shown in fig. 3, if the image is a probability image during the precision evaluation, the probability image is binarized, with the binarization threshold selected as the ground object contour pixel value, and the ground object edge precision evaluation is then performed. The ground object contour is extracted from the binary image to evaluate the ground object edge precision, and the precision evaluation formula is as follows:
wherein N represents the number of pixels of the ground object outline, T represents the sample image, and C represents the classified image.
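The patent's precision formula itself is not reproduced in this text, so the sketch below substitutes a plausible stand-in: the fraction of sample contour pixels (T) that coincide with classified contour pixels (C), normalized by the contour pixel count N. Both this measure and the 4-neighbourhood contour definition are assumptions, not the patent's formula:

```python
import numpy as np

def contour_pixels(binary_img):
    """4-neighbourhood contour: foreground pixels with at least one
    background neighbour (an assumed contour definition)."""
    img = np.pad(binary_img, 1)
    core = img[1:-1, 1:-1]
    nb_min = np.minimum.reduce([img[:-2, 1:-1], img[2:, 1:-1],
                                img[1:-1, :-2], img[1:-1, 2:]])
    return (core == 1) & (nb_min == 0)

def edge_precision(sample_bin, result_bin):
    """Hedged stand-in for the edge-precision score: overlap ratio between
    the sample contour T and the classified contour C."""
    t = contour_pixels(sample_bin)
    c = contour_pixels(result_bin)
    n = np.count_nonzero(t)  # N: number of contour pixels in the sample
    return np.count_nonzero(t & c) / n if n else 0.0
```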
Specifically, as shown in fig. 4, the ground object contours are extracted from the probability image value by value: threshold values are selected over the probability image, and the ground object contour is extracted at each selected threshold.
If the image is a binary image, object segmentation and binary-image probabilization are performed first to convert it into a probability image, and the ground object contours are then extracted from the probability image value by value, as shown in fig. 5. The object segmentation extracts the ground object contour and cuts the original image, finally obtaining all ground object objects in the image. The binary-image probabilization is performed on the basis of the object segmentation: the ground objects are scaled up or down in equal proportion with reference to the sample image, the scaling factor being determined by the size ratio between the ground objects of the existing sample image and the classified image. Then, taking the center of gravity of the ground object as the center (probability 0) and the ground object contour as the edge (probability 1), the pixel values inside the ground object are assigned according to the following formula:
wherein D is the distance between the current pixel and the ground object gravity center point pixel, a is the azimuth angle of the current pixel relative to the gravity center point pixel, and D is the distance between the pixel on the contour under the current azimuth angle and the gravity center point pixel.
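The assignment formula is likewise not reproduced in this text. The sketch below assumes P = D/d, which matches the stated boundary conditions (probability 0 at the center of gravity, probability 1 on the contour) and the variable definitions above; the azimuth binning and the handling of empty bins are illustrative assumptions:

```python
import numpy as np

def _contour(binary_img):
    """Foreground pixels with a 4-neighbourhood background neighbour."""
    img = np.pad(binary_img, 1)
    core = img[1:-1, 1:-1]
    nb = np.minimum.reduce([img[:-2, 1:-1], img[2:, 1:-1],
                            img[1:-1, :-2], img[1:-1, 2:]])
    return (core == 1) & (nb == 0)

def probabilize(binary_img, n_bins=360):
    """Assign each foreground pixel the probability P = D/d (assumed form),
    where D is the pixel's distance from the center of gravity and d the
    contour distance at the pixel's azimuth."""
    ys, xs = np.nonzero(binary_img)
    cy, cx = ys.mean(), xs.mean()                  # center of gravity
    cys, cxs = np.nonzero(_contour(binary_img))
    ang = np.arctan2(cys - cy, cxs - cx)
    dist = np.hypot(cys - cy, cxs - cx)
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    d = np.zeros(n_bins)
    np.maximum.at(d, bins, dist)                   # farthest contour point per azimuth bin
    fill = dist.mean() if dist.size else 0.0       # fill empty bins (assumption)
    d[d == 0] = fill if fill > 0 else 1.0
    pa = ((np.arctan2(ys - cy, xs - cx) + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    D = np.hypot(ys - cy, xs - cx)
    prob = np.zeros(binary_img.shape, dtype=float)
    prob[ys, xs] = np.clip(D / d[pa], 0.0, 1.0)    # P = D/d
    return prob
```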
Specifically, regarding the optimization of the classification result based on the edge precision evaluation result: the edge precision is calculated for the ground object contour pixels extracted value by value in the previous step. By combining pixel-based precision evaluation with object-based ground object segmentation, the edge precision results obtained at each step are compared to find local optimal thresholds, and traversing the complete set of ground object contours yields the global optimal threshold. The classification result is then extracted according to this optimal threshold, realizing the optimization of the classification result.
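The iterative threshold search can be sketched as follows; the candidate-threshold grid (the distinct probability values of the image) and the function names are assumptions, and any edge-precision measure can be passed in as `precision_fn`:

```python
import numpy as np

def optimize_threshold(prob_img, sample_bin, precision_fn, thresholds=None):
    """Evaluate the edge precision of the binarized result at each candidate
    contour value and keep the best, i.e. compare local results to reach the
    global optimal threshold as described above."""
    if thresholds is None:
        thresholds = np.unique(prob_img)       # contour values present in the image
    best_t, best_p = None, -1.0
    for t in thresholds:                       # local comparisons ...
        result = (prob_img >= t).astype(np.uint8)
        p = precision_fn(sample_bin, result)
        if p > best_p:                         # ... yield the global optimum
            best_t, best_p = float(t), p
    return best_t, best_p
```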
Specifically, regarding the merging of optimization results when the classification type is multi-classification: the multi-classification image was split by the class-by-class extraction in the first step, and the previous step produced a classification optimization result for each class, so the per-class optimization results need to be merged. The merging either stores the probability that each pixel belongs to each class as a probability image, or stores the class with the highest probability for each pixel as a multi-value image.
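The two merging modes described above can be sketched as follows (names are illustrative; `prob_maps` maps class id to an (H, W) probability image):

```python
import numpy as np

def merge_single_class_results(prob_maps):
    """Merge per-class optimization results: return both the probability-image
    form (full per-class stack) and the multi-value-image form (per-pixel
    class of highest probability)."""
    classes = sorted(prob_maps)
    stack = np.stack([prob_maps[c] for c in classes])    # probability-image form
    label = np.array(classes)[np.argmax(stack, axis=0)]  # multi-value-image form
    return stack, label
```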
The invention judges the classification type and the image type and executes different operation flows aiming at different classification types and image types. The precision evaluation and the classification result optimization are carried out on the feature edges of the classified images, and the precision evaluation method is combined with the classification result optimization method, so that the classification precision of the feature edges is improved, and the automatic edge precision evaluation and classification result optimization are realized.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (7)
1. The classification result optimization method based on the edge precision evaluation is characterized by comprising the following steps of:
s1, judging the classification type of the input image, distinguishing single-classification from multi-classification, and, if it is multi-classification, extracting each class to convert it into single-classification images;
s2, performing pixel statistical analysis on the single-classification images and judging the image type;
s3, performing ground object edge precision evaluation using the binary image; if the image is a probability image, binarizing it first and then performing the ground object edge precision evaluation;
s4, extracting the ground object contour from the probability image value by value; if the image is a binary image, first performing object segmentation and binary-image probabilization to convert it into a probability image;
s5, optimizing the classification result based on the edge precision evaluation result;
and S6, merging the optimization results of the multi-classification images.
2. The classification result optimization method based on edge precision evaluation according to claim 1, wherein the specific operation steps of S1 are as follows: input the image and judge the image classification type; if the image is a multi-classification image, extract each class of the multi-classification image into a plurality of single-classification images.
3. The classification result optimization method based on edge accuracy evaluation according to claim 1, wherein the specific step of S2 includes: and classifying the single-classification images into probability images or binary images by using pixel statistical analysis results.
4. The classification result optimization method based on edge accuracy evaluation according to claim 1, wherein the specific operation steps of S3 are as follows:
s301, if the image is a probability image, binarizing the probability image, and then evaluating the edge accuracy of the ground object;
s302, extracting a ground object outline by using a binary image to evaluate the ground object edge precision, wherein the precision evaluation formula is as follows:
wherein N represents the number of pixels of the ground object outline, T represents the sample image, and C represents the classified image.
5. The classification result optimization method based on edge accuracy evaluation according to claim 1, wherein the specific operation steps of S4 are as follows:
s401, extract the ground object contour value by value from the probability image;
s402, if the image is a binary image, perform object segmentation and binary-image probabilization to convert it into a probability image;
s403, the object segmentation extracts the ground object contour and cuts the original image, finally obtaining all ground object objects in the image;
s404, binary-image probabilization is performed on the basis of S403: the ground object is scaled up or down in equal proportion with reference to the sample ground object, the scaling factor being determined by the size ratio between the ground objects of the existing sample image and the classified image. Then, taking the center of gravity of the ground object as the center (probability 0) and the ground object contour as the edge (probability 1), the pixel values inside the ground object are assigned according to the following formula:
wherein D is the distance between the current pixel and the ground object gravity center point pixel, a is the azimuth angle of the current pixel relative to the gravity center point pixel, and D is the distance between the pixel on the contour under the current azimuth angle and the gravity center point pixel.
6. The classification result optimization method based on edge precision evaluation according to claim 1, wherein S5 optimizes the classification result based on the edge precision evaluation result, the main principle being to iteratively search for local optimal thresholds of the image classification, finally find the global optimal threshold, and then optimize the classification result according to that optimal threshold.
7. The method according to claim 1, wherein S6 is that if the classified images are multi-classified images, they are converted into single-classified images in S1, so that the optimization results of the single-classified images need to be combined after the optimization is completed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310259581.XA CN116630790B (en) | 2023-03-17 | 2023-03-17 | Classification result optimization method based on edge precision evaluation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116630790A true CN116630790A (en) | 2023-08-22 |
CN116630790B CN116630790B (en) | 2024-05-24 |
Family
ID=87625406
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310259581.XA Active CN116630790B (en) | 2023-03-17 | 2023-03-17 | Classification result optimization method based on edge precision evaluation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116630790B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105825217A (en) * | 2016-03-22 | 2016-08-03 | 辽宁师范大学 | Hyperspectral image interested area automatic extraction method based on active contour model |
US20170098310A1 (en) * | 2014-06-30 | 2017-04-06 | Ventana Medical Systems, Inc. | Edge-based local adaptive thresholding system and methods for foreground detection |
CN107657262A (en) * | 2016-12-30 | 2018-02-02 | 航天星图科技(北京)有限公司 | A kind of computer automatic sorting Accuracy Assessment |
CN108629287A (en) * | 2018-04-09 | 2018-10-09 | 华南农业大学 | A kind of remote sensing image terrain classification method |
CN109101894A (en) * | 2018-07-19 | 2018-12-28 | 山东科技大学 | A kind of remote sensing image clouds shadow detection method that ground surface type data are supported |
CN110570427A (en) * | 2019-07-19 | 2019-12-13 | 武汉珈和科技有限公司 | Remote sensing image semantic segmentation method and device fusing edge detection |
CN113658163A (en) * | 2021-08-24 | 2021-11-16 | 王程 | High-resolution SAR image segmentation method for improving FCM through multistage cooperation |
CN114511021A (en) * | 2022-01-27 | 2022-05-17 | 浙江树人学院(浙江树人大学) | Extreme learning machine classification algorithm based on improved crow search algorithm |
CN114926748A (en) * | 2022-06-15 | 2022-08-19 | 安徽理工大学 | Soybean remote sensing identification method combining Sentinel-1/2 microwave and optical multispectral images |
Non-Patent Citations (3)
Title |
---|
DONGCAI CHENG ET AL.: "FusionNet: Edge Aware Deep Convolutional Networks for Semantic Segmentation of Remote Sensing Harbor Images", 《IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING》, 20 September 2017 (2017-09-20), pages 5769 - 5783 * |
ZHU CHENGJIE ET AL.: "Object-oriented accuracy assessment method for high-resolution remote sensing image segmentation", 《HIGH POWER LASER AND PARTICLE BEAMS》, 15 June 2015 (2015-06-15), pages 43 - 49 *
GU ZHENGNAN, ZHANG ZHEN: "Analysis of normalized difference vegetation index change and influencing factors in Anhui Province based on structural equation model", 《SCIENCE TECHNOLOGY AND ENGINEERING》, 8 October 2022 (2022-10-08), pages 12259 - 12267 *
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |