CN107909085A - Harris operator-based image feature corner extraction method - Google Patents

Harris operator-based image feature corner extraction method

Info

Publication number
CN107909085A
Authority
CN
China
Prior art keywords
image
corner
characteristic
value
points
Prior art date
Legal status
Pending
Application number
CN201711251542.6A
Other languages
Chinese (zh)
Inventor
鲁剑锋
陈洁柱
Current Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN201711251542.6A
Publication of CN107909085A
Current legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components, by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]


Abstract

The present invention relates to the field of image stitching and recognition. Two improvements are proposed on the basis of the corner feature extraction method of the original Harris operator. 1. If the absolute differences between the abscissas and between the ordinates of two neighboring corner points are both less than a set threshold, a new corner point is generated from them accordingly. By reducing the number of feature corner points in this way, false corners are removed and the image matching speed is improved. 2. By averaging the gray-level gradients within a set neighborhood of each image pixel, the amount of corner information contained in the gray-level gradient of the image is reduced, which speeds up corner recognition.

Description

Harris operator-based image feature corner extraction method
Technical Field
The invention relates to the technical field of image stitching and fusion, and in particular to an improved Harris operator corner feature extraction algorithm.
Background
The image stitching and fusion technology stitches a number of overlapping images (which may be acquired at different times, from different viewing angles or by different sensors) into a large seamless high-resolution image. Image stitching proceeds through the steps of image acquisition, image processing, image feature processing, image matching, model building and image fusion, finally producing the stitched panorama. Image stitching integrates several disciplines, touches many fields, is used very widely in everyday life, and therefore has very important practical significance.
The human eye usually identifies corner points within a small local area or window. If the gray level of the area inside the window changes greatly when the small window is moved in every direction, a corner point is considered to lie inside the window; if the gray level of the image inside the window does not change when the window is moved in any direction of the image, there is no corner point in the window; if the gray level changes greatly when the window moves in one direction but hardly at all in the other directions, the content of the window may be a straight line segment.
The Harris operator, proposed in 1988, simulates this human observation process with a window function. It replaces the binary window function with a Gaussian function, giving the pixels closer to the window center larger weights so as to reduce the influence of noise.
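As an illustration of this weighting, a minimal sketch of a normalized Gaussian window is given below; the 5×5 size and σ = 1 are assumed values, not taken from the patent.

```python
import numpy as np

def gaussian_window(size=5, sigma=1.0):
    """Normalized 2-D Gaussian weighting window.

    Pixels near the window center get larger weights, which is the weighting
    the Harris operator uses instead of a binary (all-ones) window.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    w = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

print(gaussian_window(5, 1.0).round(3))
```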
The Harris algorithm has four shortcomings when used for detection: (1) the algorithm is not scale invariant; (2) the extracted corners are only pixel-accurate; (3) the detection time is unsatisfactory; (4) false corners appear.
Disclosure of Invention
The invention aims to overcome the defects of the existing Harris operator corner feature extraction technology and provides a new improved Harris operator algorithm addressing the extraction efficiency and speed of feature corners and the removal of false corners.
A method for extracting the feature corner points of an image based on the Harris operator is provided, comprising the following steps:
judging whether two feature corner points of the image need to be merged according to whether the absolute values of the differences between their abscissas and between their ordinates are smaller than a merging threshold;
and if the absolute values of the differences between the abscissas and between the ordinates of the two feature corner points are smaller than the set threshold, merging the two feature corner points.
In some embodiments, whether two feature corner points need to be merged is determined according to whether the absolute values of the differences between their abscissas and between their ordinates are smaller than a merging threshold. The judgment formula is:
|μ_i − μ_j| < ε, |θ_i − θ_j| < ε,
where μ_i, μ_j and θ_i, θ_j are respectively the abscissa and ordinate components of the two feature corner points extracted by the Harris operator, ε is the merging threshold, and the subscripts i and j index the two corner points.
In some embodiments, if the absolute values of the differences between the abscissas and between the ordinates of the two feature corner points are smaller than the set threshold, the two feature corner points are merged and the coordinates of the new feature corner point of the improved Harris operator are obtained, where μ_i, μ_j and θ_i, θ_j are the abscissa and ordinate components of the feature corner points obtained by the conventional Harris operator, and μ_m, θ_n are the abscissa and ordinate of the newly generated feature corner point.
In some embodiments, the method further comprises the following step: stitching and fusing the images according to the new feature corner coordinates.
In some embodiments, before determining whether two feature corner points of an image need to be merged according to whether absolute values of differences between abscissas and ordinates corresponding to the two feature corner points are smaller than a merging threshold, the method further includes the following steps:
calculating the mean of the gray-level gradient of the image pixels in the set neighborhood of each pixel and setting the obtained mean as the gray value of the current pixel, where I_i denotes the image gray value along the abscissa and I_j denotes the image gray value along the ordinate.
In some embodiments, the set neighborhood of each pixel is to the right of or below the pixel.
The invention has the following beneficial effects: the Harris operator-based image feature corner extraction method effectively reduces the amount of calculation in the feature corner extraction process, speeds up the operation, avoids extracting corner features from every pixel of the whole image, and reduces the number of wrong feature corner points.
Drawings
Fig. 1 is a schematic diagram of an ellipse equation in the prior art.
Fig. 2 is a flowchart of an algorithm provided in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to fig. 1 and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention.
The existing Harris operator corner feature extraction algorithm comprises the following steps:
For an image I(x, y), the self-similarity after a translation (Δx, Δy) at the point (x, y) can be described by the autocorrelation function
c(x, y; Δx, Δy) = Σ_{(u,v)∈W(x,y)} w(u, v)·(I(u, v) − I(u + Δx, v + Δy))²,
where W(x, y) is a window centered at the point (x, y) and w(u, v) is a weighting function, which may be a constant or a Gaussian; in this description a Gaussian weighting function is used throughout.
According to the Taylor expansion, a first-order approximation of the image I(x, y) after the translation (Δx, Δy) is
I(u + Δx, v + Δy) = I(u, v) + I_x(u, v)Δx + I_y(u, v)Δy + O(Δx², Δy²) ≈ I(u, v) + I_x(u, v)Δx + I_y(u, v)Δy,
where I_x and I_y are the partial derivatives of the image I(x, y). The autocorrelation function can therefore be simplified to
c(x, y; Δx, Δy) ≈ Σ_{(u,v)∈W(x,y)} w(u, v)·(I_x(u, v)Δx + I_y(u, v)Δy)² = [Δx, Δy]·M(x, y)·[Δx, Δy]^T,
where
M(x, y) = Σ_{(u,v)∈W(x,y)} w(u, v)·[[I_x², I_x·I_y], [I_x·I_y, I_y²]] = [[A, B], [B, C]].
That is, the autocorrelation function of the image I(x, y) after the translation (Δx, Δy) at the point (x, y) can be approximated by the quadratic function
c(x, y; Δx, Δy) ≈ AΔx² + 2BΔxΔy + CΔy².
the quadratic term function is essentially an elliptic function. The ellipticity and size of the ellipse are determined by the eigenvalues λ 1, λ 2 of M (x, y), and the direction of the ellipse is determined by the eigenvector of M (x, y), as shown in fig. 1, which is a schematic diagram of the manner of an ellipse, and its specific elliptical equation is:
the relationship between the feature values of the elliptic function and the corners, lines (edges) and planes in the image can be divided into three cases:
1. A straight line (edge) in the image: one eigenvalue is large and the other is small, λ1 >> λ2 or λ2 >> λ1. The autocorrelation function is large in one direction and small in the other.
2. A flat region in the image: both eigenvalues are small and approximately equal; the autocorrelation function is small in all directions.
3. A corner point in the image: both eigenvalues are large and approximately equal, and the autocorrelation function increases in all directions.
The eigenvalues of the matrix M(x, y) could be computed from the eigenvalue formula of the quadratic function. However, the corner detection method provided by Harris does not need to calculate the specific eigenvalues; instead it calculates a corner response value R to determine whether a point is a corner. The formula for R is:
R = detM − α(traceM)²
where detM is the determinant of the matrix M, traceM is the trace of the matrix M, and α is a constant with a typical value range of 0.04–0.06. In fact, the eigenvalues are implicit in detM and traceM, because:
detM = λ1·λ2 = AC − B²
traceM = λ1 + λ2 = A + C
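These identities can be checked numerically for any symmetric matrix M = [[A, B], [B, C]]; the values below are illustrative only.

```python
import numpy as np

# Illustrative values for a symmetric matrix M = [[A, B], [B, C]].
A, B, C = 4.0, 1.0, 2.0
M = np.array([[A, B], [B, C]])
lam1, lam2 = np.linalg.eigvalsh(M)
print(np.isclose(lam1 * lam2, A * C - B * B))  # detM   = λ1·λ2 = AC − B²
print(np.isclose(lam1 + lam2, A + C))          # traceM = λ1 + λ2 = A + C
```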
the general implementation process of the corner feature extraction technology of the Harris operator is roughly divided into four steps:
the gradient of the image I (X, Y) in both X and Y directions, ix, iy, is calculated.
The product of the gradients in the two directions of the image is calculated.I xy =I x I y
Using a Gaussian function pair, andand I xy Gaussian weighting (assuming σ = 1) was performed to generate elements of matrix M, A, B, and C.
Calculate the Harris response value for each pixel R = detM-alpha (traceM) 2 And marking the pixel points with the R value larger than a certain set threshold value as zero points.
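These four steps can be sketched with NumPy and SciPy as follows. This is a minimal reference implementation of the conventional procedure under assumed choices (Sobel gradients, σ = 1, a relative threshold), not the patent's own code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(image, sigma=1.0, alpha=0.05):
    """Conventional Harris response R = detM - alpha * (traceM)^2."""
    img = image.astype(float)
    # Step 1: gradients Ix, Iy in the X and Y directions (Sobel is one choice).
    Ix = sobel(img, axis=1)
    Iy = sobel(img, axis=0)
    # Step 2: products of the gradients.
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    # Step 3: Gaussian weighting gives the elements A, B, C of the matrix M.
    A = gaussian_filter(Ixx, sigma)
    B = gaussian_filter(Ixy, sigma)
    C = gaussian_filter(Iyy, sigma)
    # Step 4: R = detM - alpha*(traceM)^2 with detM = AC - B^2, traceM = A + C.
    return A * C - B * B - alpha * (A + C) ** 2

def harris_corners(image, rel_threshold=0.01):
    """Return (row, col) positions whose response exceeds a relative threshold."""
    R = harris_response(image)
    return np.argwhere(R > rel_threshold * R.max())
```

In this sketch a corner candidate is any pixel above a fraction of the maximum response; the thresholding and the later suppression or merging steps refine this further.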
In the conventional implementation, false corners are easily introduced, or so many feature corners are extracted that densely clustered (adhered) corners appear, which affects the processing speed, the image matching speed and the matching accuracy. To solve the problem of corners clustering densely together, non-maximum suppression is used to remove some of the clustered corners: within a certain neighborhood of each corner, all the corner response values R in that neighborhood are compared, the point with the maximum response value R is kept as the corner of the neighborhood, and the other corners are discarded; depending on the number of corners remaining after this processing, a second round of non-maximum suppression can be carried out to further reduce the number of corners. The method mainly comprises the following steps:
step one, pre-screening pictures, specifically comprising the following steps:
step 1-1, selecting a 3*3 area as a candidate corner screening module, and calculating an absolute value delta t of a difference between a central point pixel gray value and a peripheral point pixel gray value;
step 1-2, selecting a similarity threshold t, wherein the selection value of t is 10% -15% of the maximum value of the pixel gray level;
step 1-3, comparing the delta t value with the t value, and if the delta t value is smaller than the t value, judging that the central point is similar to the peripheral points;
step 1-4, regarding the similarity m (the number of the central points and similar peripheral points) of the central points and the peripheral stores as an alternative angular point when m is 2-6;
step two: selecting a scale space kernel to perform scale transformation, and specifically subdividing the kernel into:
step 2-1, selecting a Gaussian kernel as a scale transformation kernel, and selecting a scale transformation kernel model as follows:
step 2-2, combining the Harris operator with a scale space and adopting a formulaObtaining a Harris scale expression;
step 2-3, calculating an autocorrelation matrix M with scale change, wherein the M matrix obtained by calculation is as follows:
where Lu (x, σ D) and Lv (x, σ D) represent derivatives of L (x, σ) in the x and y directions, respectively, σ 1= σ n is a scale parameter selected when calculating the feature point, and σ D = s σ n is a differential scale.
Step three: select a suitable corner response function. R = det(M)/(trace(M) + ε) is chosen as the corner response function, which avoids the error caused by manually selecting the k value in the traditional Harris corner response function;
step four: and carrying out secondary non-maximum suppression to search angular points, carrying out non-maximum suppression on the candidate angular points obtained in the step one, selecting a 10 × 10 template area, calculating the maximum in the template area, wherein the maximum point is the central point of the template area, the maximum is the candidate angular points, carrying out secondary non-maximum suppression on the basis, obtaining large local response maximum points after two times of non-maximum suppression, and considering the maximum points as the candidate angular points. And simultaneously, the selection of the threshold value during the calculation of the corner response function is avoided.
Although this method solves, to a certain extent, the problems of excessive corners, clustered corners and false corner removal, it suffers from a large amount of data processing, repeated maximum suppression and low operating efficiency.
In order to solve these problems of the existing algorithms, the invention adopts the following technical scheme. 1. If the absolute differences between the abscissas and between the ordinates of two neighboring corner points are both less than a set threshold (|μ_i − μ_j| < ε and |θ_i − θ_j| < ε), a new corner point is generated from the pair by the corresponding formula. By reducing the number of feature corner points in this way, false corners are removed and the image matching speed is improved. 2. By averaging the gray-level gradients within a set neighborhood of each image pixel, the amount of corner information contained in the gray-level gradient of the image is reduced, which speeds up corner recognition.
Referring to fig. 2, the invention provides an improved corner feature extraction algorithm for the Harris operator. The algorithm is based on the conventional Harris operator corner feature extraction process, with two improvements aimed at the problems of the existing process. The feature corner extraction process of the invention is as follows:
calculating the gradient of the image I (X, Y) in both X and Y directions, I x 、I y
The obtained gray-level gradients of the image pixels are averaged over a set neighborhood of each pixel, and the resulting mean is set as the gray value of the current pixel, starting from i = 1, j = 1. The set neighborhood of each pixel lies to the right of or below the pixel; I_i denotes the image gray value along the abscissa, I_j the image gray value along the ordinate, N the averaging coefficient along the abscissa, and M the averaging coefficient along the ordinate.
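Since the published text does not reproduce the averaging formula, the sketch below simply averages each gradient map over an N×M neighborhood extending to the right of and below each pixel, which is one plausible reading of this step; the neighborhood size of 3×3 is an assumed value.

```python
import numpy as np

def mean_gradient(grad, n=3, m=3):
    """Replace each gradient value by the mean over an n x m neighborhood that
    starts at the pixel and extends to the right and downwards (one reading of
    the 'set neighborhood'); n and m play the role of the averaging coefficients."""
    g = np.asarray(grad, dtype=float)
    # Pad the bottom/right edges so the window always fits inside the array.
    padded = np.pad(g, ((0, n - 1), (0, m - 1)), mode="edge")
    out = np.zeros_like(g)
    for di in range(n):
        for dj in range(m):
            out += padded[di:di + g.shape[0], dj:dj + g.shape[1]]
    return out / (n * m)
```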
A Gaussian function (for example σ = 1) is then used to weight the averaged I_x², I_y² and I_x·I_y, generating the elements A, B and C of the matrix M.
R = detM − α(traceM)² is calculated for each pixel, and the pixels whose R value is larger than a set threshold are marked as candidate corner points.
Whether two feature corner points of the image need to be merged is then judged according to whether the absolute values of the differences between their abscissas and between their ordinates are smaller than a merging threshold. The judgment formula is |μ_i − μ_j| < ε and |θ_i − θ_j| < ε, where μ_i, μ_j and θ_i, θ_j are respectively the abscissa and ordinate components of the two feature corner points extracted by the Harris operator, ε is the merging threshold, and the subscripts i and j index the two corner points.
For two corner points that satisfy the threshold condition, the coordinates of a new corner point are generated from the coordinates of the two corner points by the set formula, where μ_i, μ_j and θ_i, θ_j are the abscissa and ordinate components of the feature corner points obtained by the conventional Harris operator, and μ_m, θ_n are the abscissa and ordinate of the newly generated feature corner point.
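The exact coordinate formula for the merged corner is not reproduced in the published text, so the sketch below assumes the new corner is the midpoint of the pair; the merging test itself follows the judgment formula above, and the threshold value is illustrative only.

```python
def merge_close_corners(corners, eps=3.0):
    """Merge corner pairs whose abscissa and ordinate differences are both
    smaller than eps.  The merged coordinate is assumed to be the midpoint of
    the pair (the published text does not reproduce the exact formula)."""
    corners = [tuple(map(float, c)) for c in corners]
    used = [False] * len(corners)
    merged = []
    for a in range(len(corners)):
        if used[a]:
            continue
        mu_i, theta_i = corners[a]
        for b in range(a + 1, len(corners)):
            if used[b]:
                continue
            mu_j, theta_j = corners[b]
            if abs(mu_i - mu_j) < eps and abs(theta_i - theta_j) < eps:
                # Judgment formula satisfied: collapse the pair into one point.
                mu_i, theta_i = (mu_i + mu_j) / 2.0, (theta_i + theta_j) / 2.0
                used[b] = True
        used[a] = True
        merged.append((mu_i, theta_i))
    return merged

# Example: two corners 1-2 pixels apart collapse into a single new corner.
print(merge_close_corners([(10, 10), (11, 12), (50, 50)], eps=3.0))
```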
Finally, the image stitching and fusion work is completed according to the new corner coordinates, and the original corner coordinates are discarded.
The invention has the following beneficial effects: averaging the gray-level gradients of the image pixels over the set neighborhood of each pixel reduces the amount of corner information in the image and speeds up the feature corner extraction of the subsequent steps; removing false corners and densely clustered corners reduces the number of corners, improves the corner extraction accuracy and speeds up image stitching and fusion.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.

Claims (6)

1. An image feature corner point extraction method based on the Harris operator, characterized by comprising the following steps:
judging whether two feature corner points of the image need to be merged according to whether the absolute values of the differences between their abscissas and between their ordinates are smaller than a merging threshold;
and if the absolute values of the differences between the abscissas and between the ordinates of the two feature corner points are smaller than the set threshold, merging the two feature corner points.
2. The Harris operator-based image feature corner extraction method according to claim 1, wherein whether two feature corner points need to be merged is determined according to whether the absolute values of the differences between their abscissas and between their ordinates are smaller than a merging threshold, the judgment formula being:
|μ_i − μ_j| < ε, |θ_i − θ_j| < ε,
where μ_i, μ_j and θ_i, θ_j are respectively the abscissa and ordinate components of the two feature corner points extracted by the Harris operator, ε is the merging threshold, and the subscripts i and j index the two corner points.
3. The Harris operator-based image feature corner extraction method according to claim 2, wherein if the absolute values of the differences between the abscissas and between the ordinates of two feature corner points are smaller than the set threshold, the two feature corner points are merged and the coordinates of the new feature corner point of the improved Harris operator are obtained, where μ_i, μ_j and θ_i, θ_j are the abscissa and ordinate components of the feature corner points obtained by the conventional Harris operator, and μ_m, θ_n are the abscissa and ordinate of the newly generated feature corner point.
4. The Harris operator-based image feature corner extraction method according to claim 3, further comprising the steps of:
stitching and fusing the images according to the new feature corner coordinates.
5. The Harris operator-based image feature corner extraction method of claim 4, wherein before judging whether two feature corner points of an image need to be merged according to whether the absolute values of the differences between their abscissas and between their ordinates are smaller than a merging threshold, the method further comprises the following step:
calculating the mean of the gray-level gradient of the image pixels in the set neighborhood of each pixel and setting the obtained mean as the gray value of the current pixel, where I_i denotes the image gray value at abscissa i and I_j denotes the image gray value at ordinate j.
6. The Harris operator-based image feature corner extraction method of claim 5, wherein the set neighborhood of each pixel is to the right of or below the pixel.
CN201711251542.6A 2017-12-01 2017-12-01 Harris operator-based image feature corner extraction method Pending CN107909085A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711251542.6A CN107909085A (en) 2017-12-01 2017-12-01 Harris operator-based image feature corner extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711251542.6A CN107909085A (en) 2017-12-01 2017-12-01 Harris operator-based image feature corner extraction method

Publications (1)

Publication Number Publication Date
CN107909085A true CN107909085A (en) 2018-04-13

Family

ID=61848220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711251542.6A Pending CN107909085A (en) 2017-12-01 2017-12-01 Harris operator-based image feature corner extraction method

Country Status (1)

Country Link
CN (1) CN107909085A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214380A (en) * 2018-09-12 2019-01-15 湖北民族学院 License plate sloped correcting method
CN109445455A (en) * 2018-09-21 2019-03-08 深圳供电局有限公司 A kind of unmanned vehicle independent landing method and its control system
CN111444948A (en) * 2020-03-21 2020-07-24 哈尔滨工程大学 Image feature extraction and matching method
CN113609943A (en) * 2021-07-27 2021-11-05 东风汽车有限公司东风日产乘用车公司 Finger vein recognition method, electronic device and storage medium
CN113830136A (en) * 2021-10-20 2021-12-24 哈尔滨市科佳通用机电股份有限公司 Method for identifying malposition fault of angle cock handle of railway wagon

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101261115A (en) * 2008-04-24 2008-09-10 吉林大学 Spatial circular geometric parameter binocular stereo vision measurement method
CN100561503C (en) * 2007-12-28 2009-11-18 北京中星微电子有限公司 Method and device for locating and tracking the eye corners and mouth corners of a human face
CN101799939A (en) * 2010-04-02 2010-08-11 天津大学 Rapid and self-adaptive generation algorithm of intermediate viewpoint based on left and right viewpoint images
CN103345755A (en) * 2013-07-11 2013-10-09 北京理工大学 Chessboard angular point sub-pixel extraction method based on Harris operator
CN103400359A (en) * 2013-08-07 2013-11-20 中国科学院长春光学精密机械与物理研究所 Real-time color image filtering method based on nonlocal domain transformation
CN105023265A (en) * 2014-04-29 2015-11-04 东北大学 Checkerboard angular point automatic detection method under fish-eye lens

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100561503C (en) * 2007-12-28 2009-11-18 北京中星微电子有限公司 Method and device for locating and tracking the eye corners and mouth corners of a human face
CN101261115A (en) * 2008-04-24 2008-09-10 吉林大学 Spatial circular geometric parameter binocular stereo vision measurement method
CN101799939A (en) * 2010-04-02 2010-08-11 天津大学 Rapid and self-adaptive generation algorithm of intermediate viewpoint based on left and right viewpoint images
CN103345755A (en) * 2013-07-11 2013-10-09 北京理工大学 Chessboard angular point sub-pixel extraction method based on Harris operator
CN103400359A (en) * 2013-08-07 2013-11-20 中国科学院长春光学精密机械与物理研究所 Real-time color image filtering method based on nonlocal domain transformation
CN105023265A (en) * 2014-04-29 2015-11-04 东北大学 Checkerboard angular point automatic detection method under fish-eye lens

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李军 et al., "一种改进图像拼接算法的仿真研究" [Simulation research on an improved image stitching algorithm], 《计算机仿真》 [Computer Simulation] *
赵小川, 《MATLAB图像处理-程序实现与模块化仿真》 [MATLAB Image Processing: Program Implementation and Modular Simulation], 31 January 2014 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214380A (en) * 2018-09-12 2019-01-15 湖北民族学院 License plate sloped correcting method
CN109214380B (en) * 2018-09-12 2021-10-01 湖北民族学院 License plate inclination correction method
CN109445455A (en) * 2018-09-21 2019-03-08 深圳供电局有限公司 A kind of unmanned vehicle independent landing method and its control system
CN111444948A (en) * 2020-03-21 2020-07-24 哈尔滨工程大学 Image feature extraction and matching method
CN113609943A (en) * 2021-07-27 2021-11-05 东风汽车有限公司东风日产乘用车公司 Finger vein recognition method, electronic device and storage medium
CN113609943B (en) * 2021-07-27 2024-05-17 东风汽车有限公司东风日产乘用车公司 Finger vein recognition method, electronic device and storage medium
CN113830136A (en) * 2021-10-20 2021-12-24 哈尔滨市科佳通用机电股份有限公司 Method for identifying malposition fault of angle cock handle of railway wagon
CN113830136B (en) * 2021-10-20 2022-04-19 哈尔滨市科佳通用机电股份有限公司 Method for identifying malposition fault of angle cock handle of railway wagon

Similar Documents

Publication Publication Date Title
CN107909085A (en) Harris operator-based image feature corner extraction method
JP7113657B2 (en) Information processing device, information processing method, and program
CN109410207B (en) NCC (non-return control) feature-based unmanned aerial vehicle line inspection image transmission line detection method
WO2017219391A1 (en) Face recognition system based on three-dimensional data
WO2019007004A1 (en) Image feature extraction method for person re-identification
CN108629343B (en) License plate positioning method and system based on edge detection and improved Harris corner detection
Qu et al. Research on image segmentation based on the improved Otsu algorithm
CN107066969A (en) A kind of face identification method
CN110751154B (en) Complex environment multi-shape text detection method based on pixel-level segmentation
US10249046B2 (en) Method and apparatus for object tracking and segmentation via background tracking
CN105809651A (en) Image saliency detection method based on edge non-similarity comparison
US9418446B2 (en) Method and apparatus for determining a building location based on a building image
CN110222661B (en) Feature extraction method for moving target identification and tracking
KR20170066014A (en) A feature matching method which is robust to the viewpoint change
CN113095385A (en) Multimode image matching method based on global and local feature description
CN117611525A (en) Visual detection method and system for abrasion of pantograph slide plate
CN111160362B (en) FAST feature homogenizing extraction and interframe feature mismatching removal method
CN105243661A (en) Corner detection method based on SUSAN operator
CN112365516A (en) Virtual and real occlusion processing method in augmented reality
Smiatacz Normalization of face illumination using basic knowledge and information extracted from a single image
CN111640071A (en) Method for obtaining panoramic foreground target based on convolutional neural network frame difference repairing method
CN112381747A (en) Terahertz and visible light image registration method and device based on contour feature points
CN110008964A (en) The corner feature of heterologous image extracts and description method
CN112446894B (en) Image segmentation method based on direction space
CN114842354B (en) Quick and accurate detection method for edge line of high-resolution remote sensing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180413)