CN117474965A - Image feature point extraction and main direction calculation method - Google Patents


Info

Publication number
CN117474965A
Authority
CN
China
Prior art keywords
image
points
point
contour
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311475996.7A
Other languages
Chinese (zh)
Inventor
周柔刚
李�杰
袁贤琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Huicui Intelligent Technology Co ltd
Hangzhou Dianzi University
Original Assignee
Hangzhou Huicui Intelligent Technology Co ltd
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Huicui Intelligent Technology Co ltd, Hangzhou Dianzi University filed Critical Hangzhou Huicui Intelligent Technology Co ltd


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image feature point extraction and main direction calculation method, which comprises the following steps: S10, graying the image; S20, applying Gaussian filtering to the grayed image to smooth edges and remove noise; S30, applying Sobel filtering to the Gaussian-filtered image to obtain the gradient information of the image; S40, obtaining an image edge binary image by applying adaptive Canny edge extraction; S50, extracting continuous contours in the edge binary image using the cv2.findContours function in OpenCV and ignoring contours with lengths smaller than a preset value; S60, calculating the curvature and angle of the points on each contour; and S70, extracting points whose curvature and angle meet the conditions as extreme points, and obtaining the main direction. Since the infrared image and the visible light image share the same edges, the invention extracts the extreme points of the edges and calculates their angles, thereby ensuring corresponding feature points between the two images.

Description

Image feature point extraction and main direction calculation method
Technical Field
The invention belongs to the field of image registration, and particularly relates to an image feature point extraction and main direction calculation method.
Background
Registration of infrared images with visible light images is a technique for aligning and matching the two kinds of images, commonly used in military, medical, meteorological, and other fields. The origin of this technology can be traced back to the mid-20th century.
In the prior art, one technical scheme is a method based on region segmentation and the nonsubsampled contourlet transform (NSCT), which uses region segmentation to identify important regions and background information in an image, performs multi-resolution and directional image processing by means of the NSCT, and then applies different fusion rules to optimize the quality of the fused image. This approach successfully preserves infrared target characteristics and clearly presents the visible-light background.
However, it suffers from the following drawbacks:
The accuracy of region segmentation is critical to successful image registration; erroneous target detection or region segmentation results in information loss or distortion. The success of NSCT-based fusion depends on selecting appropriate parameters and fusion rules, and incorrect parameter selection may lead to poor-quality registration results.
Disclosure of Invention
In view of this, the invention provides a method for extracting image feature points and calculating a main direction, comprising the following steps:
s10, graying the image;
s20, applying Gaussian filtering to the grayed image to smooth edges and remove noise;
s30, applying Sobel filtering to the Gaussian-filtered image to obtain the gradient information of the image;
s40, obtaining an image edge binary image by applying adaptive Canny edge extraction;
s50, extracting continuous contours in the edge binary image using the cv2.findContours function in OpenCV and ignoring contours with lengths smaller than a preset value;
s60, calculating the curvature and the angle of points on each contour;
and S70, extracting points with curvature and angle meeting conditions as extreme points, and obtaining a main direction.
Preferably, the step S60 specifically includes the following steps:
s61, setting the contour curve, the angle threshold t_angle, the neighborhood radius maxlength, the extreme point position index array M and the extreme point main direction array A;
s62, regarding the contour as a discrete two-dimensional parametric curve (x(t), y(t)), calculating the first derivative arrays x', y' and the second derivative arrays x'', y'' by the simplified differences of formula (2):

x'_i = x_{i+1} - x_{i-1}, \quad y'_i = y_{i+1} - y_{i-1}, \quad x''_i = x_{i+1} - 2x_i + x_{i-1}, \quad y''_i = y_{i+1} - 2y_i + y_{i-1} \quad (2)

wherein i represents the index of the current point in the contour, y is treated the same as x for the first and second derivatives with respect to t, and the two end points of the contour curve are ignored, ensuring that x_{i+1} and x_{i-1} are meaningful;
s63, calculating the curvature array K according to formula (1):

K_i = \frac{|x'_i y''_i - y'_i x''_i|}{(x'^2_i + y'^2_i)^{3/2}} \quad (1)

s64, since the points of the contour are all continuous and the coordinate differences between two adjacent points satisfy |\Delta x| = |x_{i+1} - x_i| \in \{0, 1\} and |\Delta y| = |y_{i+1} - y_i| \in \{0, 1\}, with \Delta x and \Delta y never both 0, the curvature K_i of any point i on the contour calculated according to formulas (1) and (2) satisfies K_i \in [0, 1]; taking the points with curvature larger than a certain threshold as candidate extreme points, i.e. storing the indices i meeting the curvature condition in the array N;
s65, for each index i in N, cyclically performing the following steps:
calculating the left and right side contour lengths L_- and L_+ within the neighborhood, each length being the minimum of the neighborhood radius and the remaining curve length on that side;
using formula (5) to obtain the left and right contour weight coefficient arrays W_- and W_+;
using formula (3) to obtain the weighted average point coordinates A_- and A_+ of the left and right contours;
using formulas (9), (10) and (11) to obtain the included angle \angle A_0 of the two side contours;
comparing \angle A_0 with t_angle; when \angle A_0 < t_angle, finding the main direction \theta using formulas (12) and (13), adding the current index i to M, and adding the angle \theta to A,
wherein the formulas include:
the simple weighted average, which multiplies each data point by its corresponding weight, adds all the products, and divides by the sum of the weights:

\bar{P} = \frac{\sum_i W_i P_i}{\sum_i W_i} \quad (3)

the Gaussian function used by the Gaussian weighted average method:

f(x) = e^{-\frac{x^2}{2\sigma^2}} \quad (4)

and the weights of the contour points in the local neighborhood of the candidate extreme point, obtained according to formula (4):

W_i = e^{-\frac{i^2}{2\sigma^2}}, \quad i = 0, 1, \ldots, L-1 \quad (5)

wherein e is the base of natural logarithms, L is the number of left or right side contour points in the local neighborhood of the candidate extreme point (the candidate extreme point included), and i is the index distance from the contour point to the candidate extreme point, taking the values 0, 1, ..., L-1; the coordinates of the weighted average points are then obtained according to formula (3);
let the point A_0 be the candidate extreme point, A_- the weighted average point on the left side and A_+ the weighted average point on the right side; the angle \angle A_0 is the included angle between the two side contours A_0A_- and A_0A_+, with the calculation formulas:

\vec{v}_- = A_- - A_0, \quad \vec{v}_+ = A_+ - A_0 \quad (9)

\cos \angle A_0 = \frac{\vec{v}_- \cdot \vec{v}_+}{|\vec{v}_-|\,|\vec{v}_+|} \quad (10)

\angle A_0 = \arccos\left(\frac{\vec{v}_- \cdot \vec{v}_+}{|\vec{v}_-|\,|\vec{v}_+|}\right) \quad (11)

when a true extreme point is extracted, the opening direction of the contour, that is, the principal direction of the extreme point, is obtained from the two side contour vectors of formula (9); taking the unit vectors \hat{e}_- and \hat{e}_+ in the directions of \vec{v}_- and \vec{v}_+, the opening direction, i.e. the main direction, is:

\vec{d} = \hat{e}_- + \hat{e}_+ \quad (12)

and its angle is:

\theta = \operatorname{atan2}(d_y, d_x) \quad (13)
preferably, the convolution kernel of the gaussian filter in S20 adopts a size of 7×7.
Preferably, 30% and 5% of the maximum gradient are used as the high and low thresholds of the Canny operator in S40, respectively.
Preferably, the preset value of the ignored contour length in S50 is 1/100 of the image perimeter.
Compared with the prior art, the image feature point extraction and main direction calculation method disclosed by the invention has at least the following beneficial effects:
The contour extreme points of the two images are used as the feature points, so the same feature points can be extracted from both images; this avoids the situation where the feature points detected in the infrared image and the visible light image by traditional feature point detection algorithms (such as Harris, ORB, SIFT and SURF) are rarely the same. The main directions of the feature points are defined by the contour directions (traditional methods such as SIFT use image gradients, which are strongly affected by the differing infrared and visible spectra), so that the main directions of the same feature point on the infrared image and the visible light image are as close as possible, reducing the influence of the different spectra.
Drawings
In order to make the objects, technical solutions and advantageous effects of the present invention more clear, the present invention provides the following drawings for description:
FIG. 1 is a flow chart showing steps of a method for extracting feature points and calculating a main direction according to an embodiment of the present invention;
FIG. 2 is a graph of candidate extremum points of an infrared edge map and an infrared map in a method for extracting feature points and calculating a main direction of an image according to an embodiment of the present invention;
FIG. 3 is a diagram of candidate extremum points of a visible light edge map and a visible light map in a method for extracting feature points and calculating a main direction of an image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of true and false extreme points in the method for extracting feature points and calculating the principal direction according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of candidate extremum point contour direction in the method for extracting feature points and calculating main direction in an embodiment of the present invention;
FIG. 6 is a graph of extreme points and a principal direction of an infrared contour and an infrared image according to an embodiment of the present invention;
fig. 7 is a diagram of extreme points and a main direction of a visible light contour and a visible light image according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, the image feature point extraction and main direction calculation method of the present invention includes the following steps:
s10, graying the image;
s20, applying Gaussian filtering to the grayed image to smooth edges and remove noise;
s30, applying Sobel filtering to the Gaussian-filtered image to obtain the gradient information of the image;
s40, obtaining an image edge binary image by applying adaptive Canny edge extraction;
s50, extracting continuous contours in the edge binary image using the cv2.findContours function in OpenCV and ignoring contours with lengths smaller than a preset value;
s60, calculating the curvature and the angle of points on each contour;
and S70, extracting points with curvature and angle meeting conditions as extreme points, and obtaining a main direction.
S60 specifically comprises the following steps:
s61, setting the contour curve, the angle threshold t_angle, the neighborhood radius maxlength, the extreme point position index array M and the extreme point main direction array A;
s62, regarding the contour as a discrete two-dimensional parametric curve (x(t), y(t)), calculating the first derivative arrays x', y' and the second derivative arrays x'', y'' by the simplified differences of formula (2):

x'_i = x_{i+1} - x_{i-1}, \quad y'_i = y_{i+1} - y_{i-1}, \quad x''_i = x_{i+1} - 2x_i + x_{i-1}, \quad y''_i = y_{i+1} - 2y_i + y_{i-1} \quad (2)

wherein i represents the index of the current point in the contour, y is treated the same as x for the first and second derivatives with respect to t, and the two end points of the contour curve are ignored, ensuring that x_{i+1} and x_{i-1} are meaningful;
s63, calculating the curvature array K according to formula (1):

K_i = \frac{|x'_i y''_i - y'_i x''_i|}{(x'^2_i + y'^2_i)^{3/2}} \quad (1)

s64, since the points of the contour are all continuous and the coordinate differences between two adjacent points satisfy |\Delta x| = |x_{i+1} - x_i| \in \{0, 1\} and |\Delta y| = |y_{i+1} - y_i| \in \{0, 1\}, with \Delta x and \Delta y never both 0, the curvature K_i of any point i on the contour calculated according to formulas (1) and (2) satisfies K_i \in [0, 1]; taking the points with curvature larger than a certain threshold as candidate extreme points, i.e. storing the indices i meeting the curvature condition in the array N;
s65, for each index i in N, cyclically performing the following steps:
calculating the left and right side contour lengths L_- and L_+ within the neighborhood, each length being the minimum of the neighborhood radius and the remaining curve length on that side;
using formula (5) to obtain the left and right contour weight coefficient arrays W_- and W_+;
using formula (3) to obtain the weighted average point coordinates A_- and A_+ of the left and right contours;
using formulas (9), (10) and (11) to obtain the included angle \angle A_0 of the two side contours;
comparing \angle A_0 with t_angle; when \angle A_0 < t_angle, finding the main direction \theta using formulas (12) and (13), adding the current index i to M, and adding the angle \theta to A.
The method for calculating the image feature point coordinates and main direction is as follows:
Inputs: the gray-scale image img, the Canny relative high and low thresholds HighThreshold and LowThreshold, the angle threshold t_angle, the maximum extreme point neighborhood radius maxlength, and the extreme point coordinate threshold S. Output: the image feature point coordinate and main direction array KeyPoints.
Image preprocessing: performing Gaussian blur processing on the image img to reduce noise;
Canny edge detection: calculating the gradient intensity of the image and executing Canny edge detection based on the relative low threshold LowThreshold and the relative high threshold HighThreshold to obtain the edge binary image BW;
Searching for contours: using the result of Canny edge detection to search for object contours in the image, filtering out contours that are too small and keeping only sufficiently large ones;
traversing the profile: the following steps are performed for each screened profile:
executing S60 to obtain the extreme point index array M of the current contour and its main direction array A;
and obtaining the extreme point coordinates according to the index array M, checking each coordinate, and storing in the KeyPoints array only those extreme point coordinates and main directions whose distance from the image border is greater than S.
The contour in the image is a two-dimensional curve which in general does not satisfy y = f(x), so it is regarded as a general parametric two-dimensional curve (x(t), y(t)). The curvature at any point is:

K = \frac{|x' y'' - y' x''|}{(x'^2 + y'^2)^{3/2}} \quad (1)

Since the image contour is a series of discrete points and the curvature magnitude does not need to be precisely calculated in the curvature calculation of the present invention, the first derivative and the second derivative in formula (1) are simplified as follows:

x'_i = x_{i+1} - x_{i-1}, \quad y'_i = y_{i+1} - y_{i-1}, \quad x''_i = x_{i+1} - 2x_i + x_{i-1}, \quad y''_i = y_{i+1} - 2y_i + y_{i-1} \quad (2)

where i denotes the index of the current point in the contour, y is treated the same as x for the first and second derivatives with respect to t, and the two endpoints of the contour are ignored, ensuring that x_{i+1} and x_{i-1} are meaningful.
The contours in the present invention are extracted using the cv2.findContours function in OpenCV; mode=cv2.RETR_LIST indicates that all contours are detected, and method=cv2.CHAIN_APPROX_NONE indicates that all contour points are stored. With these settings the extracted contours are guaranteed to be continuous, which ensures that x'_i and y'_i are never both 0 at the same time, so the denominator of K is never 0 and no NaN occurs.
Since the points of the contour are all continuous and the coordinate differences between two adjacent points satisfy |\Delta x| = |x_{i+1} - x_i| \in \{0, 1\} and |\Delta y| = |y_{i+1} - y_i| \in \{0, 1\}, with \Delta x and \Delta y never both 0, the curvature K_i of any point on the contour (excluding the endpoints) calculated according to formulas (1) and (2) satisfies K_i \in [0, 1]. Points with curvature greater than a certain threshold (set to 0.7 in this embodiment) are taken as candidate extremum points; referring to fig. 2 and 3, the left-hand images are the edge maps with candidate extremum points and the right-hand images are the original images with candidate extremum points.
As can be seen from fig. 2 and 3, many of the candidate extremum points extracted by curvature alone are false extremum points caused by edge noise or image blurring, so further screening of the candidate extremum points is required. The invention screens by the included angle of the edges on the two sides of the candidate extreme point: within the local neighborhood, this included angle should be small for a true extreme point and large for a false one. Referring to fig. 4, the true extreme point A_0 and the false extreme point B_0 have the same shape within 1 pixel around them, so the curvatures of the two points calculated according to formula (1) and formula (2) are identical.
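The simplified discrete curvature of formulas (1) and (2) can be sketched as follows; the unscaled differences x'_i = x_{i+1} - x_{i-1} follow the text's simplification, which is what keeps K_i within [0, 1] on 8-connected contours:

```python
import numpy as np

def contour_curvature(points):
    """Curvature K_i at the interior points of a contour, per formulas (1)-(2):
    x'_i = x_{i+1} - x_{i-1}, x''_i = x_{i+1} - 2*x_i + x_{i-1} (unscaled),
    K_i = |x' y'' - y' x''| / (x'^2 + y'^2)^1.5; endpoints are skipped."""
    p = np.asarray(points, dtype=float)
    x, y = p[:, 0], p[:, 1]
    dx, dy = x[2:] - x[:-2], y[2:] - y[:-2]          # first differences
    ddx = x[2:] - 2.0 * x[1:-1] + x[:-2]             # second differences
    ddy = y[2:] - 2.0 * y[1:-1] + y[:-2]
    denom = (dx ** 2 + dy ** 2) ** 1.5
    k = np.zeros(len(p) - 2)
    nz = denom > 0                                   # continuity guarantees this in practice
    k[nz] = np.abs(dx[nz] * ddy[nz] - dy[nz] * ddx[nz]) / denom[nz]
    return k
```

For example, a right-angle corner on an 8-connected contour evaluates to 2 / 2^{3/2} ≈ 0.707, just above the 0.7 threshold of this embodiment, while a straight run evaluates to 0.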
the main idea of calculating the included angles of the contours is that the contours of the neighborhood near the candidate extreme points are selected, the contours are divided into two parts (left and right are assumed) by taking the candidate extreme points as central points, then the left and right contours are fitted by using two straight lines passing through the candidate extreme points, the straight line can be fitted by using a least square method or a weighted average method to calculate the average points of the contours, and the straight line is determined by the two points. Common weighted averaging methods are: simple weighted average, exponential weighted average, gaussian weighted average, etc.
The simple weighted average multiplies each data point by its corresponding weight, adds all the products, and divides by the sum of the weights:

\bar{P} = \frac{\sum_i W_i P_i}{\sum_i W_i} \quad (3)
exponentially weighted average: the weights of the different points are controlled using an exponential function form of weight distribution, typically using an exponential smoothing coefficient.
Gaussian weighted average (Gaussian Weighted Average): with gaussian (normal) distribution of weights, more recent data points still have some weight, but the weight drops more smoothly.
The simple weighted average requires weights to be assigned manually and is not easy to control. The exponentially weighted average and the Gaussian weighted average have similar effects and can assign weights automatically, but the exponentially weighted average is more sensitive to the most recent trend, while the Gaussian weighted average distributes weights more smoothly and is more robust to outliers. Therefore, if the average point of the contour is computed by a weighted average method and the vector (line) is then determined by the average point and the extreme point, the present invention suggests using the Gaussian weighted method. The Gaussian function is:

f(x) = e^{-\frac{x^2}{2\sigma^2}} \quad (4)
the weights of the contour points in the local neighborhood of the candidate extreme point are obtained according to the formula (4):
wherein e is a base number of natural logarithms, L is a left (right) contour point number (including a candidate extreme point) in a local neighborhood of the candidate extreme point, i is an index distance from the candidate extreme point of the left (right) contour point, and the magnitudes are 0,1, … … and L-1.
The coordinates of the weighted average points can then be found according to equation (3).
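A minimal sketch of the Gaussian weighting and the weighted average point of formulas (3) to (5); the choice sigma = L/2 is an assumption here, since the text leaves the Gaussian width open:

```python
import math

def gaussian_weights(L, sigma=None):
    """Weights W_i = exp(-i^2 / (2*sigma^2)) for index distances i = 0..L-1
    (formula (5)); sigma defaults to L/2, an assumed value."""
    sigma = sigma if sigma is not None else max(L / 2.0, 1e-9)
    return [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(L)]

def weighted_mean_point(points, weights):
    """Weighted average of 2-D points: sum(W_i * P_i) / sum(W_i)  (formula (3))."""
    sw = sum(weights)
    return (sum(w * x for w, (x, _) in zip(weights, points)) / sw,
            sum(w * y for w, (_, y) in zip(weights, points)) / sw)
```

The candidate extreme point (index distance 0) always receives weight 1, and points farther along the side contour are down-weighted smoothly.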
If a straight line fit is used, the ideal straight line (taking A_0 = (x_0, y_0) as an example) is:

y = y_0 + a(x - x_0) \quad (6)

The error can be regarded as the sum of the absolute differences, or the sum of the squared differences, between the y values of the real and predicted points, i.e.:

E_1 = \sum_i |y_i - y_0 - a(x_i - x_0)| \quad (7)

E_2 = \sum_i (y_i - y_0 - a(x_i - x_0))^2 \quad (8)

The slope a minimizing the squared error can be obtained by the least squares method or by gradient descent, or either error can be optimized using the minimize function in scipy.optimize; in each case a slope a satisfying the condition is obtained. Once the fitted line is obtained, its unit direction vector is taken, oriented from the point A_0 toward the contour.
The included angle \angle A_0 of the two side contours can thus be obtained either by straight-line fitting or by the Gaussian weighted average. Referring to fig. 5, the weighted average points A_- (B_-) and A_+ (B_+) of the two side contours of the candidate extreme point are obtained by the Gaussian weighted average, so the included angle of the two side contours can be obtained according to formulas (9), (10) and (11); a threshold is then set for the angle, and only when \angle A_0 is smaller than the threshold is the point considered a true extreme point. The threshold set by the present invention is 135°.
Taking A_0 as an example, the calculation formulas of the angle \angle A_0 are:

\vec{v}_- = A_- - A_0, \quad \vec{v}_+ = A_+ - A_0 \quad (9)

\cos \angle A_0 = \frac{\vec{v}_- \cdot \vec{v}_+}{|\vec{v}_-|\,|\vec{v}_+|} \quad (10)

\angle A_0 = \arccos\left(\frac{\vec{v}_- \cdot \vec{v}_+}{|\vec{v}_-|\,|\vec{v}_+|}\right) \quad (11)
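The included-angle test of formulas (9) to (11) can be sketched as follows:

```python
import math

def included_angle_deg(a0, a_minus, a_plus):
    """Included angle at A0 between vectors A0->A- and A0->A+ (formulas (9)-(11)):
    theta = arccos(v-.v+ / (|v-| |v+|)), returned in degrees."""
    vmx, vmy = a_minus[0] - a0[0], a_minus[1] - a0[1]
    vpx, vpy = a_plus[0] - a0[0], a_plus[1] - a0[1]
    c = (vmx * vpx + vmy * vpy) / (math.hypot(vmx, vmy) * math.hypot(vpx, vpy))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))  # clamp for float safety
```

A candidate with an angle below the 135° threshold would be kept as a true extreme point; a nearly straight contour yields an angle near 180° and is rejected.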
when the true extreme point is extracted, the main direction of the extreme point, which is the opening direction of the contour, can be obtained from the two-side contour vector obtained by the formula (9), and the main direction of the extreme point can be obtainedAnd->Unit vector in the same direction->And->The opening direction, i.e. the main direction, is +.>The formula is as follows:
the direction is:
comparing fig. 6 and fig. 7, it can be found that the false extremum point can be greatly reduced after the candidate extremum point neighborhood inner outline included angle is judged. So far, the calculation of the extreme points and the main directions of the image contour is completed.
In addition to the embodiments described above, other embodiments of the invention are possible. All technical schemes formed by equivalent substitution or equivalent transformation are within the protection scope of the invention.
The present invention has been described in detail above, but the specific implementation form of the present invention is not limited thereto. Various modifications or adaptations may occur to one skilled in the art without departing from the spirit and scope of the claims herein.

Claims (5)

1. The image feature point extraction and main direction calculation method is characterized by comprising the following steps of:
s10, graying the image;
s20, applying Gaussian filtering to the grayed image to smooth edges and remove noise;
s30, applying Sobel filtering to the Gaussian-filtered image to obtain the gradient information of the image;
s40, obtaining an image edge binary image by applying adaptive Canny edge extraction;
s50, extracting continuous contours in the edge binary image using the cv2.findContours function in OpenCV and ignoring contours with lengths smaller than a preset value;
s60, calculating the curvature and the angle of points on each contour;
and S70, extracting points with curvature and angle meeting conditions as extreme points, and obtaining a main direction.
2. The method for extracting image feature points and calculating the main direction according to claim 1, wherein the step S60 specifically comprises the steps of:
s61, setting the contour curve, the angle threshold t_angle, the neighborhood radius maxlength, the extreme point position index array M and the extreme point main direction array A;
s62, regarding the contour as a discrete two-dimensional parametric curve (x(t), y(t)), calculating the first derivative arrays x', y' and the second derivative arrays x'', y'' by the simplified differences of formula (2):

x'_i = x_{i+1} - x_{i-1}, \quad y'_i = y_{i+1} - y_{i-1}, \quad x''_i = x_{i+1} - 2x_i + x_{i-1}, \quad y''_i = y_{i+1} - 2y_i + y_{i-1} \quad (2)

wherein i represents the index of the current point in the contour, y is treated the same as x for the first and second derivatives with respect to t, and the two end points of the contour curve are ignored, ensuring that x_{i+1} and x_{i-1} are meaningful;
s63, calculating the curvature array K according to formula (1):

K_i = \frac{|x'_i y''_i - y'_i x''_i|}{(x'^2_i + y'^2_i)^{3/2}} \quad (1)

s64, since the points of the contour are all continuous and the coordinate differences between two adjacent points satisfy |\Delta x| = |x_{i+1} - x_i| \in \{0, 1\} and |\Delta y| = |y_{i+1} - y_i| \in \{0, 1\}, with \Delta x and \Delta y never both 0, the curvature K_i of any point i on the contour calculated according to formulas (1) and (2) satisfies K_i \in [0, 1]; taking the points with curvature larger than a certain threshold as candidate extreme points, i.e. storing the indices i meeting the curvature condition in the array N;
s65, for each index i in N, cyclically performing the following steps:
calculating the left and right side contour lengths L_- and L_+ within the neighborhood, each length being the minimum of the neighborhood radius and the remaining curve length on that side;
using formula (5) to obtain the left and right contour weight coefficient arrays W_- and W_+;
using formula (3) to obtain the weighted average point coordinates A_- and A_+ of the left and right contours;
using formulas (9), (10) and (11) to obtain the included angle \angle A_0 of the two side contours;
comparing \angle A_0 with t_angle; when \angle A_0 < t_angle, finding the main direction \theta using formulas (12) and (13), adding the current index i to M, and adding the angle \theta to A,
wherein the formulas include:
the simple weighted average, which multiplies each data point by its corresponding weight, adds all the products, and divides by the sum of the weights:

\bar{P} = \frac{\sum_i W_i P_i}{\sum_i W_i} \quad (3)

the Gaussian function used by the Gaussian weighted average method:

f(x) = e^{-\frac{x^2}{2\sigma^2}} \quad (4)

and the weights of the contour points in the local neighborhood of the candidate extreme point, obtained according to formula (4):

W_i = e^{-\frac{i^2}{2\sigma^2}}, \quad i = 0, 1, \ldots, L-1 \quad (5)

wherein e is the base of natural logarithms, L is the number of left or right side contour points in the local neighborhood of the candidate extreme point (the candidate extreme point included), and i is the index distance from the contour point to the candidate extreme point, taking the values 0, 1, ..., L-1; the coordinates of the weighted average points are then obtained according to formula (3);
let the point A_0 be the candidate extreme point, A_- the weighted average point on the left side and A_+ the weighted average point on the right side; the angle \angle A_0 is the included angle between the two side contours A_0A_- and A_0A_+, with the calculation formulas:

\vec{v}_- = A_- - A_0, \quad \vec{v}_+ = A_+ - A_0 \quad (9)

\cos \angle A_0 = \frac{\vec{v}_- \cdot \vec{v}_+}{|\vec{v}_-|\,|\vec{v}_+|} \quad (10)

\angle A_0 = \arccos\left(\frac{\vec{v}_- \cdot \vec{v}_+}{|\vec{v}_-|\,|\vec{v}_+|}\right) \quad (11)

when a true extreme point is extracted, the opening direction of the contour, that is, the principal direction of the extreme point, is obtained from the two side contour vectors of formula (9); taking the unit vectors \hat{e}_- and \hat{e}_+ in the directions of \vec{v}_- and \vec{v}_+, the opening direction, i.e. the main direction, is:

\vec{d} = \hat{e}_- + \hat{e}_+ \quad (12)

and its angle is:

\theta = \operatorname{atan2}(d_y, d_x) \quad (13)
3. The method according to claim 1, wherein the convolution kernel of the Gaussian filter in S20 has a size of 7×7.
4. The method according to claim 1, wherein 30% and 5% of the maximum gradient are used as the high and low thresholds of the Canny operator in S40, respectively.
5. The method for extracting image feature points and calculating the main direction according to claim 1, wherein the preset value of the ignored contour length in S50 is 1/100 of the image perimeter.
CN202311475996.7A 2023-11-08 2023-11-08 Image feature point extraction and main direction calculation method Pending CN117474965A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311475996.7A CN117474965A (en) 2023-11-08 2023-11-08 Image feature point extraction and main direction calculation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311475996.7A CN117474965A (en) 2023-11-08 2023-11-08 Image feature point extraction and main direction calculation method

Publications (1)

Publication Number Publication Date
CN117474965A true CN117474965A (en) 2024-01-30

Family

ID=89639349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311475996.7A Pending CN117474965A (en) 2023-11-08 2023-11-08 Image feature point extraction and main direction calculation method

Country Status (1)

Country Link
CN (1) CN117474965A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination