CN106960451B - Method for increasing number of feature points of image weak texture area - Google Patents


Info

Publication number
CN106960451B
CN106960451B (application CN201710145106.4A)
Authority
CN
China
Prior art keywords
image
points
point
feature
feature points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710145106.4A
Other languages
Chinese (zh)
Other versions
CN106960451A (en)
Inventor
余何
袁承宗
宋锐
李云松
王养利
王智卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Electronic Science and Technology
Shanghai Aerospace Measurement Control Communication Institute
Original Assignee
Xian University of Electronic Science and Technology
Shanghai Aerospace Measurement Control Communication Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Electronic Science and Technology and Shanghai Aerospace Measurement Control Communication Institute
Priority to CN201710145106.4A priority Critical patent/CN106960451B/en
Publication of CN106960451A publication Critical patent/CN106960451A/en
Application granted granted Critical
Publication of CN106960451B publication Critical patent/CN106960451B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of computer vision and discloses a method for increasing the number of feature points in a weak texture area of an image: details of an image are extracted to obtain a detail texture map; a Gaussian difference pyramid is constructed for the original image while corner points are detected; the detail extraction and the Gaussian difference pyramid are computed in parallel, corner points are also extracted from the detail image, and the results are finally merged into one feature point set; for each extracted feature point, a binary descriptor with rotation invariance is generated on the corresponding image; the descriptors of the two images are then matched and filtered to obtain a correct matching point set. The invention uses binary operators, so only simple summation and comparison operations are required to generate the description vectors. It combines corner detection and spatial extreme point detection computed in parallel, which greatly improves the stability of the feature points; the calculation steps have low coupling and high parallelism, making the method very suitable for hardware implementation.

Description

Method for increasing number of feature points of image weak texture area
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a method for increasing the number of feature points in a weak texture area of an image.
Background
Image matching is an extremely important technology in the field of computer vision, with applications in pattern recognition, automatic navigation, three-dimensional reconstruction, image stitching and other fields. Current image matching mainly relies on image local features, and a good local feature must have a small calculation amount, good robustness and insensitivity to image changes. The performance of local image features has become a technical bottleneck for these applications, and the quality of the features directly influences the application effect. With the rapid rise of the high-definition video processing industry and the gradual spread of 4K and even 8K technologies, the processing-speed requirements on local feature algorithms keep increasing. Among current image matching algorithms, the Scale-Invariant Feature Transform (SIFT) has the best stability and is the most widely used. A SIFT feature is expressed by a 128-dimensional vector, contains a large amount of information and is highly discriminative. The SIFT algorithm is very stable under changes of image brightness, noise, rotation and scaling, and its performance is excellent. Its biggest disadvantage is its complexity: it cannot be applied to real-time processing applications or embedded platforms. To improve efficiency, the SURF (Speeded Up Robust Features) detection algorithm replaces the Gaussian difference image of SIFT with an approximation of the Hessian matrix determinant, and uses integral images to simplify the computation.
The speed of the SURF algorithm is several times that of SIFT, but its overall framework is largely unchanged and it still cannot meet real-time processing requirements. To improve computational efficiency further, a class of binary operators, including ORB, BRISK, BRIEF and FREAK, offers at least a two-order-of-magnitude speed improvement. However, these binary operators rely only on corner detection to extract feature points: their stability is poor, the number of correct matching points extracted is small, and downstream application steps that depend on the matching result may receive too few inputs to compute. In summary, the SIFT algorithm performs well but is too complex and too slow for real-time systems and embedded platforms; the SURF algorithm is faster than SIFT but not qualitatively so; the binary algorithms have low complexity and high efficiency but poor performance, and cannot effectively extract features in the weak texture regions of an image.
In summary, the problems of the prior art are as follows: the SIFT algorithm is high in complexity, needs a large number of floating point calculation methods and cache spaces, and cannot meet the real-time requirement; the binary algorithm has low complexity, high algorithm efficiency and poor performance, and features cannot be effectively extracted in the weak texture region of the image.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method for increasing the number of characteristic points in a weak texture area of an image.
The invention provides a method for increasing the number of feature points in an image weak texture area, which comprises the following steps: extracting details of an image to obtain a detail texture map of the image; constructing a Gaussian difference pyramid for the original image while detecting corner points; computing the detail extraction and the Gaussian difference pyramid in parallel, simultaneously extracting corners of the detail image, and finally combining the results into a feature point set; for the extracted feature points, generating a binary descriptor with rotation invariance on the corresponding image; and then matching and filtering the descriptors of the two images to obtain a correct matching point set.
Further, the method for increasing the number of the feature points in the image weak texture area specifically comprises the following steps:
step one, carrying out weak texture extraction on the image to generate a weak texture image;
step two, extracting corners and spatial extreme points of the original input image and corners of the detail image, with the three modules computed in parallel;
step three, selecting a 31 × 31 neighborhood centered on the feature point, calculating the centroid of the image within the neighborhood, and taking the direction from the center point to the centroid as the main direction of the feature point;
step four, rotating the neighborhood of the feature point along its main direction and mapping it into a new coordinate system, using the image where the feature point is located during calculation;
step five, acquiring the positions of the sampling point pairs on the image where the feature point is located from a lookup table, summing and comparing the pixel intensities over the 3 × 3 neighborhoods of the sampling points, and generating a 256-dimensional binary vector as the feature point description vector.
Further, in the step one:
texture map computation using a 3 x 3 second order gradient gaussian templateLike, with A representing the original image, GxAnd GyRespectively representing the gray values of the images detected by the transverse and longitudinal edges, and the formula is as follows:
calculating a gradient value G:
further, in the second step:
the angular point extraction method comprises the following steps:
for an arbitrary point p of image NL with pixel value I_p, a threshold t is set and 16 points around p forming a circle with a radius of 4 are selected; if among the 16 points there are 12 consecutive points whose pixel values are all greater than I_p + t or all less than I_p - t, the point p is selected as a corner point;
for a central pixel, 16 points on the neighborhood circle are selected, a threshold t is set, and the pixel points on the circle are divided into three states (brighter, darker, similar) according to pixel intensity;
if there are n consecutive points whose pixel intensities are all greater than that of the center point, or all less, the center point is a corner point; typically, n is 9 or 12;
the method for calculating the spatial extreme point comprises the following steps:
extracting a scale space of an image required to be constructed by extracting a space extreme point, wherein the construction process of the scale space comprises the following steps:
defining a scale function for the image:
L(x,y,σ)=G(x,y,σ)*I(x,y);
where I(x, y) is the pixel intensity of image NL, * denotes the convolution operation, and G(x, y, σ) is the Gaussian function G(x, y, σ) = (1/(2πσ^2)) · exp(-(x^2 + y^2)/(2σ^2)).
the constant k is introduced to calculate two adjacent scales:
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)
=L(x,y,kσ)-L(x,y,σ);
further, extracting a spatial local maximum value and a spatial local minimum value point, wherein each pixel point needs to be compared with the other 8 points in the 3 × 3 field of the current layer and the 9 × 2 points of the upper and lower adjacent scales; only when the current point is the maximum or minimum of all 27 points, it is selected as the feature point.
Further, in the third step: a 31 × 31 neighborhood centered on the feature point is selected, and the (p + q)-order image moment m_pq is computed as m_pq = Σ_x Σ_y x^p y^q I(x, y).
the direction angle θ of the feature point is calculated as follows:
θ=atan2(m01,m10);
a principal direction is assigned to each feature point.
Further, in the fourth step: the 31 × 31 neighborhood of the feature point coordinates is rotated along the main direction and mapped into a new coordinate system; the rotated coordinate position (x', y') relates to the original coordinates (x, y) as x' = x cos θ - y sin θ, y' = x sin θ + y cos θ;
where θ is the principal direction of the feature point.
The invention also aims to provide an image matching system using the method for increasing the number of the characteristic points of the image weak texture region.
The invention also aims to provide a computer vision system utilizing the method for increasing the number of the feature points of the image weak texture area.
The invention has the following advantages and positive effects: feature points are extracted in parallel by the two methods of corner detection and spatial extreme point detection; in addition, feature extraction in the weak texture regions of the image is optimized by enhancing the local information of the image. The method has high parallelism, is convenient for hardware implementation, and is more than 300 times faster than the SIFT algorithm on a PC. The feature points are extracted by parallel, mutually complementary computations, which facilitates hardware implementation; experiments show that the final matching points extracted from the original image and the detail image do not overlap in position, so matching performance is effectively improved and the number of correct matching points increases by 20%-80%. Meanwhile, detail extraction enhances the local features of the image, and more than 50% of the matching points of the detail map lie in the weak texture areas of the image.
Compared with the prior art, the invention has the following advantages:
compared with SIFT and SURF algorithms, the algorithm has high complexity, can only be realized on a PC (personal computer) through software, has low arithmetic speed and cannot be applied to a system needing real-time processing. The present invention uses binary operators and only requires simple summation and comparison operations to generate the description vectors. When an image with the resolution of 640 multiplied by 480 is processed on an i72.8GHz processor PC platform, the average time consumption of the SIFT algorithm is 5228ms, but the method only needs 16ms, and the speed is improved by two orders of magnitude.
Compared with existing binary operators, which have poor stability, yield too few correct matching points after matching and filtering, and are sensitive to noise, rotation and scaling, the proposed algorithm is stable; because several methods are computed in parallel, more feature points are obtained, and they have brightness invariance, scale invariance and rotation invariance. Meanwhile, extracting the detail map with the sobel operator increases the number of correct matching points by 20%-80%. Because high-frequency information is severely lost during image transmission and compression, extracting image details amplifies the local features of the image and facilitates feature extraction. In particular, the invention can effectively extract feature points in weak texture regions where existing binary algorithms cannot.
Aiming at the problem that common local feature extraction algorithms cannot effectively extract feature points in the weak texture areas of an image, the method extracts and enhances the texture information of the image with an image detail extraction algorithm, and then generates feature vectors with an improved binary feature extraction algorithm adapted to the distribution characteristics of weak texture images. Compared with the class of binary operators based purely on corner detection, it achieves considerable performance in weak texture regions where those operators extract no feature points, increases the overall number of feature points by 20%-80%, and remains fast to compute, giving it high engineering application value.
Drawings
Fig. 1 is a flowchart of a method for increasing the number of feature points in a weak texture region of an image according to an embodiment of the present invention.
Fig. 2 is a flowchart of a specific implementation of the method for increasing the number of feature points in the image weak texture region according to the embodiment of the present invention.
Fig. 3 is a schematic diagram of a neighborhood selected by corner detection according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of the principles of the invention is provided in connection with the accompanying drawings.
As shown in fig. 1, the method for increasing the number of feature points in a weak texture region of an image according to an embodiment of the present invention includes the following steps:
s101: extracting details of an image to obtain a detail texture map of the image; constructing a Gaussian difference pyramid for an original image, detecting angular points at the same time, and performing parallel calculation by using the two detection methods;
s102: simultaneously, extracting corner points of the detail images, and finally combining the detail images into a feature point set; for the extracted feature points, generating a binary descriptor with rotation invariance on a corresponding image;
s103: and matching and filtering the descriptors of the two images to obtain a correct matching point set.
The application of the principles of the present invention will now be described in further detail with reference to the accompanying drawings.
As shown in fig. 2, the method for increasing the number of feature points in a weak texture region of an image according to an embodiment of the present invention specifically includes the following steps:
step 1: the acquired image is labeled NI and the grayscale image of image NI is labeled NL, and the image texture is extracted using the modified sobel operator. Aiming at the problem that the sobel operator has obvious response to the edge of the image and insufficient response to weak texture, a 3 multiplied by 3 second-order gradient Gaussian template is used for calculating the texture image, A represents the original image, G represents the original imagexAnd GyRespectively representing the gray values of the images detected by the transverse and longitudinal edges, and the formula is as follows:
calculating a gradient value G:
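A minimal numpy sketch of this texture-extraction step follows. The exact coefficients of the patent's 3 × 3 second-order gradient Gaussian template are not reproduced in the text, so classic Sobel kernels are used here as stand-ins, and G = sqrt(Gx^2 + Gy^2) is the conventional magnitude:

```python
import numpy as np

# Stand-in 3x3 kernels: the patent's exact "second-order gradient Gaussian
# template" coefficients are not given, so classic Sobel kernels are used.
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=np.float64)
KY = KX.T

def filter3x3(img, k):
    """Valid-mode 3x3 cross-correlation (no padding, no kernel flip)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y:y + 3, x:x + 3] * k)
    return out

def texture_map(img):
    """Gradient-magnitude texture map: G = sqrt(Gx^2 + Gy^2)."""
    gx = filter3x3(img, KX)
    gy = filter3x3(img, KY)
    return np.sqrt(gx ** 2 + gy ** 2)
```

A flat region produces zero response, while edges and fine texture produce large values, which is why the detail map concentrates feature points in textured areas.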
further, in the second step: step 2: the three modules of the part are processed in parallel. And extracting corner points of the detail picture, the corner points of the original picture and the spatial extreme points.
The angular point extraction method comprises the following steps:
For an arbitrary point p of image NL with pixel value I_p, a threshold t is set and 16 points around p forming a circle with a radius of 4 are selected. If among the 16 points there are 12 consecutive points whose pixel values are all greater than I_p + t or all less than I_p - t, the point p is selected as a corner point.
The neighborhood selected by corner detection is shown in figure 3.
For a central pixel, selecting 16 points of a neighborhood circle, setting a threshold t, and dividing pixel points on the circle into three states according to pixel intensity, wherein the dividing method comprises the following steps:
referring to the above formula, if there are n consecutive points that are all more intense than the center point pixel, or all less intense, such center point is the corner point. Typically, the value of n is 9 or 12.
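The segment test above can be sketched as follows. The classic FAST-16 Bresenham circle offsets are used here; the patent describes 16 points on a circle of radius 4 without listing the sampling positions, so the standard FAST pattern is a stand-in:

```python
import numpy as np

# 16 sampling offsets on a Bresenham circle (the classic FAST-16 pattern,
# used as a stand-in for the patent's 16-point circle).
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_corner(img, x, y, t, n=12):
    """Segment test: (x, y) is a corner if n consecutive circle points are all
    brighter than I_p + t or all darker than I_p - t."""
    ip = int(img[y, x])
    # Classify each circle point into one of three states: +1 brighter,
    # -1 darker, 0 similar (the three states described in the patent).
    states = []
    for dx, dy in CIRCLE:
        v = int(img[y + dy, x + dx])
        states.append(1 if v > ip + t else (-1 if v < ip - t else 0))
    # Look for a run of n equal states on the doubled (circular) sequence.
    doubled = states + states
    for s in (1, -1):
        run = 0
        for st in doubled:
            run = run + 1 if st == s else 0
            if run >= n:
                return True
    return False
```

The doubled list handles runs that wrap around the circle; only integer comparisons are needed, which is why this detector is cheap enough to run on every pixel.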
The method for calculating the spatial extreme point comprises the following steps:
extracting a scale space of an image required to be constructed by extracting a space extreme point, wherein the construction process of the scale space comprises the following steps:
defining a scale function for the image:
L(x,y,σ)=G(x,y,σ)*I(x,y);
where I(x, y) is the pixel intensity of image NL, * denotes the convolution operation, and G(x, y, σ) is the Gaussian function G(x, y, σ) = (1/(2πσ^2)) · exp(-(x^2 + y^2)/(2σ^2)).
to efficiently detect the location of dimensionally stable keypoints, a difference gaussian operator (DOG) is used, introducing a constant k to compute two adjacent scales:
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)
=L(x,y,kσ)-L(x,y,σ);
in the actual calculation, the difference between the adjacent upper and lower layers of images in each group of the Gaussian pyramid is used to obtain a Gaussian difference image.
In order to extract the spatial local maximum and minimum points, each pixel point is compared with the other 8 points in its 3 × 3 neighborhood in the current layer and with the 9 × 2 points at the adjacent scales above and below. Only when the current point is the maximum or minimum of all 27 points is it selected as a feature point.
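The 27-point comparison can be sketched as below; strict inequality against all 26 neighbours is assumed:

```python
import numpy as np

def is_scale_space_extremum(dogs, s, y, x):
    """True if dogs[s][y, x] is strictly greater (or strictly less) than its
    26 neighbours: 8 in the same DoG layer plus 9 in each adjacent layer."""
    v = dogs[s][y, x]
    cube = np.stack([d[y - 1:y + 2, x - 1:x + 2] for d in dogs[s - 1:s + 2]])
    others = np.delete(cube.ravel(), 13)  # drop the centre of the 3x3x3 cube
    return bool(v > others.max() or v < others.min())
```

Because each test touches only a 3 × 3 × 3 cube, the check is independent per pixel and parallelises well, consistent with the hardware-friendliness the patent emphasises.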
Step 3: a 31 × 31 neighborhood centered on the feature point is selected, and the (p + q)-order image moment m_pq is computed as m_pq = Σ_x Σ_y x^p y^q I(x, y).
the direction angle θ of the feature point is calculated as follows:
θ=atan2(m01,m10);
a principal direction is assigned to each feature point.
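The intensity-centroid orientation of steps 3 above can be sketched as follows; measuring coordinates from the patch centre is an assumption about the coordinate convention:

```python
import numpy as np

def orientation(patch):
    """Principal direction from the intensity centroid of a square patch:
    m_pq = sum x^p y^q I(x, y), theta = atan2(m01, m10), with (x, y)
    measured from the patch centre (the keypoint)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xs -= (w - 1) / 2.0   # centre the coordinate system on the keypoint
    ys -= (h - 1) / 2.0
    m10 = np.sum(xs * patch)
    m01 = np.sum(ys * patch)
    return np.arctan2(m01, m10)
```

The angle points from the keypoint toward the brightness centroid, so rotating the image rotates the angle with it; this is what makes the descriptor rotation-invariant.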
Step 4: the 31 × 31 neighborhood of the feature point coordinates is rotated along the main direction and mapped into a new coordinate system; the rotated coordinate position (x', y') relates to the original coordinates (x, y) as x' = x cos θ - y sin θ, y' = x sin θ + y cos θ,
where θ is the principal direction of the feature point.
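The coordinate mapping can be sketched as:

```python
import numpy as np

def rotate_offsets(points, theta):
    """Map sampling offsets (x, y) around a keypoint into the rotated frame:
    x' = x*cos(theta) - y*sin(theta), y' = x*sin(theta) + y*cos(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s, c]])
    return points @ R.T  # points is an (N, 2) array of row vectors
```

In practice the rotated offsets are computed once per keypoint and applied to the fixed lookup table of sampling pairs, so the descriptor samples the patch in its own oriented frame.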
Step 5: the positions of the sampling point pairs on the image where the feature point lies are obtained from a lookup table, and for each pair the sums of pixel intensities over the 3 × 3 neighborhoods of the two points are compared: if the first is greater, a 1 is generated, otherwise a 0. Finally a 256-dimensional binary vector is produced as the feature point description vector. The positions of the sampling points and the comparison order are exactly the same for every feature point.
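A minimal sketch of this descriptor generation is given below. The pair lookup table itself is not reproduced in the text, so `pairs` is supplied by the caller; 256 pairs would give the patent's 256-dimensional vector:

```python
import numpy as np

def patch_sum(img, x, y):
    """Sum of pixel intensities over the 3x3 neighbourhood of (x, y)."""
    return float(img[y - 1:y + 2, x - 1:x + 2].sum())

def binary_descriptor(img, kp, pairs):
    """Binary description vector: one bit per sampling pair, set to 1 when the
    3x3-summed intensity at the first offset exceeds that at the second.
    `pairs` is a fixed lookup table of ((x1, y1), (x2, y2)) offsets, identical
    for every keypoint, as the patent requires."""
    kx, ky = kp
    bits = np.empty(len(pairs), dtype=np.uint8)
    for i, ((x1, y1), (x2, y2)) in enumerate(pairs):
        s1 = patch_sum(img, kx + x1, ky + y1)
        s2 = patch_sum(img, kx + x2, ky + y2)
        bits[i] = 1 if s1 > s2 else 0
    return bits
```

Summing a 3 × 3 neighbourhood before comparing acts as a cheap smoothing step, which makes each bit less sensitive to pixel noise than a raw two-pixel comparison.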
The following describes the effects of the present invention in detail with reference to the accompanying drawings.
The effect of the invention is illustrated on a test image: details of the image to be detected are extracted with the detail extraction method, and correct matching points are then extracted on the resulting detail image. The final number of correct matching pairs is 141, versus only 89 pairs for the ORB algorithm; the method also performs well in the green rectangular region of the figure, where ORB fails to extract any correct matching points.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. A method for increasing the number of feature points in a weak texture region of an image, characterized by comprising: extracting details of an image to obtain a detail texture map of the image; constructing a Gaussian difference pyramid for the original image while detecting corner points; computing the detail extraction and the Gaussian difference pyramid in parallel, simultaneously extracting corners of the detail image, and combining the detail-image corner extraction result and the Gaussian difference pyramid calculation result into a feature point set; for the extracted feature points, generating a binary descriptor with rotation invariance on the corresponding image; and then matching and filtering the descriptors of the images to be matched to obtain a correct matching point set.
2. The method for increasing the number of the feature points in the image weak texture region according to claim 1, wherein the method for increasing the number of the feature points in the image weak texture region specifically includes the following steps:
step one, carrying out weak texture extraction on the image to generate a weak texture image;
step two, extracting corners and spatial extreme points of the original input image and corners of the detail image, wherein the original-image corner calculation, the original-image spatial extreme point calculation, and the detail-image corner calculation proceed in parallel;
step three, selecting a 31 × 31 neighborhood centered on the feature point, calculating the centroid of the image within the neighborhood, and taking the direction from the center point to the centroid as the main direction of the feature point;
step four, rotating the neighborhood of the feature point along its main direction and mapping it into a new coordinate system, using the image where the feature point is located during calculation;
step five, acquiring the positions of the sampling point pairs on the image where the feature point is located from a lookup table, summing and comparing the pixel intensities over the 3 × 3 neighborhoods of the sampling points, and generating a 256-dimensional binary vector as the feature point description vector.
3. The method for increasing the number of the feature points of the image weak texture region as claimed in claim 2, wherein in the first step texture images are computed using a 3 × 3 second-order gradient Gaussian template:
in the horizontal direction:
where I (x, y) represents the pixel intensity of image NL;
in the vertical direction:
calculating the gradient magnitude: G = sqrt(Gx^2 + Gy^2).
4. the method for increasing the number of the feature points of the image weak texture region as claimed in claim 2, wherein in the second step:
the angular point extraction method comprises the following steps:
for an arbitrary point p of image NL with pixel value I_p, a threshold t is set and 16 points around p forming a circle with a radius of 4 are selected; if among the 16 points there are 12 consecutive points whose pixel values are all greater than I_p + t or all less than I_p - t, the point p is selected as a corner point;
for a central pixel, 16 points on the neighborhood circle are selected, a threshold t is set, and the pixel points on the circle are divided into three states according to pixel intensity, the dividing method comprising the following steps:
if the pixel intensities of n consecutive points are all greater than that of the central point, or all less, the central point is a corner point; n takes the value 9 or 12;
the method for calculating the spatial extreme point comprises the following steps:
extracting a scale space of an image required to be constructed by extracting a space extreme point, wherein the construction process of the scale space comprises the following steps:
defining a scale function for the image:
L(x,y,σ)=G(x,y,σ)*I(x,y);
where I(x, y) is the pixel intensity of image NL, * denotes the convolution operation, and G(x, y, σ) is the Gaussian function G(x, y, σ) = (1/(2πσ^2)) · exp(-(x^2 + y^2)/(2σ^2));
the constant k is introduced to calculate two adjacent scales:
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)
=L(x,y,kσ)-L(x,y,σ)。
5. The method for increasing the number of the feature points in the weak texture region of the image as claimed in claim 4, wherein the spatial local maximum and minimum points are extracted: each pixel point is compared with the other 8 points in its 3 × 3 neighborhood in the current layer and with the 9 × 2 points at the adjacent scales above and below; only when the current point is the maximum or minimum of all 27 points is it selected as a feature point.
6. The method for increasing the number of the feature points of the image weak texture region as claimed in claim 2, wherein in the third step: a 31 × 31 neighborhood centered on the feature point is selected, and the (p + q)-order image moment is computed as m_pq = Σ_x Σ_y x^p y^q I(x, y);
the direction angle θ of the feature point is calculated as follows:
θ=atan2(m01,m10);
a principal direction is assigned to each feature point.
7. The method for increasing the number of the feature points of the image weak texture region as claimed in claim 2, wherein in the fourth step: the 31 × 31 neighborhood of the feature point coordinates is rotated along the main direction and mapped into a new coordinate system; the rotated coordinate position (x', y') relates to the original coordinates (x, y) as x' = x cos θ - y sin θ, y' = x sin θ + y cos θ,
where θ is the principal direction of the feature point.
8. An image matching system using the method for increasing the number of the feature points of the image weak texture region as claimed in any one of claims 1 to 7.
9. A computer vision system using the method for increasing the number of the feature points of the image weak texture area according to any one of claims 1 to 7.
CN201710145106.4A 2017-03-13 2017-03-13 Method for increasing number of feature points of image weak texture area Active CN106960451B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710145106.4A CN106960451B (en) 2017-03-13 2017-03-13 Method for increasing number of feature points of image weak texture area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710145106.4A CN106960451B (en) 2017-03-13 2017-03-13 Method for increasing number of feature points of image weak texture area

Publications (2)

Publication Number Publication Date
CN106960451A (en) 2017-07-18
CN106960451B (en) 2019-12-31

Family

ID=59471741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710145106.4A Active CN106960451B (en) 2017-03-13 2017-03-13 Method for increasing number of feature points of image weak texture area

Country Status (1)

Country Link
CN (1) CN106960451B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657524B (en) * 2017-10-11 2021-03-05 阿里巴巴(中国)有限公司 Image matching method and device
CN108455228B (en) * 2017-12-29 2023-07-28 长春师范大学 Automatic tire loading system
CN110276836A (en) * 2018-03-13 2019-09-24 幻视互动(北京)科技有限公司 A kind of method and MR mixed reality intelligent glasses accelerating characteristic point detection
GB2572756B (en) * 2018-04-05 2020-05-06 Imagination Tech Ltd Sampling for feature detection
US11527044B2 (en) * 2018-06-27 2022-12-13 Samsung Electronics Co., Ltd. System and method for augmented reality
CN109615645A (en) * 2018-12-07 2019-04-12 国网四川省电力公司电力科学研究院 The Feature Points Extraction of view-based access control model
CN109934777B (en) * 2019-01-09 2023-06-02 深圳市三宝创新智能有限公司 Image local invariant feature extraction method, device, computer equipment and storage medium
CN110046623B (en) * 2019-03-04 2021-09-10 青岛小鸟看看科技有限公司 Image feature point extraction method and camera
CN110264556A (en) * 2019-06-10 2019-09-20 张慧 A kind of generation method without the random complex texture of repetition
CN112348032B (en) * 2019-08-09 2022-10-14 珠海一微半导体股份有限公司 SIFT algorithm key point detection method based on hardware circuit
CN110675388B (en) * 2019-09-27 2024-02-02 沈阳派得林科技有限责任公司 Weld joint image similarity comparison method
CN110852235A (en) * 2019-11-05 2020-02-28 长安大学 Image feature extraction method
CN111460941B (en) * 2020-03-23 2023-06-09 南京智能高端装备产业研究院有限公司 Visual navigation feature point extraction and matching method in wearable navigation equipment
CN111599171A (en) * 2020-04-24 2020-08-28 重庆科技学院 Intelligent control system, method and storage medium for applying big data to traffic
CN112308797B (en) * 2020-10-30 2024-02-02 维沃移动通信有限公司 Corner detection method and device, electronic equipment and readable storage medium
CN112818989B (en) * 2021-02-04 2023-10-03 成都工业学院 Image matching method based on gradient amplitude random sampling
CN112907580B (en) * 2021-03-26 2024-04-19 东南大学 Image feature extraction and matching algorithm using combined point and line features in weak-texture scenes
CN116704031B (en) * 2023-06-13 2024-01-30 中国人民解放军61540部队 Method and system for rapidly acquiring satellite image connection point

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101419709A (en) * 2008-12-08 2009-04-29 北京航空航天大学 Automatic matching method of planar target feature points for camera calibration
CN102122359A (en) * 2011-03-03 2011-07-13 北京航空航天大学 Image registration method and device
CN103761739A (en) * 2014-01-23 2014-04-30 武汉大学 Image registration method based on half energy optimization
CN105069815A (en) * 2015-07-27 2015-11-18 广东东软学院 Method and device for tracking small, weak targets in sea-surface surveillance images
CN105551035A (en) * 2015-12-09 2016-05-04 深圳市华和瑞智科技有限公司 Stereoscopic vision matching method based on weak edge and texture classification
CN105844663A (en) * 2016-03-21 2016-08-10 中国地质大学(武汉) Adaptive ORB object tracking method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8724893B2 (en) * 2011-09-27 2014-05-13 Thomson Licensing Method and system for color look up table generation

Also Published As

Publication number Publication date
CN106960451A (en) 2017-07-18

Similar Documents

Publication Publication Date Title
CN106960451B (en) Method for increasing number of feature points of image weak texture area
CN104200461B (en) Remote sensing image registration method based on mutual-information image block selection and SIFT features
CN111435438A (en) Graphical fiducial mark recognition for augmented reality, virtual reality and robotics
Yao et al. A new pedestrian detection method based on combined HOG and LSS features
CN106991689B (en) Target tracking method based on FHOG and color features with GPU acceleration
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
WO2019237976A1 (en) Differential image-based foreign matter detection method and apparatus, and device and storage medium
CN109101981B (en) Loop detection method based on global image stripe code in streetscape scene
Uchiyama et al. Toward augmenting everything: Detecting and tracking geometrical features on planar objects
CN104123554A (en) SIFT image characteristic extraction method based on MMTD
CN108319961B (en) Image ROI rapid detection method based on local feature points
Zhu et al. A fast image stitching algorithm based on improved SURF
CN111199558A (en) Image matching method based on deep learning
CN107527348B (en) Significance detection method based on multi-scale segmentation
Suryawibawa et al. Herb recognition based on Android using OpenCV
CN107358244B (en) Fast local invariant feature extraction and description method
CN103336964A (en) SIFT image matching method based on modulus-difference mirror invariance
Cai et al. Feature detection and matching with linear adjustment and adaptive thresholding
Bao et al. A corner detection method based on adaptive multi-directional anisotropic diffusion
Feng et al. Research on an image mosaic algorithm based on improved ORB feature combined with surf
Wang et al. Unified detection of skewed rotation, reflection and translation symmetries from affine invariant contour features
CN105913068B (en) Multi-dimensional directional gradient representation method for describing image features
Tang et al. An improved local feature descriptor via soft binning
CN103617616A (en) Affine invariant image matching method
Xing et al. An improved algorithm on image stitching based on SIFT features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant