CN110705569A - Image local feature descriptor extraction method based on texture features

Image local feature descriptor extraction method based on texture features

Info

Publication number
CN110705569A
Authority
CN
China
Prior art keywords
image
neighborhood
feature
local
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910882534.4A
Other languages
Chinese (zh)
Inventor
郭文华 (Guo Wenhua)
贺晨龙 (He Chenlong)
马耀军 (Ma Yaojun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN201910882534.4A
Publication of CN110705569A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]

Abstract

The invention discloses an image local feature descriptor extraction method based on texture features, comprising the following steps. Step one: detect the local feature regions of the whole image with a Hessian-Affine detector to obtain an elliptical neighborhood around each feature point. Step two: compute the principal direction of the elliptical neighborhood, take the major axis of the neighborhood as the horizontal axis of the image coordinate system and the minor axis as the vertical axis, rotate the elliptical neighborhood, and then affinely project the rotated neighborhood into a circular neighborhood. Step three: divide the circular neighborhood into M sub-regions with a Cartesian grid and compute the DCS-LBP feature values of each sub-region. Step four: compute the feature statistical histograms of each sub-region and concatenate them to obtain the sub-region's feature statistical histogram. Step five: concatenate the feature statistical histograms of all sub-regions to obtain the feature descriptor of the local feature region. Step six: normalize and threshold the local feature descriptor to finally obtain the DCS-LBP feature descriptor.

Description

Image local feature descriptor extraction method based on texture features
Technical Field
The invention belongs to the field of computer image processing, and particularly relates to a texture-feature-based method for extracting a local image feature descriptor (DCS-LBP for short).
Background
With the development of computers, information technology and multimedia technology, computer image processing has advanced rapidly in recent years, and the demands placed on computer vision applications keep growing; fast, convenient and intelligent high-performance digital image processing algorithms have become the direction of future development. Image features are a central problem in machine vision, with wide applications in target recognition and tracking, image classification and retrieval, three-dimensional scene reconstruction and so on. The description of local image features has therefore become a research hotspot in the field of computer vision.
The practical pipeline for applying local feature descriptors in computer vision is as follows: first, detect feature points with certain properties in the images to be matched; second, select a suitable description block in the neighborhood around each feature point; third, design a local feature descriptor with certain invariances for the description block; finally, determine the corresponding target points by matching the feature points between different images. The key to this approach is an efficient way of encoding the target region so as to obtain invariance to scale, rotation, perspective and affine transformations of the image. In recent years, many descriptor algorithms for describing local image features have been developed, such as the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Local Binary Patterns (LBP), and so on. Their main drawbacks are a high descriptor feature dimension and a low matching speed.
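For orientation only, the following is a minimal sketch of this detect-describe-match pipeline using OpenCV's SIFT (available as cv2.SIFT_create in OpenCV 4.4 and later) as a stand-in detector and descriptor; the file names and the 0.6 distance ratio are illustrative assumptions, not part of the invention:

```python
import cv2

img1 = cv2.imread("scene1.png", cv2.IMREAD_GRAYSCALE)  # illustrative file names
img2 = cv2.imread("scene2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # detect feature points + describe
kp2, des2 = sift.detectAndCompute(img2, None)

# match with the nearest-neighbour distance-ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.6 * n.distance]
print(len(good), "tentative correspondences")
```

The invention keeps this overall pipeline but replaces the descriptor stage with the texture-based DCS-LBP descriptor described below.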
Disclosure of Invention
In order to overcome the defects of high dimensionality and low matching speed of descriptor features in the conventional method, the invention aims to provide an image local feature descriptor extraction method based on texture features.
The invention is realized by adopting the following technical scheme:
a method for extracting image local feature descriptors based on texture features comprises the following steps:
Step one: detecting the local feature region of the whole image with a Hessian-Affine detector to obtain an elliptical neighborhood around each feature point;
Step two: calculating the principal direction of the elliptical neighborhood, taking the major axis of the neighborhood as the horizontal axis of the image coordinate system and the minor axis as the vertical axis, and rotating the elliptical neighborhood; then performing an affine projection on the rotated neighborhood to convert it into a circular neighborhood;
Step three: dividing the circular neighborhood into M sub-regions with a Cartesian grid, and calculating the DCS-LBP feature values DCS-LBP^upper and DCS-LBP^lower of each sub-region;
Step four: calculating a feature statistical histogram for each sub-region: hupperAnd HlowerThe feature statistical histograms of the sub-regions are obtained by connecting the sub-regions in series;
step five: all the feature statistical histograms of all the sub-regions are connected in series to obtain a feature descriptor of the local feature region;
step six: and carrying out normalization and thresholding on the local feature descriptor to finally obtain the image local feature descriptor.
In a further improvement of the invention, step one is implemented as follows:
detect feature points with a multi-scale Hessian detector, construct a scale space for each detected feature point, and compute on each scale of the scale sequence the Gaussian-smoothed local image

$$L(x,\Sigma)=g(x,\Sigma)*I(x),\qquad g(x,\Sigma)=\frac{1}{2\pi\sqrt{\det\Sigma}}\exp\!\Big(-\frac{x^{T}\Sigma^{-1}x}{2}\Big),$$

where g(x, Σ) is an affine Gaussian kernel, I(x) is the gray level of the local image, Σ denotes a positive-definite scale covariance matrix, and * is the convolution over the image block; the second moment matrix is then

$$\mu(x,\Sigma_I,\Sigma_D)=g(x,\Sigma_I)*\big((\nabla L)(x,\Sigma_D)(\nabla L)(x,\Sigma_D)^{T}\big),$$

where Σ_I and Σ_D are the integration and differentiation scale matrices of the feature point.
In a further improvement of the invention, the calculation in step three is carried out according to the following formulas:
$$\mathrm{DCS\text{-}LBP}^{upper}_{P,R,T}=\sum_{i=0}^{P/2-1}s(g_i-g_{i+P/2})\,2^{i},\qquad s(x)=\begin{cases}1,&x>T\\0,&\text{otherwise,}\end{cases}$$

$$\mathrm{DCS\text{-}LBP}^{lower}_{P,R,T}=\sum_{i=0}^{P/2-1}s'(g_i-g_{i+P/2})\,2^{i},\qquad s'(x)=\begin{cases}1,&x<-T\\0,&\text{otherwise,}\end{cases}$$

where g_i are the P neighborhood points distributed at equal intervals on a circle of radius R centered at the feature point, g_{i+P/2} is the centrally symmetric point of g_i, DCS-LBP^upper and DCS-LBP^lower are the upper and lower halves of the DCS-LBP code, and T is the threshold parameter.
In a further improvement of the invention, in step four the feature statistical histograms H_upper and H_lower of each sub-region are calculated according to the following formula:

$$H(k)=\sum_{x=1}^{h}\sum_{y=1}^{w}f\big(\mathrm{DCS\text{-}LBP}(x,y),k\big),\qquad f(a,k)=\begin{cases}1,&a=k\\0,&\text{otherwise,}\end{cases}$$

where k ∈ [0, K] and K is the maximum DCS-LBP code value; h is the height of the local texture image; w is the width of the local texture image.
In a further improvement of the invention, the DCS-LBP feature descriptor in step six is:

$$l_i=\frac{h_i}{\sqrt{\sum_{j=1}^{N}h_j^{2}}},\qquad i=1,2,\ldots,N,$$

where H = (h_1, h_2, ..., h_N) is the statistical histogram vector of the DCS-LBP feature descriptor, L = (l_1, l_2, ..., l_N) is the normalized statistical histogram vector, N = M × 2 × 2^{P/2} is the feature dimension of the DCS-LBP descriptor histogram vector, and M is the number of divided sub-regions.
The invention has the following beneficial technical effects:
Compared with existing methods for extracting local image feature descriptors, the method of the invention is invariant to affine, scale, rotation and illumination transformations, while reducing the feature dimension of the descriptor, thereby lowering the computational complexity and offering a degree of real-time capability.
Drawings
FIG. 1 is a diagram of an elliptical affine invariant region;
FIG. 2 is a feature area division diagram;
FIG. 3 is a DCS-LBP feature description subgraph;
FIG. 4 shows Mikolajczyk dataset pictures, where FIG. 4(a) is the Bark1 original image, FIG. 4(b) is Bark2, obtained from Bark1 by scale and rotation transformation, FIG. 4(c) is the Graf1 original image, FIG. 4(d) is Graf2, obtained from Graf1 by perspective transformation, FIG. 4(e) is the Bikes1 original image, and FIG. 4(f) is Bikes2, obtained from Bikes1 by blur transformation;
FIG. 5 shows the experimental results, where FIG. 5(a), FIG. 5(b) and FIG. 5(c) are the recall vs. 1-precision evaluation curves of five local feature descriptors on the Bark, Graf and Bikes images, respectively.
Detailed Description
The invention is further described below with reference to the following figures and examples.
The invention provides an image local feature descriptor extraction method based on texture features, which comprises the following steps:
Step one: detect feature points over the whole image with a multi-scale Hessian detector and construct a scale space for each detected feature point. On each scale of the scale sequence, compute the Gaussian-smoothed local image

$$L(x,\Sigma)=g(x,\Sigma)*I(x),\qquad g(x,\Sigma)=\frac{1}{2\pi\sqrt{\det\Sigma}}\exp\!\Big(-\frac{x^{T}\Sigma^{-1}x}{2}\Big),$$

where g(x, Σ) is an affine Gaussian kernel, I(x) is the gray level of the local image, Σ denotes a positive-definite scale covariance matrix, and * is the convolution over the image block. The second moment matrix is then

$$\mu(x,\Sigma_I,\Sigma_D)=g(x,\Sigma_I)*\big((\nabla L)(x,\Sigma_D)(\nabla L)(x,\Sigma_D)^{T}\big),$$

where Σ_I and Σ_D are the integration and differentiation scale matrices, ∇L is the image gradient, and T denotes transposition. Taking the feature point as the center, the eigenvalues and eigenvectors of the second moment matrix determine the shape of the region associated with the feature point, which yields an affine-invariant local feature whose feature region is the elliptical neighborhood, as shown in FIG. 1.
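A non-authoritative sketch of this computation, with scalar scales sigma_d and sigma_i standing in for the full covariance matrices Σ_D and Σ_I (the isotropic case used to initialize affine adaptation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def second_moment(image, sigma_d=1.0, sigma_i=1.6):
    L = gaussian_filter(image.astype(np.float64), sigma_d)  # L = g * I
    Ly, Lx = np.gradient(L)                                 # components of grad L
    # smooth the outer products (grad L)(grad L)^T at the integration scale
    m11 = gaussian_filter(Lx * Lx, sigma_i)
    m12 = gaussian_filter(Lx * Ly, sigma_i)
    m22 = gaussian_filter(Ly * Ly, sigma_i)
    return m11, m12, m22

def ellipse_shape(m11, m12, m22, y, x):
    # eigenvalues/eigenvectors of mu at one feature point give the axes and
    # orientation of the elliptical neighbourhood
    mu = np.array([[m11[y, x], m12[y, x]],
                   [m12[y, x], m22[y, x]]])
    eigvals, eigvecs = np.linalg.eigh(mu)
    return eigvals, eigvecs
```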
Step two: calculate the principal direction of the elliptical neighborhood, take the major axis of the neighborhood as the horizontal axis of the image coordinate system and the minor axis as the vertical axis, and rotate the elliptical neighborhood. Then perform an affine projection on the rotated neighborhood to convert it into a circular neighborhood.
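A hedged sketch of this ellipse-to-circle normalization, assuming the elliptical region is described by (x - c)^T μ (x - c) = 1 and a 41x41 output patch (both the parameterization and the patch size are assumptions):

```python
import numpy as np
import cv2

def normalize_region(image, center, mu, out_size=41):
    # Ellipse (x - c)^T mu (x - c) = 1  ->  circle of radius r:
    # y = r * mu^(1/2) (x - c) + (r, r), with mu^(1/2) from eigh;
    # `center` is the feature point in (x, y) image coordinates.
    r = out_size / 2.0
    w, V = np.linalg.eigh(mu)
    A = r * (V @ np.diag(np.sqrt(w)) @ V.T)       # r * mu^(1/2)
    t = np.array([r, r]) - A @ np.asarray(center, dtype=float)
    M = np.hstack([A, t[:, None]])                # 2x3 affine matrix
    return cv2.warpAffine(image, M, (out_size, out_size))
```

The rotation and the anisotropic scaling of step two are combined here into a single affine warp.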
Step three: divide the circular neighborhood into M sub-regions with a Cartesian grid; here M = 16, i.e., a 4x4 grid, as shown in FIG. 2. Compute the DCS-LBP feature values DCS-LBP^upper and DCS-LBP^lower of each sub-region:

$$\mathrm{DCS\text{-}LBP}^{upper}_{P,R,T}=\sum_{i=0}^{P/2-1}s(g_i-g_{i+P/2})\,2^{i},\qquad s(x)=\begin{cases}1,&x>T\\0,&\text{otherwise,}\end{cases}$$

$$\mathrm{DCS\text{-}LBP}^{lower}_{P,R,T}=\sum_{i=0}^{P/2-1}s'(g_i-g_{i+P/2})\,2^{i},\qquad s'(x)=\begin{cases}1,&x<-T\\0,&\text{otherwise,}\end{cases}$$

where g_i are the P neighborhood points distributed at equal intervals on a circle of radius R centered at the feature point (here R = 1 and P = 8), g_{i+P/2} is the centrally symmetric point of g_i, DCS-LBP^upper and DCS-LBP^lower are the upper and lower halves of the DCS-LBP code, and T is the threshold parameter.
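A minimal sketch of the per-pixel DCS-LBP codes as reconstructed above, for P = 8 and R = 1; the neighbour ordering and the example threshold are illustrative assumptions, and T must match the intensity scale of the input:

```python
import numpy as np

def dcs_lbp_codes(patch, T=0.01):
    """Return (upper, lower) DCS-LBP code maps for the interior pixels."""
    p = patch.astype(np.float64)
    # 8 neighbours at radius R = 1, ordered so that index i and i + 4 are
    # centre-symmetric about the pixel
    offs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    n = [p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
         for dy, dx in offs]
    upper = np.zeros_like(n[0], dtype=np.int32)
    lower = np.zeros_like(n[0], dtype=np.int32)
    for i in range(4):                           # P/2 = 4 centre-symmetric pairs
        d = n[i] - n[i + 4]
        upper |= (d > T).astype(np.int32) << i   # bit set when d > T
        lower |= (d < -T).astype(np.int32) << i  # bit set when d < -T
    return upper, lower                          # each code lies in [0, 15]
```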
Step four: compute the feature statistical histograms H_upper and H_lower of each sub-region according to the following formula, and concatenate them to obtain the feature statistical histogram of the sub-region:

$$H(k)=\sum_{x=1}^{h}\sum_{y=1}^{w}f\big(\mathrm{DCS\text{-}LBP}(x,y),k\big),\qquad f(a,k)=\begin{cases}1,&a=k\\0,&\text{otherwise,}\end{cases}$$

where k ∈ [0, K] and K is the maximum DCS-LBP code value, h is the height of the local texture image, and w is its width.
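Continuing the sketch, the per-sub-region histograms of step four for the P = 8 case, where each half has 2^{P/2} = 16 bins:

```python
import numpy as np

def subregion_histogram(upper, lower, nbins=16):
    # H_upper and H_lower count how often each code k occurs in the
    # sub-region; concatenation gives the sub-region's 32-D histogram
    h_u = np.bincount(upper.ravel(), minlength=nbins)
    h_l = np.bincount(lower.ravel(), minlength=nbins)
    return np.concatenate([h_u, h_l]).astype(np.float64)
```

Concatenating the 16 sub-region histograms then yields the 512-dimensional descriptor of step five.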
Step five: concatenate the feature statistical histograms of all sub-regions to obtain the feature descriptor of the local feature region.
step six: the local feature descriptor is normalized and thresholded according to the following formula to finally obtain the DCS-LBP feature descriptor, as shown in FIG. 3.
Figure BDA0002206314930000061
In the formula
Figure BDA0002206314930000062
Is a statistical histogram vector of the DCS-LBP feature descriptor, L ═ L1,l2,...,lN) Is normalizedN is the feature dimension of the histogram vector of the DCS-LBP descriptor,
Figure BDA0002206314930000063
where N is 16 x 24512, much smaller than the LBP operator M × 2 under the same parametersP=16*284096 and LTP operator M3P=16*38=104976。
Thresholding limits each component of the feature vector to at most a given threshold, here 0.2, after which the feature vector is normalized again.
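A small sketch of this normalize-clip-renormalize step, assuming L2 normalization (the clip value 0.2 is the one stated above):

```python
import numpy as np

def normalize_descriptor(v, clip=0.2):
    v = v / max(np.linalg.norm(v), 1e-12)      # first normalisation
    v = np.minimum(v, clip)                    # threshold each component at 0.2
    return v / max(np.linalg.norm(v), 1e-12)   # normalise again
```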
Image matching is carried out on scene pictures from the Mikolajczyk dataset containing different geometric deformations and illumination changes: the Bark test image pair is a textured scene under scale and rotation transformation, the Graf test image pair is a structured scene under viewpoint transformation, and the Bikes test image pair is a structured scene under blur transformation, as shown in FIG. 4. First, local feature descriptors are extracted from the two images to be matched, and the similarity between each descriptor in one image and the descriptors of the image to be matched is computed. A threshold is set to decide whether two descriptors are close to each other, which yields zero or more matching points. The invention sets this threshold to 0.6: if the similarity ratio of the most similar feature point to the next most similar one is greater than 0.6, the point is considered to have a match, namely the most similar point; otherwise no match is deemed to exist.
Recall and precision are used as the evaluation criteria for image matching. The Euclidean distance between the feature vector of each local region in one image and those of the local regions in the image to be matched is computed, matches are then determined with the nearest-neighbor distance-ratio threshold, and finally the numbers of correct and false matches are counted. The final matching results are plotted as recall vs. 1-precision curves:
$$\text{recall}=\frac{\#\text{correct matches}}{\#\text{correspondences}},\qquad 1-\text{precision}=\frac{\#\text{false matches}}{\#\text{all matches}}.$$
Correspondences denotes the maximum number of feature points extracted from the two images that can be matched correctly. All matches is the sum of the correct matches and the false matches. The matching results are shown in FIG. 5; the DCS-LBP feature descriptor performs better than the other four feature descriptors. Compared with the SIFT and SURF descriptors, the DCS-LBP descriptor of the invention also has good scale, rotation, affine and compression invariance, and is superior to them in depicting the local texture features of the image.
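As a small helper reflecting these definitions (the function and argument names are illustrative):

```python
def recall_and_1_precision(n_correct, n_false, n_correspondences):
    recall = n_correct / n_correspondences
    one_minus_precision = n_false / (n_correct + n_false)
    return recall, one_minus_precision
```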
The average computation time per feature point on the Graf original image of FIG. 4(c) is given in Table 1 below; the computation is fast and offers a degree of real-time capability.
TABLE 1 Average computation time per feature point for the Graf original image

Feature descriptor      Average time per feature point (ms)
SIFT descriptor         6.06
CS-LBP descriptor       2.64
SURF descriptor         2.98
CS-LTP descriptor       3.70
DCS-LBP descriptor      2.78

Claims (5)

1. A method for extracting image local feature descriptors based on texture features is characterized by comprising the following steps:
Step one: detecting the local feature region of the whole image with a Hessian-Affine detector to obtain an elliptical neighborhood around each feature point;
Step two: calculating the principal direction of the elliptical neighborhood, taking the major axis of the neighborhood as the horizontal axis of the image coordinate system and the minor axis as the vertical axis, and rotating the elliptical neighborhood; then performing an affine projection on the rotated neighborhood to convert it into a circular neighborhood;
Step three: dividing the circular neighborhood into M sub-regions with a Cartesian grid, and calculating the DCS-LBP feature values DCS-LBP^upper and DCS-LBP^lower of each sub-region;
Step four: calculating the feature statistical histograms H_upper and H_lower of each sub-region and concatenating them to obtain the feature statistical histogram of the sub-region;
Step five: concatenating the feature statistical histograms of all sub-regions to obtain the feature descriptor of the local feature region;
Step six: normalizing and thresholding the local feature descriptor to finally obtain the image local feature descriptor.
2. The method for extracting the image local feature descriptor based on texture features as claimed in claim 1, characterized in that step one is implemented as follows:
detecting feature points with a multi-scale Hessian detector, constructing a scale space for each detected feature point, and computing on each scale of the scale sequence the Gaussian-smoothed local image

$$L(x,\Sigma)=g(x,\Sigma)*I(x),\qquad g(x,\Sigma)=\frac{1}{2\pi\sqrt{\det\Sigma}}\exp\!\Big(-\frac{x^{T}\Sigma^{-1}x}{2}\Big),$$

where g(x, Σ) is an affine Gaussian kernel, I(x) is the gray level of the local image, Σ denotes a positive-definite scale covariance matrix, and * is the convolution over the image block; the second moment matrix is then

$$\mu(x,\Sigma_I,\Sigma_D)=g(x,\Sigma_I)*\big((\nabla L)(x,\Sigma_D)(\nabla L)(x,\Sigma_D)^{T}\big),$$

where Σ_I and Σ_D are the integration and differentiation scale matrices of the feature point.
3. The method for extracting the image local feature descriptor based on texture features as claimed in claim 2, characterized in that the calculation in step three is carried out according to the following formulas:

$$\mathrm{DCS\text{-}LBP}^{upper}_{P,R,T}=\sum_{i=0}^{P/2-1}s(g_i-g_{i+P/2})\,2^{i},\qquad s(x)=\begin{cases}1,&x>T\\0,&\text{otherwise,}\end{cases}$$

$$\mathrm{DCS\text{-}LBP}^{lower}_{P,R,T}=\sum_{i=0}^{P/2-1}s'(g_i-g_{i+P/2})\,2^{i},\qquad s'(x)=\begin{cases}1,&x<-T\\0,&\text{otherwise,}\end{cases}$$

where g_i are the P neighborhood points distributed at equal intervals on a circle of radius R centered at the feature point, g_{i+P/2} is the centrally symmetric point of g_i, DCS-LBP^upper and DCS-LBP^lower are the upper and lower halves of the DCS-LBP code, and T is the threshold parameter.
4. The method for extracting the image local feature descriptor based on texture features as claimed in claim 3, characterized in that in step four the feature statistical histograms H_upper and H_lower of each sub-region are calculated according to the following formula:

$$H(k)=\sum_{x=1}^{h}\sum_{y=1}^{w}f\big(\mathrm{DCS\text{-}LBP}(x,y),k\big),\qquad f(a,k)=\begin{cases}1,&a=k\\0,&\text{otherwise,}\end{cases}$$

where k ∈ [0, K] and K is the maximum DCS-LBP code value; h is the height of the local texture image; w is the width of the local texture image.
5. The method for extracting the image local feature descriptor based on texture features as claimed in claim 4, characterized in that the DCS-LBP feature descriptor in step six is:

$$l_i=\frac{h_i}{\sqrt{\sum_{j=1}^{N}h_j^{2}}},\qquad i=1,2,\ldots,N,$$

where H = (h_1, h_2, ..., h_N) is the statistical histogram vector of the DCS-LBP feature descriptor, L = (l_1, l_2, ..., l_N) is the normalized statistical histogram vector, N = M × 2 × 2^{P/2} is the feature dimension of the DCS-LBP descriptor histogram vector, and M is the number of divided sub-regions.
CN201910882534.4A 2019-09-18 2019-09-18 Image local feature descriptor extraction method based on texture features Pending CN110705569A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910882534.4A CN110705569A (en) 2019-09-18 2019-09-18 Image local feature descriptor extraction method based on texture features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910882534.4A CN110705569A (en) 2019-09-18 2019-09-18 Image local feature descriptor extraction method based on texture features

Publications (1)

Publication Number Publication Date
CN110705569A (en) 2020-01-17

Family

ID=69194789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910882534.4A Pending CN110705569A (en) 2019-09-18 2019-09-18 Image local feature descriptor extraction method based on texture features

Country Status (1)

Country Link
CN (1) CN110705569A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093226B (en) * 2012-12-20 2016-01-20 华南理工大学 A kind of building method of the RATMIC descriptor for characteristics of image process
CN105243386A (en) * 2014-07-10 2016-01-13 汉王科技股份有限公司 Face living judgment method and system
CN108027886A (en) * 2015-09-23 2018-05-11 高通股份有限公司 Use the system and method for increment object detection of dual threshold local binary pattern operator
CN105825183A (en) * 2016-03-14 2016-08-03 合肥工业大学 Face expression identification method based on partially shielded image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WENHUA GUO et al., "Object Tracking Using Local Multiple Features and a Posterior Probability Measure", Sensors *
TANG YILING et al., "Quality Assessment of Asymmetrically Distorted Stereoscopic Images Based on Ocular Dominance" (基于眼优势的非对称失真立体图像质量评价), Acta Automatica Sinica (自动化学报) *
ZHONG JINQIN et al., "SIFT Feature Matching Algorithm Based on Second-Order Moment" (基于二阶矩的SIFT特征匹配算法), Journal of Computer Applications (计算机应用) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739006A (en) * 2020-06-22 2020-10-02 深圳企业云科技股份有限公司 Elliptical image detection algorithm and system based on enclosed road integral
CN111739006B (en) * 2020-06-22 2021-07-13 深圳企业云科技股份有限公司 Elliptical image detection algorithm and system based on enclosed road integral
CN112560666A (en) * 2020-12-11 2021-03-26 北部湾大学 Robot vision servo grabbing target positioning method
CN112560666B (en) * 2020-12-11 2021-08-17 北部湾大学 Robot vision servo grabbing target positioning method

Similar Documents

Publication Publication Date Title
Jiang et al. Robust feature matching using spatial clustering with heavy outliers
Su et al. A fast forgery detection algorithm based on exponential-Fourier moments for video region duplication
CN110807473B (en) Target detection method, device and computer storage medium
US7620250B2 (en) Shape matching method for indexing and retrieving multimedia data
Tang et al. Distinctive image features from illumination and scale invariant keypoints
CN111753119A (en) Image searching method and device, electronic equipment and storage medium
Ali et al. Speeded up robust features for efficient iris recognition
CN105913069A (en) Image identification method
CN110705569A (en) Image local feature descriptor extraction method based on texture features
CN109840529B (en) Image matching method based on local sensitivity confidence evaluation
Sun et al. Graph-matching-based character recognition for Chinese seal images
CN108694411B (en) Method for identifying similar images
Wang et al. Accurate and robust image copy-move forgery detection using adaptive keypoints and FQGPCET-GLCM feature
Arjun et al. An efficient image retrieval system based on multi-scale shape features
CN108764245B (en) Method for improving similarity judgment accuracy of trademark graphs
CN116415210A (en) Image infringement detection method, device and storage medium
CN113283478B (en) Assembly body multi-view change detection method and device based on feature matching
Zahra Image duplication forgery detection using two robust features
Fan et al. Local patterns constrained image histograms for image retrieval
Zhou et al. Shape matching based on rectangularized curvature scale-space maps
Anvaripour et al. Accurate object detection using local shape descriptors
CN111160397A (en) Multi-scale visual dictionary generation method and system
CN111931791B (en) Method for realizing image turnover invariance
CN108897746B (en) Image retrieval method
Ouyang et al. Feature Point Extraction and Matching Based on Improved SURF Algorithm

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200117)