CN113408569A - Image registration method based on density clustering - Google Patents

Image registration method based on density clustering

Info

Publication number: CN113408569A
Other versions: CN113408569B
Application number: CN202110460146.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: final, point, points, image, feature point
Inventors: 刘伟壹, 谌小维, 廖祥
Applicant and assignee: Third Military Medical University (TMMU)
Legal status: Granted; Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06F 18/23 Clustering techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image registration method based on density clustering, which comprises the following steps: extracting a first final feature point set of the template image and a second final feature point set of each frame of target image; generating description vectors for all first final feature points in the first final feature point set and all second final feature points in the second final feature point set; matching the first final feature points with the second final feature points of the current frame target image according to the similarity of the description vectors; calculating registration parameters from all successfully matched first and second final feature points of the current frame target image; and transforming the current frame target image according to the registration parameters. By rapidly updating the feature points of the target image frame by frame through density clustering, the method greatly increases the speed of registration processing.

Description

Image registration method based on density clustering
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image registration method based on density clustering.
Background
Image registration selects a fixed reference image as a template and treats an image with relative offset as the target; an algorithm computes the coordinate offset with respect to the template image, and a rigid or non-rigid transformation then brings the target image into the same coordinate frame as the template. The mainstream algorithms are: registration based on feature point matching, registration based on similarity measures, and registration based on deep learning.
Registration based on feature point matching is a global search strategy: it traverses all pixel points, describes the neighborhood gradient around each point, and defines as feature points those that satisfy a given rule. Such algorithms generally apply widely, but are slow.
Registration based on similarity measures applies statistics and transforms to the global pixels of the image and performs sliding matching, e.g. FFT (fast Fourier transform) and MI (mutual information) registration. Such algorithms are relatively accurate, but also slow.
Registration based on deep learning achieves the highest accuracy on a particular data set, but requires training on extensive data.
Disclosure of Invention
The invention aims to overcome one or more defects in the prior art and provides an image registration method based on density clustering.
The purpose of the invention is realized by the following technical scheme: the image registration method based on density clustering comprises the following steps:
extracting a first final feature point set of the template image and a second final feature point set of each frame of target image;
respectively generating description vectors of all first final feature points in the first final feature point set and all second final feature points in the second final feature point set;
matching the first final feature points with the second final feature points of the current frame target image according to the similarity of the description vectors;
calculating registration parameters according to all successfully matched first final feature points and second final feature points of the current frame target image;
and carrying out image transformation on the current frame target image according to the registration parameters.
Preferably, if the target image is a single frame, extracting the first final feature points of the template image and the second final feature points of the target image comprises:
extracting first initial feature points of the template image and second initial feature points of the target image;
performing density clustering on all the first initial feature points to obtain a first intermediate feature point set, and performing density clustering on all the second initial feature points to obtain a second intermediate feature point set;
filtering noise points from the first intermediate feature point set using, as a threshold, a density obtained from the density clustering of the first initial feature points, and filtering noise points from the second intermediate feature point set using, as a threshold, a density obtained from the density clustering of the second initial feature points;
performing proximity merging and density clustering on all the first intermediate feature points to obtain the first final feature point set, and performing proximity merging and density clustering on all the second intermediate feature points to obtain the second final feature point set of the target image;
if the target image is an image sequence containing multiple frames, extracting the first final feature points of the template image and the second final feature points of the target image comprises:
extracting first initial feature points of the template image and second initial feature points of the first frame target image;
performing density clustering on all the first initial feature points to obtain a first intermediate feature point set, and performing density clustering on all the second initial feature points of the first frame target image to obtain a second intermediate feature point set;
filtering noise points from the first intermediate feature point set using, as a threshold, a density obtained from the density clustering of the first initial feature points, and filtering noise points from the second intermediate feature point set using, as a threshold, a density obtained from the density clustering of the second initial feature points of the first frame target image;
performing proximity merging and density clustering on all the first intermediate feature points to obtain the first final feature point set, and performing proximity merging and density clustering on all the second intermediate feature points of the first frame target image to obtain the second final feature point set of the first frame target image;
and, for each non-first frame target image, performing density clustering on all second final feature points in the second final feature point set of the previous frame target image to obtain the second final feature point set of that frame.
Preferably, the first initial feature points are extracted by the FAST algorithm or generated randomly, and the second initial feature points are likewise extracted by the FAST algorithm or generated randomly.
Preferably, performing density clustering on the first initial feature points comprises:
taking a neighborhood image within a preset range centered on each first initial feature point;
taking the pixel values of a preset number of pixel points in the neighborhood image as weights, iterating with an adaptive-step-size fast gradient ascent method to update each feature point until it terminates at a local density extremum, i.e. a density attraction point, and taking that density attraction point as a first intermediate feature point;
performing density clustering on the second initial feature points comprises:
taking a neighborhood image within a preset range centered on each second initial feature point;
and taking the pixel values of a preset number of pixel points in the neighborhood image as weights, iterating with an adaptive-step-size fast gradient ascent method to update each feature point until it terminates at a local density extremum, i.e. a density attraction point, and taking that density attraction point as a second intermediate feature point.
Preferably, the calculation formula of the adaptive-step-size fast gradient ascent method is:

$$\vec{x}_{t+1} = \frac{\sum_{i=1}^{n} w(\vec{x}_i)\, K\!\left(\dfrac{\vec{x}_t - \vec{x}_i}{h}\right) \vec{x}_i}{\sum_{i=1}^{n} w(\vec{x}_i)\, K\!\left(\dfrac{\vec{x}_t - \vec{x}_i}{h}\right)}, \qquad \text{iterating until } \left\|\vec{x}_{t+1} - \vec{x}_t\right\| < \epsilon$$

In the formula, $\hat f(\vec{x})$ is the estimated density corresponding to the coordinate vector $\vec{x}$ of an arbitrary point, $w(\vec{x}_i)$ is the pixel value at the coordinate vector $\vec{x}_i$, $\vec{x}_i$ are the estimation points of the region, $K$ is the Gaussian kernel, the bandwidth $h$ is a constant used to smooth the empirical distribution, and $\epsilon$ is a limiting parameter set to terminate the gradient ascent process.
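Read as a pixel-weighted mean-shift update, the adaptive-step iteration above can be sketched as follows; the synthetic patch, bandwidth, and all names here are illustrative assumptions, not from the patent:

```python
import numpy as np

def density_attractor(img, start, h=2.0, eps=1e-3, max_iter=100):
    """Iterate a pixel-weighted mean-shift update until the move is < eps.

    img   : 2-D array of pixel values (the weights w(x_i))
    start : (row, col) initial feature point
    h     : kernel bandwidth smoothing the empirical distribution
    eps   : termination threshold on the step length
    """
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    w = img.ravel().astype(float)
    x = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        # Gaussian kernel weighted by the pixel values
        k = w * np.exp(-np.sum((pts - x) ** 2, axis=1) / (2 * h ** 2))
        x_new = (k[:, None] * pts).sum(axis=0) / k.sum()
        if np.linalg.norm(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# Synthetic 15x15 patch: a bright Gaussian blob centred at (7, 9).
ys, xs = np.mgrid[0:15, 0:15]
patch = np.exp(-((ys - 7) ** 2 + (xs - 9) ** 2) / 4.0)
attractor = density_attractor(patch, start=(4, 5))
print(np.round(attractor).astype(int))  # converges near the blob centre (7, 9)
```

Starting from an arbitrary nearby point, the iteration ends at the density attraction point, which becomes the intermediate feature point.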
Preferably, the method for generating the description vector of a first final feature point comprises: taking the set of difference vectors between the position of that first final feature point and the positions of all the remaining first final feature points as the group of description vectors of that first final feature point;
the method for generating the description vector of a second final feature point comprises: taking the set of difference vectors between the position of that second final feature point and the positions of all the other second final feature points in the same second final feature point set as the group of description vectors of that second final feature point.
Preferably, matching the description vector of a first final feature point with the description vector of a second final feature point comprises: calculating the Jaccard similarity coefficient of the two description vectors, and deeming the first and second final feature points successfully matched if the coefficient is larger than a preset value.
The Jaccard similarity coefficient is calculated as follows: if the ratio of the norm of the difference between one difference vector from one description vector and one difference vector from the other description vector to the sum of the norms of the two difference vectors is smaller than a preset value, the two difference vectors are considered the same; the Jaccard similarity coefficient of the two description vectors is then the size of the intersection of their difference vectors divided by the size of the union, i.e. the number of homogeneous difference vectors divided by the total number of difference vectors minus the number of homogeneous difference vectors.
Preferably, the formula for calculating the Jaccard similarity coefficient is as follows:

$$Des(kp_i^{T}) = \left\{\, kp_j^{T} - kp_i^{T} \;\middle|\; kp_j^{T} \in Kp_{Template},\ j \neq i \,\right\}$$

$$Des(kp_m^{D}) = \left\{\, kp_n^{D} - kp_m^{D} \;\middle|\; kp_n^{D} \in Kp_{Destination},\ n \neq m \,\right\}$$

$$J = \frac{n_{same}}{\left|Des(kp_i^{T})\right| + \left|Des(kp_m^{D})\right| - n_{same} - \tau_{same}}$$

In the formulas, $kp_i^{T}$ is a feature point of the point set $Kp_{Template}$; $kp_m^{D}$ is a feature point of the point set $Kp_{Destination}$; $Des(kp_i^{T})$ is the description vector of $kp_i^{T}$, formed from the coordinate differences between $kp_i^{T}$ and all the other feature points in $Kp_{Template}$; $Des(kp_m^{D})$ is the description vector of $kp_m^{D}$, formed from the coordinate differences between $kp_m^{D}$ and all the other feature points in $Kp_{Destination}$; $J$ denotes the Jaccard similarity coefficient of the two description vectors; $n_{same}$ is the number of pairs of difference vectors judged homogeneous; $\tau_{same}$ denotes the number of repeated matches, i.e. the times one difference vector of one description vector matches several difference vectors of the other when the two description vectors are compared; and $\zeta_1$ bounds the ratio $\|d_a - d_b\| / (\|d_a\| + \|d_b\|)$ of the norm of the difference between two difference vectors to the sum of their norms, and is used to decide whether two difference vectors are homogeneous.
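Under the ζ₁ homogeneity criterion described above, the coefficient can be sketched as follows; the greedy one-to-one pairing used here, where each difference vector is consumed at most once, is one way of avoiding the repeated matches counted by τ_same, and all names are illustrative:

```python
import numpy as np

def jaccard(des_a, des_b, zeta1=0.1):
    """Jaccard similarity of two difference-vector sets.

    Two difference vectors count as 'the same' when the norm of their
    difference, divided by the sum of their norms, is below zeta1.
    Each vector in des_b is consumed at most once, so one vector cannot
    be matched repeatedly.
    """
    a = [np.asarray(v, dtype=float) for v in des_a]
    b = [np.asarray(v, dtype=float) for v in des_b]
    used = [False] * len(b)
    same = 0
    for va in a:
        for j, vb in enumerate(b):
            if used[j]:
                continue
            denom = np.linalg.norm(va) + np.linalg.norm(vb)
            if denom > 0 and np.linalg.norm(va - vb) / denom < zeta1:
                used[j] = True
                same += 1
                break
    # |intersection| / |union|
    return same / (len(a) + len(b) - same)

a = [(3, 4), (6, 0), (0, 5)]
b = [(3.1, 4.0), (6.0, 0.1), (-2, -2)]
print(jaccard(a, b))  # two of three vectors match -> 2 / 4 = 0.5
```

A pair of feature points is accepted as a match when this coefficient exceeds the preset value.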
Preferably, the image registration method based on density clustering further includes:
when the first final feature point and the second final feature point are matched, judging whether the target image has an angle rotation difference larger than a threshold value, if so, respectively calculating the main directions of a first final feature point description vector and a second final feature point description vector;
rotating all difference vectors in the description vector of the first final characteristic point and all difference vectors in the description vector of the second final characteristic point of the target image according to the angle of the main direction;
and matching the first final feature point and the second final feature point according to the similarity of the transformed description vectors.
Preferably, calculating the main direction of the description vector comprises:
selecting a first final characteristic point and a second final characteristic point to be matched;
performing main direction histogram statistics on the description vector of the first final feature point to obtain the maximum trend direction of the description vector;
performing main direction histogram statistics on the description vector of the second final feature point to obtain the maximum trend direction of the description vector;
the direction histogram is computed as follows: divide the full 360° circle into angle ranges of a fixed width, accumulate the norms of the difference vectors falling in each angle range, and call the angle range with the largest total norm the maximum trend direction;
applying a rotational linear transformation to all difference vectors in the description vector of the first final feature point, taking its maximum trend direction as the coordinate reference direction;
and applying a rotational linear transformation to all difference vectors in the description vector of the second final feature point, taking its maximum trend direction as the coordinate reference direction.
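The main-direction statistics can be sketched as a norm-weighted angle histogram followed by rotating every difference vector into that reference frame; the 10° bin width and all names are illustrative assumptions:

```python
import numpy as np

def principal_direction(diffs, bin_deg=10):
    """Angle (bin centre, degrees) of the bin with the largest total norm."""
    diffs = np.asarray(diffs, dtype=float)
    ang = np.degrees(np.arctan2(diffs[:, 1], diffs[:, 0])) % 360
    norms = np.linalg.norm(diffs, axis=1)
    bins = (ang // bin_deg).astype(int)
    # accumulate the norms of the difference vectors per angle range
    totals = np.bincount(bins, weights=norms, minlength=360 // bin_deg)
    return (np.argmax(totals) + 0.5) * bin_deg

def rotate(diffs, angle_deg):
    """Rotate all difference vectors by -angle_deg into the reference frame."""
    t = np.radians(-angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return np.asarray(diffs, dtype=float) @ R.T

diffs = [(10, 0), (9, 1), (0, 3)]  # dominant trend near 0 degrees
main = principal_direction(diffs)
aligned = rotate(diffs, main)
print(main)
```

Rotating both descriptors into their own maximum trend direction makes the subsequent similarity comparison insensitive to the rotation between target and template.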
The invention has the beneficial effects that:
(1) The invention obtains feature points of high pixel intensity by density clustering: starting from any initial point, the point is iterated upward along the gradient of a smooth, differentiable empirical distribution estimated over its neighborhood. The resulting feature points are noise-resistant, stable, and locally computed; the method extracts high-quality feature points while reducing the amount of calculation, and suits large-size, large-volume image data.
(2) When calculating frame by frame, the method exploits the similar structure and similar feature points of adjacent frames: as a prior strategy, it starts from the feature points of the previous frame, selects the neighborhood of each feature point for density clustering, and updates it to the corresponding feature point of the next frame. Because the feature points are similar, the update converges quickly, greatly improving the processing speed of image registration.
(3) The method matches feature points through the internal orientation relationships among them and finally calculates the corresponding offset for registration, avoiding the extensive computation and matching of neighborhood gradient information required by similar algorithms and improving the processing speed of image registration.
(4) The method can run on simple equipment without any pre-training or pre-calculation, does not construct multi-scale images, and uses local calculation, saving memory and computing power; it is convenient and easy to operate.
(5) The invention registers images by comparing the coordinate relationships of the common high-pixel-intensity features matched between the target image and the template; the features are intuitive and the approach direct, achieving a good registration effect while greatly improving registration efficiency.
Drawings
FIG. 1 is a schematic flow chart of an image registration method based on density clustering;
FIG. 2a is a schematic diagram of initial feature points obtained using the FAST algorithm;
FIG. 2b is a schematic diagram of the final feature points obtained from FIG. 2a after density clustering and a series of processing;
FIG. 3a is a diagram of a first final feature point of a template image in a case;
FIG. 3b is a diagram of a second final feature point of the target image in a case;
FIG. 4 is a schematic flow chart of another image registration method based on density clustering;
FIG. 5a is a template image in yet another case;
FIG. 5b is a schematic diagram illustrating the feature points in the image sequence being updated continuously frame by frame in another embodiment;
FIG. 6a is a schematic diagram of the first final feature points successfully matched in FIG. 3a;
FIG. 6b is a schematic diagram of the second final feature points successfully matched in FIG. 3b;
fig. 7a is a schematic diagram of a first final feature point set of the template image and a description vector of a first final feature point to be matched in the case where there is a rotation angle;
fig. 7b is a schematic diagram of a second final feature point set of the rotated target image and a description vector of a second final feature point to be matched in the case where there is a rotation angle;
fig. 7c is a direction histogram of the description vectors of the first final feature points to be matched in the case where there is a rotation angle;
fig. 7d is a direction histogram of the description vectors of the second final feature points to be matched in the case when there is a rotation angle;
fig. 7e is a schematic diagram of the rotation linearity variation of all the disparity vectors in the description vectors of the first final feature point in the case where the rotation angle exists;
fig. 7f is a schematic diagram showing the rotational linear change of all the disparity vectors in the description vectors of the second final feature point in the case where there is a rotational angle;
FIG. 8a is a template image in the case of a two-photon imaging sequence of a neuronal cell;
FIG. 8b is an unregistered target image in the case of a two-photon imaging sequence of a neuronal cell body;
FIG. 8c is a registered target image in the case of a two-photon imaging sequence of a neuronal cell;
FIG. 8d is a template image in the case of a two-photon imaging sequence of a neuron dendrite;
FIG. 8e is an unregistered target image in the case of a two-photon imaging sequence of a neuron dendrite;
FIG. 8f is the registered target image in the case of a two-photon imaging sequence of a neuron dendrite;
FIG. 9a is a schematic representation of the mean frame of the original image sequence in the case of a two-photon imaging sequence of a neuronal cell body;
FIG. 9b is a schematic representation of the mean frames of the registered image sequence in the case of a two-photon imaging sequence of neuronal cells;
FIG. 9c is a schematic representation of the average frame of the original image sequence in the case of a two-photon imaging sequence of yet another neuron dendrite;
fig. 9d is a schematic representation of the averaged frames of the registered image sequence in the case of a two-photon imaging sequence of yet another neuron dendrite.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
Referring to fig. 1-9, the present invention provides an image registration method based on density clustering:
as shown in fig. 1, the image registration method based on density clustering includes:
s1, extracting a first final feature point set of the template image and a second final feature point set of each frame of target image.
Specifically, if the target image is a single frame, step S1 includes:
and S11, extracting first initial feature points of the template image and second initial feature points of the target image.
Generally, the first initial feature point may be obtained by a random generation method, or may be obtained by using an existing feature point extraction algorithm, for example, a FAST algorithm may be used to perform feature point extraction on the template image to obtain the first initial feature point. Similarly, the second initial feature point may be obtained by a random generation method, or may be obtained by using an existing feature point extraction algorithm, for example, a FAST algorithm may be used to extract a feature point of the target image to obtain the second initial feature point.
S12, performing density clustering on all the first initial feature points to obtain a first intermediate feature point set, and performing density clustering on all the second initial feature points to obtain a second intermediate feature point set.
Performing density clustering on the first initial feature points comprises: taking a neighborhood image within a preset range centered on each first initial feature point; taking the pixel values of a preset number of pixel points in the neighborhood image as weights, iterating with the adaptive-step-size fast gradient ascent method to update each feature point until it terminates at a local density extremum, i.e. a density attraction point, and taking that density attraction point as a first intermediate feature point.
Performing density clustering on the second initial feature points comprises: taking a neighborhood image within a preset range centered on each second initial feature point; taking the pixel values of a preset number of pixel points in the neighborhood image as weights, iterating with the adaptive-step-size fast gradient ascent method to update each feature point until it terminates at a local density extremum, i.e. a density attraction point, and taking that density attraction point as a second intermediate feature point.
Derivation of the adaptive-step-size fast gradient ascent method:
performing weighted Gaussian kernel density estimation on a preset number of pixel points in the neighborhood image, the density being calculated as:

$$\hat{f}(\vec{x}) = \frac{1}{n h^{d}} \sum_{i=1}^{n} w(\vec{x}_i)\, K\!\left(\frac{\vec{x} - \vec{x}_i}{h}\right), \qquad K(\vec{u}) = \frac{1}{(2\pi)^{d/2}}\, e^{-\frac{1}{2}\|\vec{u}\|^{2}}$$

In the formula, $\hat f(\vec{x})$ is the estimated density corresponding to the coordinate vector $\vec{x}$ of an arbitrary point; $w(\vec{x}_i)$ is the pixel value at the coordinate vector $\vec{x}_i$; $\vec{x}_i$ are all the estimation points in the region; the bandwidth $h$ is a constant used to smooth the empirical distribution; $d$ is the dimension of the data, a constant equal to 2 in this algorithm.
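As a quick sanity check of the weighted estimate, the density can be evaluated directly on a synthetic patch; grid size, bandwidth, and all names here are illustrative. The estimate should be larger where bright pixels concentrate:

```python
import numpy as np

def kde(x, pts, w, h=2.0, d=2):
    """Weighted Gaussian kernel density estimate at point x."""
    x = np.asarray(x, dtype=float)
    sq = np.sum((pts - x) ** 2, axis=1)
    # Gaussian kernel K((x - x_i)/h) with its normalising constant
    k = np.exp(-sq / (2 * h ** 2)) / ((2 * np.pi) ** (d / 2) * h ** d)
    return np.sum(w * k) / len(pts)

ys, xs = np.mgrid[0:11, 0:11]
pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
# Bright Gaussian blob at (5, 5) provides the pixel-value weights w(x_i)
w = np.exp(-((pts[:, 0] - 5) ** 2 + (pts[:, 1] - 5) ** 2) / 4.0)
print(kde((5, 5), pts, w) > kde((1, 1), pts, w))  # density peaks at the blob
```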
According to the above, differentiating the estimated density $\hat f(\vec{x})$ gives the gradient:

$$\nabla \hat{f}(\vec{x}) = \frac{1}{n h^{d+2}} \sum_{i=1}^{n} w(\vec{x}_i)\, K\!\left(\frac{\vec{x} - \vec{x}_i}{h}\right) (\vec{x}_i - \vec{x})$$

Accordingly, the gradient at an arbitrary point within the estimation region can be calculated, so a local maximum (i.e. a density attraction point) can be found as a first intermediate feature point by the gradient ascent method:

$$\vec{x}_{t+1} = \vec{x}_t + \delta\, \nabla \hat{f}(\vec{x}_t)$$

In the formula, $\delta$ is the fixed step length of the gradient ascent method.
On the basis of this gradient ascent method, the present embodiment proposes the adaptive-step-size fast gradient ascent method, whose step adapts to the local density so that the update takes the form:

$$\vec{x}_{t+1} = \frac{\sum_{i=1}^{n} w(\vec{x}_i)\, K\!\left(\dfrac{\vec{x}_t - \vec{x}_i}{h}\right) \vec{x}_i}{\sum_{i=1}^{n} w(\vec{x}_i)\, K\!\left(\dfrac{\vec{x}_t - \vec{x}_i}{h}\right)}, \qquad \text{iterating until } \left\|\vec{x}_{t+1} - \vec{x}_t\right\| < \epsilon$$

In the formula, $\hat f(\vec{x})$ is the estimated density corresponding to the coordinate vector $\vec{x}$ of an arbitrary point, $w(\vec{x}_i)$ is the pixel value at the coordinate vector $\vec{x}_i$, $\vec{x}_i$ are the estimation points of the region, the bandwidth $h$ is a constant used to smooth the empirical distribution, and $\epsilon$ is a limiting parameter set to terminate the gradient ascent process.
Applying the above formula, the gradient ascent method is applied to each first initial feature point to find a local density extremum, i.e. a density attraction point, which becomes a first intermediate feature point; and to each second initial feature point to find a local density extremum, i.e. a density attraction point, which becomes a second intermediate feature point.
S13, filtering the noise points in the first intermediate feature point set using, as a threshold, a density obtained from the density clustering of the first initial feature points, and filtering the noise points in the second intermediate feature point set using, as a threshold, a density obtained from the density clustering of the second initial feature points.
S14, performing proximity merging and density clustering on all the first intermediate feature points to obtain the first final feature point set, and performing proximity merging and density clustering on all the second intermediate feature points to obtain the second final feature point set of the target image.
Performing proximity merging and density clustering again on the first and second intermediate feature points removes duplicate extreme points.
In some embodiments, the noise points in the first intermediate feature point set are filtered using, as a threshold, the upper quartile of the densities obtained from the density clustering of the first initial feature points; the first intermediate feature points within a preset range are then merged by proximity, and the errors this merging introduces are corrected by clustering and merging again, yielding the first final feature point set. Likewise, the noise points in the second intermediate feature point set are filtered using, as a threshold, the upper quartile of the densities obtained from the density clustering of the second initial feature points of the first frame target image, and the second intermediate feature points within the preset range are merged by proximity and then cluster-corrected and merged again, yielding the final feature point set of the first frame target image. The threshold and the specific size of the preset range for merging the first and second intermediate feature points are set according to actual needs.
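A minimal sketch of this filter-then-merge step, assuming a simple greedy proximity merge within a fixed radius; the radius, the quartile computed via `np.percentile`, and all names are illustrative:

```python
import numpy as np

def filter_and_merge(points, densities, radius=2.0):
    """Keep points with density >= upper quartile, then merge near neighbours."""
    pts = np.asarray(points, dtype=float)
    dens = np.asarray(densities, dtype=float)
    # Upper quartile of the attractor densities acts as the noise threshold
    keep = pts[dens >= np.percentile(dens, 75)]
    merged = []
    used = np.zeros(len(keep), dtype=bool)
    for i in range(len(keep)):
        if used[i]:
            continue
        close = np.linalg.norm(keep - keep[i], axis=1) <= radius
        group = keep[close & ~used]
        used |= close
        merged.append(group.mean(axis=0))  # replace the cluster by its centroid
    return np.array(merged)

pts = [(0, 0), (0.5, 0.5), (10, 10), (20, 0)]
dens = [1.0, 1.0, 1.0, 0.1]
print(filter_and_merge(pts, dens))  # low-density point dropped, near pair merged
```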
After all the first and second intermediate feature points are calculated, some of them may not reflect real features because of noise in the image; the first intermediate feature points are therefore processed again to obtain the first final feature points, and the second intermediate feature points to obtain the second final feature points.
Fig. 2 provides an illustration of obtaining final feature points based on initial feature points, where fig. 2a is a schematic diagram of initial feature points (first initial feature points or second initial feature points) obtained by using a FAST algorithm, and fig. 2b is a schematic diagram of final feature points (first final feature points or second final feature points) obtained after density clustering and a series of processing.
Fig. 3 provides a schematic diagram of the final feature points of the template image and the target image in a case, where fig. 3a is a schematic diagram of the first final feature point of the template image, and fig. 3b is a schematic diagram of the second final feature point of the target image.
As shown in fig. 4, when the target image is an image sequence containing multiple frames, the second final feature point set of the first frame target image is extracted in the same way as for a single-frame target image, that is: extracting first initial feature points of the template image and second initial feature points of the first frame target image; performing density clustering on all the first initial feature points to obtain a first intermediate feature point set, and on all the second initial feature points of the first frame target image to obtain a second intermediate feature point set; filtering noise points from the first intermediate feature point set using, as a threshold, a density obtained from the density clustering of the first initial feature points, and from the second intermediate feature point set using, as a threshold, a density obtained from the density clustering of the second initial feature points of the first frame target image; and performing proximity merging and density clustering on all the first intermediate feature points to obtain the first final feature point set, and on all the second intermediate feature points of the first frame target image to obtain the second final feature point set of the first frame target image.
When the target image is an image sequence containing multiple frames, the second final feature point set of each non-first-frame target image is obtained by performing density clustering on all second final feature points in the second final feature point set of the previous frame target image. Because adjacent frames are similar, each feature point reaches its local extreme point in only a few iterations, and the local-neighborhood computation avoids the inefficiency of a global search.
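The frame-by-frame update described above can be sketched as a pixel-intensity-weighted mean-shift iteration seeded from the previous frame's feature points. The following is a minimal illustration, not the patented implementation: it assumes a grayscale image stored as a nested list and a flat (uniform) kernel over a square neighborhood; all function names and parameter values are illustrative.

```python
import math

def density_ascend(img, x, y, radius=3, eps=0.5, max_iter=50):
    """Iterate a pixel-intensity-weighted mean shift from (x, y) until the
    move is smaller than eps, i.e. until a density local extreme point
    (density attraction point) is reached."""
    h, w = len(img), len(img[0])
    for _ in range(max_iter):
        sx = sy = sw = 0.0
        # Accumulate the intensity-weighted centroid of the neighborhood.
        for j in range(max(0, int(y) - radius), min(h, int(y) + radius + 1)):
            for i in range(max(0, int(x) - radius), min(w, int(x) + radius + 1)):
                wgt = float(img[j][i])  # pixel value used as the weight
                sx += wgt * i
                sy += wgt * j
                sw += wgt
        if sw == 0:
            break  # empty neighborhood: stay where we are
        nx, ny = sx / sw, sy / sw
        if math.hypot(nx - x, ny - y) < eps:
            return nx, ny
        x, y = nx, ny
    return x, y

def update_points(img, prev_points):
    """Seed each feature point of the previous frame and let it converge to
    the corresponding feature point of the current frame."""
    return [density_ascend(img, x, y) for x, y in prev_points]
```

Because the seed already lies near the new extremum, only a few iterations are needed per frame, which is the speed advantage the paragraph above describes.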
Fig. 5 is a set of examples of generating the second final feature point of the non-first frame target image, in which fig. 5a shows the template image, and fig. 5b shows the schematic diagram of continuously updating the feature points frame by frame in the image sequence.
The method in this embodiment can be applied to two-photon images. Starting from any point, the differentiable empirical distribution of its neighborhood is smoothly estimated by density clustering, and feature points of high pixel intensity are updated iteratively; feature point matching is then performed through the internal orientation relations of the feature points, and finally the corresponding deviation is calculated for registration. During frame-by-frame calculation, exploiting the fact that feature points of adjacent frames lie close together, an a priori strategy selects the neighborhood of each feature point of the previous frame for density clustering and updates it to the corresponding feature point of the next frame. Compared with existing algorithms on the same equipment, the method in this embodiment achieves similar precision while performing motion calibration much faster on large-size, large-volume image data sets.
And S2, respectively generating description vectors of all the first final feature points in the first final feature point set and all the second final feature points in the second final feature point set.
In some embodiments, the description vector of a first final feature point is generated as follows: the set of difference vectors between the position of one first final feature point and the positions of all the remaining first final feature points is taken as the group of description vectors of that first final feature point.

The description vector of a second final feature point is generated as follows: the set of difference vectors between the position of one second final feature point and the positions of all the other second final feature points in the same second final feature point set is taken as the group of description vectors of that second final feature point.
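As a minimal sketch of this construction (names are illustrative; feature points are assumed to be 2-D coordinate tuples), each point's group of description vectors can be computed as:

```python
def description_vectors(points):
    """For each final feature point, the description vector is the set of
    coordinate differences between that point and all remaining points of
    the same final feature point set."""
    return {
        i: [(px - qx, py - qy)
            for j, (qx, qy) in enumerate(points) if j != i]
        for i, (px, py) in enumerate(points)
    }
```

Note that each group has exactly `len(points) - 1` difference vectors, and the construction depends only on the relative point layout, not on absolute image coordinates, which is why it survives translation between frames.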
In particular, for a feature point $kp_i^{Template}$ in the point set $Kp^{Template}$ and a feature point $kp_j^{Destination}$ in the point set $Kp^{Destination}$, the description vectors take the form:

$$Des_i^{Template} = \{\, kp_i^{Template} - kp_m^{Template} \mid kp_m^{Template} \in Kp^{Template},\; m \neq i \,\}$$

$$Des_j^{Destination} = \{\, kp_j^{Destination} - kp_n^{Destination} \mid kp_n^{Destination} \in Kp^{Destination},\; n \neq j \,\}$$
and S3, matching the first final characteristic point with a second final characteristic point of the current frame target image according to the similarity of the description vectors.
Specifically, matching the description vector of the first final feature point with the description vector of the second final feature point includes: calculating the Jaccard similarity coefficient of the two description vectors; if the coefficient is larger than a preset value, the first final feature point and the second final feature point are considered successfully matched.

The Jaccard similarity coefficient is calculated as follows: if the norm of the difference between a difference vector in one description vector and a difference vector in the other description vector, divided by the sum of the norms of the two difference vectors, is smaller than a preset value, the two difference vectors are considered homogeneous; the Jaccard similarity coefficient of the two description vectors is then the intersection of their difference vectors divided by the union, i.e., the number of homogeneous difference vectors divided by the total number of difference vectors minus the number of homogeneous difference vectors.
For the matching of two groups of description vectors, this embodiment selects the Jaccard similarity coefficient as the measure, i.e., the ratio of the intersection to the union of the two vector sets. Since a group of description vectors consists of the difference vectors between one final feature point (the first or second final feature point) and all the remaining final feature points of the same set, and each difference vector is unique, the elements inside the description vectors of two visually matched final feature points are never exactly equal, so the intersection and union required by the Jaccard similarity coefficient cannot be computed directly. Instead, this embodiment sets a parameter $\zeta_1$ to evaluate whether two difference vectors are homogeneous: if the norm of the difference between a difference vector of one description vector and a difference vector of the other description vector, divided by the sum of the norms of the two difference vectors, is smaller than the preset value $\zeta_1$, the two difference vectors are considered homogeneous. The Jaccard similarity coefficient is then calculated as:

$$\frac{\| d_a - d_b \|}{\| d_a \| + \| d_b \|} < \zeta_1, \qquad d_a \in Des_i^{Template},\; d_b \in Des_j^{Destination}$$

$$J = \frac{\tau_{same}}{|Des_i^{Template}| + |Des_j^{Destination}| - \tau_{same}}$$

In the formulas, $kp_i^{Template}$ is a feature point of the point set $Kp^{Template}$, and $kp_j^{Destination}$ is a feature point of the point set $Kp^{Destination}$; $Des_i^{Template}$ is the description vector of feature point $kp_i^{Template}$, formed by the coordinate differences between $kp_i^{Template}$ and all other feature points of $Kp^{Template}$, and $Des_j^{Destination}$ is the description vector of feature point $kp_j^{Destination}$, formed by the coordinate differences between $kp_j^{Destination}$ and all other feature points of $Kp^{Destination}$. $J$ is the Jaccard similarity coefficient of the two description vectors; $\tau_{same}$ is the number of homogeneous matches, counting the repeated matches that occur when one difference vector of one description vector matches several difference vectors of the other; and $\zeta_1$, the threshold on the ratio of the norm of the difference between two difference vectors to the sum of their norms, is used to decide whether the two difference vectors are homogeneous.
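A simplified sketch of this matching criterion follows. One deliberate assumption: the sketch counts each difference vector of the first group at most once, so the repeated matches mentioned for the count of homogeneous pairs are ignored here; the default threshold value and all names are illustrative.

```python
import math

def homogeneous(da, db, zeta1=0.1):
    """Two difference vectors are homogeneous when the norm of their
    difference, divided by the sum of their norms, is below zeta1."""
    num = math.hypot(da[0] - db[0], da[1] - db[1])
    den = math.hypot(*da) + math.hypot(*db)
    return den > 0 and num / den < zeta1

def jaccard(des_a, des_b, zeta1=0.1):
    """Jaccard-style coefficient: homogeneous pairs over the union,
    i.e. tau_same / (|A| + |B| - tau_same)."""
    tau_same = sum(
        1 for da in des_a if any(homogeneous(da, db, zeta1) for db in des_b)
    )
    return tau_same / (len(des_a) + len(des_b) - tau_same)
```

Two points would then be declared a match when `jaccard(...)` exceeds the preset acceptance value.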
Fig. 6 is a set of example diagrams of matching of the first final feature point and the second final feature point, wherein fig. 6a and 6b are schematic diagrams of the first final feature point and the second final feature point successfully matched in fig. 3a and 3 b.
And S4, calculating registration parameters according to all the successfully matched first final feature points and the second final feature points of the current frame target image.
And S5, carrying out image transformation on the current frame target image according to the registration parameters.
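Step S4 leaves the exact form of the registration parameters open; the simplest instance is a pure translation, estimated as the mean coordinate difference over all successfully matched pairs. The sketch below assumes pairs of (template point, target point) tuples; the function name and the translation-only model are illustrative assumptions, not the only transformation the method supports.

```python
def translation_offset(matched_pairs):
    """Estimate a translation registration parameter as the mean coordinate
    difference (template minus target) over all matched point pairs."""
    n = len(matched_pairs)
    dx = sum(tx - sx for (tx, ty), (sx, sy) in matched_pairs) / n
    dy = sum(ty - sy for (tx, ty), (sx, sy) in matched_pairs) / n
    return dx, dy
```

Shifting the current target frame by the returned `(dx, dy)` then realizes step S5 for the translational case.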
In some embodiments, the image registration method based on density clustering further comprises: when the first final feature point and the second final feature point are matched, judging whether the target image has an angle rotation difference larger than a threshold value, if so, respectively calculating the main directions of a first final feature point description vector and a second final feature point description vector; rotating all difference vectors in the description vector of the first final characteristic point and all difference vectors in the description vector of the second final characteristic point of the target image according to the angle of the main direction; and matching the first final feature point and the second final feature point according to the similarity of the transformed description vectors.
In these embodiments, when there is a rotation angle between the template image and the target image, the description vectors of the first and second final feature points are transformed so that the feature point matching is stable under rotation.
In these embodiments, the main directions of the two groups of description vectors are obtained from direction histograms, and the vectors are transformed and normalized so that the two groups of description vectors lie in the same relative coordinate system.
Calculating the principal direction of a description vector includes: selecting a first final feature point and a second final feature point to be matched; computing the direction histogram of the description vector of the first final feature point to obtain its maximum trend direction; and computing the direction histogram of the description vector of the second final feature point to obtain its maximum trend direction. The direction histogram is computed as follows: the full circle is divided into angle ranges of fixed width, the norms of the difference vectors falling in each angle range are accumulated, and the angle of the range with the largest total norm is called the maximum trend direction. All difference vectors in the description vector of the first final feature point then undergo a rotational linear transformation that takes its maximum trend direction as the coordinate reference direction, and all difference vectors in the description vector of the second final feature point undergo a rotational linear transformation that takes its own maximum trend direction as the coordinate reference direction.
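A compact sketch of the direction histogram and the subsequent rotation follows; the 10° bin width, the choice of the bin-centre angle as the returned direction, and the function names are illustrative assumptions.

```python
import math

def principal_direction(des, bin_deg=10):
    """Split the full circle into fixed-width bins, accumulate the norm of
    every difference vector into its bin, and return the centre angle of the
    bin with the largest total norm (the 'maximum trend direction')."""
    nbins = 360 // bin_deg
    totals = [0.0] * nbins
    for dx, dy in des:
        ang = math.degrees(math.atan2(dy, dx)) % 360.0
        totals[int(ang // bin_deg) % nbins] += math.hypot(dx, dy)
    best = max(range(nbins), key=totals.__getitem__)
    return (best + 0.5) * bin_deg

def rotate_all(des, angle_deg):
    """Rotate every difference vector by -angle_deg so that the principal
    direction becomes the common coordinate reference direction."""
    t = math.radians(-angle_deg)
    c, s = math.cos(t), math.sin(t)
    return [(dx * c - dy * s, dx * s + dy * c) for dx, dy in des]
```

Applying `rotate_all(des, principal_direction(des))` to both description vectors puts them in the same relative coordinate system, after which the similarity matching proceeds as before.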
Fig. 7 is a case diagram of a template image and a target image with a rotation angle: one template image and one target image are selected, the target image is rotated by 90°, and then a pair of visually matched final feature points to be matched is selected and their description vectors are drawn (each arrow in the diagram is a difference vector in a group of description vectors). Fig. 7a shows the first final feature point set of the template image and the description vector of the first final feature point to be matched; fig. 7b shows the second final feature point set of the rotated target image and the description vector of the second final feature point to be matched; fig. 7c shows the direction histogram statistics of the description vector of the first final feature point to be matched, finding the maximum trend direction; fig. 7d shows the direction histogram statistics of the description vector of the second final feature point to be matched, finding the maximum trend direction; fig. 7e shows all the difference vectors in the description vector of the first final feature point after the rotational linear transformation with the maximum trend direction as the coordinate reference direction; fig. 7f shows all the difference vectors in the description vector of the second final feature point after the rotational linear transformation with the maximum trend direction as the coordinate reference direction.
The method in this embodiment can be applied to two-photon images: starting from any point, the differentiable empirical distribution of the neighborhood is smoothly estimated by density clustering, and feature points of high pixel intensity are updated quickly and iteratively. These high-pixel-intensity feature points are noise-resistant, stable, and locally computed; the method extracts high-quality feature points while reducing the amount of computation, and is suitable for large-size, large-volume image data.
In this embodiment, when the final feature points are calculated frame by frame, an a priori strategy exploits the fact that adjacent frames have similar image structure and similar feature points: each feature point of the previous frame serves as a starting point, its neighborhood is selected for density clustering, and it is updated to the corresponding feature point of the next frame. Because the feature points are similar, the update converges quickly, which greatly improves the processing speed of image registration.
In this embodiment, feature point matching is carried out through the internal orientation relations of the feature points, and the corresponding deviation is finally calculated for registration. This avoids the extensive computation and matching of feature point neighborhood gradient information found in similar algorithms, and improves the processing speed of image registration.
The method can be realized on simple equipment without any pre-training or pre-calculation; it does not construct a multi-scale image, adopts local calculation, saves memory and computing power, and is convenient and easy to operate.
This embodiment performs registration by comparing the coordinate relations of the common high-pixel-intensity features matched between the target image and the template; the features are intuitive and the approach direct, and a good registration effect is obtained while greatly improving registration efficiency.
Two-photon imaging is a confocal imaging technique used to observe the neurons of living organisms. The observation is typically an image sequence that, due to biological and technical factors, is subject to random jitter, i.e., the images do not share consistent relative coordinates. For a two-photon image sequence, motion correction selects one frame as a fixed reference template and registers each of the other frames with it, so that the whole sequence lies in the same relative coordinate system.
FIG. 8 is two example diagrams of the registration effect of a single frame target image in a two-photon imaging sequence of a neuron cell, wherein FIGS. 8a-8c are two-photon imaging sequence cases of a neuron cell, FIG. 8a is a template image, FIG. 8b is an unregistered target image, and FIG. 8c is a registered target image; fig. 8d-8f are two-photon imaging sequence cases of another neuron dendrite, fig. 8d is a template image, fig. 8e is an unregistered target image, and fig. 8f is a registered target image.
Fig. 9 is two examples of registration effect of sequence images, where fig. 9a and 9b are two-photon imaging sequence cases of a neuron cell, fig. 9a is an average frame diagram of an original image sequence, fig. 9b is an average frame diagram of a registered image sequence, fig. 9c and 9d are two-photon imaging sequence cases of another neuron dendrite, fig. 9c is an average frame diagram of an original image sequence, and fig. 9d is an average frame diagram of a registered image sequence.
The data sets were tested using the method of the present invention and a selection of existing algorithms. Table 1 shows the results on a 200-frame, 500 x 500 pixel sequence of neuronal soma images, and Table 2 shows the results on a 1600-frame, 250 x 250 pixel dendrite data set.
The similarity metrics include MSE (mean squared error), NRMSE (normalized root mean squared error), PSNR (peak signal-to-noise ratio), SSIM (structural similarity), and NMI (normalized mutual information). Lower MSE and NRMSE values indicate more accurate registration; higher PSNR, SSIM, and NMI values indicate more accurate registration.
The existing algorithms include: SIFT (Scale-Invariant Feature Transform), ORB (Oriented FAST and Rotated BRIEF), AKAZE (an accelerated variant of KAZE that builds the scale space by iterative nonlinear diffusion filtering, finds feature points in a manner similar to SIFT, and generates ORB-style descriptors), phase cross-correlation, TurboReg (an ImageJ plug-in that aligns the original image sequence with the template image), and moco (MOtion COrrector, a speed-optimized version of TurboReg).
The test procedure is as follows: each frame of the target sequence is registered to the template, the similarity metrics between each registered frame and the template are calculated, and the averages of the per-frame metrics are taken as the overall measures.
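Two of the listed metrics, MSE and PSNR, are simple enough to sketch in a few lines together with the per-frame averaging. The versions below assume 8-bit grayscale frames stored as nested lists; they are illustrative, not the evaluation code used to produce the tables.

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized grayscale frames."""
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / n

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio; higher means closer to the template."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(peak * peak / m)

def overall_metric(frames, template, metric):
    """Average a per-frame similarity metric over the registered sequence,
    as in the test procedure described above."""
    return sum(metric(f, template) for f in frames) / len(frames)
```

NRMSE, SSIM, and NMI follow the same per-frame-then-average pattern but require windowed statistics or joint histograms, so they are omitted from this sketch.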
Table 1 Test results 1
[Table 1 appears as an image in the original publication; its values are not recoverable from the extracted text.]
Table 2 Test results 2
[Table 2 appears as an image in the original publication; its values are not recoverable from the extracted text.]
The foregoing is illustrative of the preferred embodiments of this invention, and it is to be understood that the invention is not limited to the precise form disclosed herein and that various other combinations, modifications, and environments may be resorted to, falling within the scope of the concept as disclosed herein, either as described above or as apparent to those skilled in the relevant art. And that modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. The image registration method based on density clustering is characterized by comprising the following steps:
extracting a first final feature point set of the template image and a second final feature point set of each frame of target image;
respectively generating description vectors of all first final feature points in the first final feature point set and all second final feature points in the second final feature point set;
matching the first final characteristic point with a second final characteristic point of the current frame target image according to the similarity of the description vectors;
calculating registration parameters according to all successfully matched first final feature points and second final feature points of the current frame target image;
and carrying out image transformation on the current frame target image according to the registration parameters.
2. The image registration method based on density clustering of claim 1, wherein if the target image is a frame, extracting a first final feature point of the template image and a second final feature point of the target image comprises:
extracting a first initial characteristic point of the template image and a second initial characteristic point of the target image;
performing density clustering on all the first initial characteristic points to obtain a first intermediate characteristic point set, and performing density clustering on all the second initial characteristic points to obtain a second intermediate characteristic point set;
performing density clustering according to the first initial characteristic points to obtain density serving as a threshold value to filter noise points in the first intermediate characteristic point set, and performing density clustering according to the second initial characteristic points to obtain density serving as a threshold value to filter noise points in the second intermediate characteristic point set;
carrying out proximity merging and density clustering on all the first intermediate characteristic points to obtain a first final characteristic point set, and carrying out proximity merging and density clustering on all the second intermediate characteristic points to obtain a second final characteristic point set of the target image;
if the target image is an image sequence containing a plurality of frames of images, extracting a first final feature point of the template image and a second final feature point of the target image, wherein the steps of:
extracting a first initial characteristic point of the template image and a second initial characteristic point of the first frame target image;
performing density clustering on all the first initial feature points to obtain a first intermediate feature point set, and performing density clustering on all the second initial feature points of the first frame of target image to obtain a second intermediate feature point set;
performing density clustering according to the first initial characteristic points to obtain density serving as a threshold value to filter noise points in the first intermediate characteristic point set, and performing density clustering according to the second initial characteristic points of the first frame of target image to obtain density serving as a threshold value to filter noise points in the second intermediate characteristic point set;
carrying out proximity merging and density clustering on all the first intermediate characteristic points to obtain a first final characteristic point set, and carrying out proximity merging and density clustering on all the second intermediate characteristic points of the first frame of target image to obtain a second final characteristic point set of the first frame of target image;
and for the non-first frame target image, performing density clustering on all second final feature points in the second final feature point set of the previous frame target image to obtain a second final feature point set of the frame target image.
3. The image registration method based on density clustering according to claim 2, wherein the first initial feature points are extracted by the FAST algorithm or generated randomly, and the second initial feature points are extracted by the FAST algorithm or generated randomly.
4. The image registration method based on density clustering of claim 2, wherein density clustering the first initial feature points comprises:
taking each first initial characteristic point as a center, and taking a neighborhood image in a preset range;
respectively taking the pixel values of a preset number of pixel points in the neighborhood image as weights, iterating by a self-adaptive step length fast gradient ascent method, updating each feature point, ending at a density local extreme point, namely a density attraction point, and taking the density attraction point as a first intermediate feature point;
performing density clustering on the second initial feature points comprises:
taking each second initial characteristic point as a center, and taking a neighborhood image in a preset range;
and respectively taking the pixel values of a preset number of pixel points in the neighborhood image as weights, iterating by a self-adaptive step length fast gradient ascent method, updating each feature point, ending at a density local extreme point, namely a density attraction point, and taking the density attraction point as a second intermediate feature point.
5. The image registration method based on density clustering according to claim 4, wherein the fast gradient ascent method has the calculation formulas:

$$\hat{f}(\vec{x}) = \frac{1}{n h^{2}} \sum_{i=1}^{n} p(\vec{x}_i)\, K\!\left(\frac{\vec{x} - \vec{x}_i}{h}\right)$$

$$\vec{x}^{(t+1)} = \frac{\sum_{i=1}^{n} \vec{x}_i\, p(\vec{x}_i)\, K\!\left(\frac{\vec{x}^{(t)} - \vec{x}_i}{h}\right)}{\sum_{i=1}^{n} p(\vec{x}_i)\, K\!\left(\frac{\vec{x}^{(t)} - \vec{x}_i}{h}\right)}, \qquad \text{iterated until } \left\| \vec{x}^{(t+1)} - \vec{x}^{(t)} \right\| < \epsilon$$

In the formulas, $\hat{f}(\vec{x})$ is the estimated density corresponding to an arbitrary coordinate vector $\vec{x}$, $p(\vec{x}_i)$ is the pixel value at $\vec{x}_i$, the $\vec{x}_i$ are the estimation points of the region, $K(\cdot)$ is the smoothing kernel, the bandwidth $h$ is a constant used to smooth the empirical distribution, and $\epsilon$ is a limiting parameter set to terminate the gradient ascent process.
6. The image registration method based on density clustering according to claim 1, wherein the description vector of the first final feature point is generated by: taking a difference vector set of the position distribution of one first final characteristic point and all the rest first final characteristic points as a group of description vectors of the first final characteristic point;
the method for generating the description vector of the second final feature point comprises the following steps: and taking a set of difference vectors of the position distribution of one second final feature point and all the other second final feature points in the same second final feature point set as a group of description vectors of the second final feature point.
7. The image registration method based on density clustering according to claim 1, wherein matching the description vector of the first final feature point with the description vector of the second final feature point comprises: calculating the Jaccard similarity coefficient of the description vector of the first final feature point and the description vector of the second final feature point, and if the Jaccard similarity coefficient is larger than a preset value, determining that the first final feature point and the second final feature point are successfully matched;

the Jaccard similarity coefficient is calculated as follows: if the norm of the difference between a difference vector in one description vector and a difference vector in the other description vector, divided by the sum of the norms of the two difference vectors, is smaller than a preset value, the two difference vectors are considered homogeneous, and the Jaccard similarity coefficient of the two description vectors is the intersection of their difference vectors divided by the union, i.e., the number of homogeneous difference vectors divided by the total number of difference vectors minus the number of homogeneous difference vectors.
8. The image registration method based on density clustering according to claim 7, wherein the calculation formulas of the Jaccard similarity coefficient are:

$$Des_i^{Template} = \{\, kp_i^{Template} - kp_m^{Template} \mid m \neq i \,\}, \qquad Des_j^{Destination} = \{\, kp_j^{Destination} - kp_n^{Destination} \mid n \neq j \,\}$$

$$\frac{\| d_a - d_b \|}{\| d_a \| + \| d_b \|} < \zeta_1, \qquad d_a \in Des_i^{Template},\; d_b \in Des_j^{Destination}$$

$$J = \frac{\tau_{same}}{|Des_i^{Template}| + |Des_j^{Destination}| - \tau_{same}}$$

In the formulas, $kp_i^{Template}$ is a feature point of the point set $Kp^{Template}$, $kp_j^{Destination}$ is a feature point of the point set $Kp^{Destination}$, $Des_i^{Template}$ is the description vector of feature point $kp_i^{Template}$, formed by the coordinate differences between $kp_i^{Template}$ and all other feature points of $Kp^{Template}$, $Des_j^{Destination}$ is the description vector of feature point $kp_j^{Destination}$, formed by the coordinate differences between $kp_j^{Destination}$ and all other feature points of $Kp^{Destination}$, $J$ is the Jaccard similarity coefficient of the two description vectors, $\tau_{same}$ is the number of homogeneous matches, counting the repeated matches that occur when one difference vector of one description vector matches several difference vectors of the other, and $\zeta_1$, the threshold on the ratio of the norm of the difference between two difference vectors to the sum of their norms, is used to decide whether the two difference vectors are homogeneous.
9. The image registration method based on density clustering of claim 1, wherein the image registration method based on density clustering further comprises:
when the first final feature point and the second final feature point are matched, judging whether the target image has an angle rotation difference larger than a threshold value, if so, respectively calculating the main directions of a first final feature point description vector and a second final feature point description vector;
rotating all difference vectors in the description vector of the first final characteristic point and all difference vectors in the description vector of the second final characteristic point of the target image according to the angle of the main direction;
and matching the first final feature point and the second final feature point according to the similarity of the transformed description vectors.
10. The image registration method based on density clustering according to claim 9, wherein calculating the main direction of the description vector comprises:
selecting a first final characteristic point and a second final characteristic point to be matched;
performing main direction histogram statistics on the description vector of the first final feature point to obtain the maximum trend direction of the description vector;
performing main direction histogram statistics on the description vector of the second final feature point to obtain the maximum trend direction of the description vector;
the direction histogram statistics comprise: dividing the full circle into angle ranges of fixed width, accumulating the norms of the difference vectors falling within each angle range, and taking the angle of the range with the largest total norm as the maximum trend direction;

performing a rotational linear transformation on all the difference vectors in the description vector of the first final feature point with its maximum trend direction as the coordinate reference direction; and

performing a rotational linear transformation on all the difference vectors in the description vector of the second final feature point with its own maximum trend direction as the coordinate reference direction.
CN202110460146.4A 2021-04-27 2021-04-27 Image registration method based on density clustering Active CN113408569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110460146.4A CN113408569B (en) 2021-04-27 2021-04-27 Image registration method based on density clustering

Publications (2)

Publication Number Publication Date
CN113408569A true CN113408569A (en) 2021-09-17
CN113408569B CN113408569B (en) 2022-07-19

Family

ID=77678054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110460146.4A Active CN113408569B (en) 2021-04-27 2021-04-27 Image registration method based on density clustering

Country Status (1)

Country Link
CN (1) CN113408569B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110122226A1 (en) * 2009-03-16 2011-05-26 Siemens Corporation System and method for robust 2d-3d image registration
WO2011069021A2 (en) * 2009-12-02 2011-06-09 Qualcomm Incorporated Improving performance of image recognition algorithms by pruning features, image scaling, and spatially constrained feature matching
CN104700401A (en) * 2015-01-30 2015-06-10 Tianjin University of Science and Technology Image affine transformation control point selection method based on K-means clustering
CN105405146A (en) * 2015-11-17 2016-03-16 Ocean University of China Side-scan sonar registration method based on feature density clustering and normal distribution transform
CN109242759A (en) * 2018-07-16 2019-01-18 Hangzhou Dianzi University Graph contraction grouping registration method based on density clustering
CN109753940A (en) * 2019-01-11 2019-05-14 BOE Technology Group Co., Ltd. Image processing method and device
US20200226781A1 (en) * 2019-01-11 2020-07-16 Beijing Boe Optoelectronics Technology Co., Ltd. Image processing method and apparatus
CN109919885A (en) * 2019-02-15 2019-06-21 Army Medical University (Third Military Medical University) CVH image and image registration and fusion method based on B-spline and mutual information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
H. Yang et al.: "Intelligent classification of point clouds for indoor components based on dimensionality reduction", 2020 5th International Conference on Computational Intelligence and Applications (ICCIA) *
Pang Bo: "Research on Feature-Based SAR Image Registration Technology", China Doctoral Dissertations Full-text Database, Information Science and Technology (monthly), No. 06, 2020 *

Also Published As

Publication number Publication date
CN113408569B (en) 2022-07-19

Similar Documents

Publication Publication Date Title
CN110188824B (en) Small sample plant disease identification method and system
CN110310310B (en) Improved method for aerial image registration
CN112215119B (en) Small target identification method, device and medium based on super-resolution reconstruction
WO2019136772A1 (en) Blurred image restoration method, apparatus and device, and storage medium
CN112257738A (en) Training method and device of machine learning model and classification method and device of image
CN109344845A (en) Feature matching method based on a Triplet deep neural network structure
CN112949454B (en) Iris recognition method based on small sample learning
CN109544603A (en) Method for tracking target based on depth migration study
CN112668718B (en) Neural network training method, device, electronic equipment and storage medium
CN114005046A (en) Remote sensing scene classification method based on Gabor filter and covariance pooling
CN109190505A (en) Image recognition method based on visual understanding
CN111553250B (en) Accurate facial paralysis degree evaluation method and device based on face characteristic points
CN113408569B (en) Image registration method based on density clustering
CN117576497A (en) Training method and device for memory Dirichlet process Gaussian mixture model
CN112329798A (en) Image scene classification method based on optimized visual bag-of-words model
CN109165586B (en) Intelligent image processing method for AI chip
CN108492256B (en) Unmanned aerial vehicle video fast splicing method
CN115019175B (en) Pest identification method based on migration element learning
CN116167921A (en) Method and system for splicing panoramic images of flight space capsule
CN111553249B (en) H-B grading-based accurate facial paralysis degree evaluation method and device under CV
CN113051901A (en) Identification card text recognition method, system, medium and electronic terminal
Yin et al. Image Enhancement Method Based on Improved DCGAN for Limit Sample
Lan et al. A Combinatorial K-View Based algorithm for Texture Classification.
CN112418317A (en) Method for identifying and classifying precision machining structural part based on PSO-SVM
CN110807464B (en) Method and system for obtaining image fuzzy invariant texture feature descriptor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant