CN114494373A - High-precision rail alignment method and system based on target detection and image registration - Google Patents

High-precision rail alignment method and system based on target detection and image registration

Info

Publication number
CN114494373A
Authority
CN
China
Prior art keywords
detection
image
key points
key
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210062682.3A
Other languages
Chinese (zh)
Inventor
杜卫红
谢立欧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Beyebe Network Technology Co ltd
Original Assignee
Shenzhen Beyebe Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Beyebe Network Technology Co ltd filed Critical Shenzhen Beyebe Network Technology Co ltd
Priority to CN202210062682.3A priority Critical patent/CN114494373A/en
Publication of CN114494373A publication Critical patent/CN114494373A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a high-precision rail alignment method and system based on target detection and image registration, comprising the following steps: step S1, first performing preliminary matching of the target area through target detection; step S2, searching key point positions over all scale spaces after the preliminary matching and extracting the key points; step S3, positioning the extracted key points through a fitting model and determining their gradient distribution features and direction distribution features; step S4, constructing feature vectors of the key points for feature comparison, and building a descriptor for each key point from its position, scale and direction; step S5, matching the image key points to achieve alignment with the original image. The method can align images accurately to the pixel level, achieves fast and accurate key point matching and image alignment in rail scenes, is meaningful for measuring the amount of information, and provides a good data basis for subsequent rail anomaly detection.

Description

High-precision rail alignment method and system based on target detection and image registration
Technical Field
The invention relates to an image alignment method, in particular to a high-precision rail alignment method based on target detection and image registration, and to a high-precision rail alignment system that adopts this method.
Background
As rail transit accounts for a growing share of urban traffic year by year, the importance of rail safety inspection becomes increasingly prominent. An abnormal track can cause a serious train accident, so the track state must be inspected and maintained. Current rail safety inspection mainly relies on manual inspection, or on manual review of collected data; both existing methods consume a great deal of manpower and, owing to human subjectivity, easily lead to false detections or missed detections. To avoid these problems, a fully automatic intelligent rail inspection system needs to be developed.
However, in algorithms for detecting anomalies of components in the track and of electrical devices around the track, the accuracy of reference-image generation, target positioning and matching is an important factor in whether the final anomaly detection is accurate. Only when the position of the detection target in the reference image has been specified can the initial detection position of the target and the detection cycle be determined. An accurate matching result can provide complete image information for later model identification and is meaningful for measuring the amount of information. Therefore, how to accurately align the rail image to be inspected with the reference image is also one of the key technologies of an intelligent rail inspection system.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a high-precision rail alignment method based on target detection and image registration that can quickly and accurately align the rail image to be inspected with the reference image, so as to provide a good data basis for subsequent anomaly detection. On this basis, a high-precision rail alignment system adopting this method is further provided.
In view of the above, the present invention provides a high-precision alignment method for a track based on target detection and image registration, comprising the following steps:
step S1, firstly, carrying out preliminary matching on the target area through target detection;
step S2, searching the key point positions on all the scale spaces after the preliminary matching, and extracting the key points;
step S3, positioning the extracted key points through a fitting model, and determining the gradient distribution characteristics and the direction distribution characteristics of the key points;
step S4, constructing feature vectors of the key points to compare features, and establishing descriptors of each key point according to the positions, the scales and the directions of the key points;
step S5, matching the image key points to achieve alignment with the original image.
A further refinement of the invention is that said step S1 comprises the following sub-steps:
step S101, inputting a target detection picture into a pre-trained classification network, acquiring feature mapping of the target detection picture, and modifying the target detection network;
step S102, extracting the feature maps of the convolutional layers in the target detection network, constructing a preset number of frames of different sizes at each point on the feature maps, taking all generated frames as detection frames, comparing the detection frames with the real target frames in the labels, retaining the detection frames whose intersection-over-union is greater than a preset intersection-over-union threshold, and assigning those detection frames the categories of the corresponding real target frames;
and step S103, combining the detection frames obtained from the different feature maps, filtering out overlapping or incorrect detection frames by non-maximum suppression, generating the final detection result, and obtaining the preliminarily matched target detection data.
A further improvement of the invention is that, in step S101, modifying the target detection network is implemented as follows: when establishing the target detection network, the dropout layers and the final fully-connected layer of the VGG16 convolutional neural network are deleted, and the pooling kernel size of the pooling layer is modified to suit the target detection picture, yielding the new target detection network.
A further development of the invention is that, in step S103, the final detection result is calculated through the loss function

L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha L_{loc}(x, l, g)\right)

where N represents the number of detection frames matched to real labels, L_{conf}(x, c) represents the class confidence loss of the detection frames, α represents the confidence weight, and L_{loc}(x, l, g) represents the position regression loss of the detection frames.
In a further improvement of the present invention, in step S2, the detected picture is transformed by a two-dimensional Gaussian function; the transformed picture is used as a characteristic map, each pixel point of which corresponds to a characteristic region of the picture before transformation, and finally the pixel points of the characteristic map are used as the key points of their characteristic regions.
In a further improvement of the present invention, in step S2, the detected picture is scale-transformed by the scale-space formula L(x, y, σ) = G(x, y, σ) ∗ I(x, y), where G(x, y, σ) represents the two-dimensional Gaussian function, ∗ represents the convolution operation, and I(x, y) represents the input original image.
A further refinement of the invention is that said step S3 comprises the following sub-steps:
step S301, acquiring the gradient distribution feature m(x, y) of the key point over the pixels in its Gaussian-pyramid image neighborhood window through the formula

m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^{2} + (L(x, y+1) - L(x, y-1))^{2}};

step S302, acquiring the direction distribution feature θ(x, y) of the key point over the pixels in its Gaussian-pyramid image neighborhood window through the formula

θ(x, y) = \tan^{-1}\left(\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}\right).
A further refinement of the invention is that said step S4 comprises the following sub-steps:
step S401, constructing the feature vector of each key point from its gradient distribution feature and direction distribution feature, comparing the key points pairwise through their feature vectors, and taking key points whose feature-vector difference after comparison is smaller than a preset vector-difference threshold as mutually matched key point pairs, thereby establishing correspondences between different key points;
step S402, combining the position, scale and direction of the key point into a vector, and using the vector as the descriptor of the key point.
In a further improvement of the present invention, in step S5, feature point search is first performed with the feature points of the target image as reference through a structure tree, and after the original-image feature point nearest to each target-image feature point has been found, the image key points are matched; then, with the feature points of the target image as reference, the matched feature points in the original image are aligned.
The invention also provides a high-precision rail alignment system based on target detection and image registration, which adopts the above high-precision rail alignment method based on target detection and image registration and comprises:
the preliminary matching module is used for carrying out preliminary matching on the target area through target detection;
the key point extraction module is used for searching key point positions on all scale spaces after the initial matching and extracting key points;
the key point positioning module is used for positioning the extracted key points through the fitting model and determining the gradient distribution characteristics and the direction distribution characteristics of the key points;
the feature comparison module is used for constructing feature vectors of the key points to perform feature comparison, and establishing descriptors of each key point according to the position, the scale and the direction of the key point;
and the alignment module is used for realizing alignment with the original image through image key point matching.
Compared with the prior art, the invention has the following beneficial effects: the initial detection position and detection cycle of the target are first determined quickly through target detection; the key points are then extracted and positioned in a targeted manner; descriptors of the key points are constructed from the feature vectors for feature comparison and matching; and finally high-precision alignment is realized through the key point matching results. The method can therefore align images accurately to the pixel level and achieves fast and accurate key point matching and image alignment in the special use environment of rail scenes. It provides complete image information and an accurate data basis for later model identification and data processing, is meaningful for measuring the amount of information, and lays a solid data foundation for subsequent rail anomaly detection.
Drawings
FIG. 1 is a schematic workflow diagram of one embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a principle of implementing a gaussian pyramid of an image by scale-space transformation according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
For the image alignment problem in ordinary scenes, alignment meeting general accuracy requirements can be achieved with feature comparison alone. However, in a special use environment such as a rail scene, where the background occupies an excessively large proportion of the data, directly applying existing feature-comparison methods is very inefficient; with the same processing equipment, accuracy drops sharply and no accurate data basis can be provided for subsequent anomaly detection.
For this reason, this embodiment preferably first determines the initial detection position and detection cycle of the target quickly through target detection, then extracts and positions the key points in a targeted manner, constructs descriptors of the key points from the feature vectors for feature comparison and matching, and finally realizes high-precision alignment through the key point matching results.
More specifically, this example provides a high-precision alignment method for a track based on target detection and image registration, including the following steps:
step S1, firstly, carrying out preliminary matching on the target area through target detection;
step S2, searching the key point positions on all the scale spaces after the preliminary matching, and extracting the key points;
step S3, positioning the extracted key points through a fitting model, and determining the gradient distribution characteristics and the direction distribution characteristics of the key points;
step S4, constructing feature vectors of the key points to compare features, and establishing descriptors of each key point according to the positions, the scales and the directions of the key points;
step S5, matching the image key points to achieve alignment with the original image.
Reference data are collected over the full line of the rail scene and contain all targets along the line that are to be identified. The full-line data may contain many blank regions besides the target regions, so in order to determine the appearance characteristics and occurrence frequency of the targets, the positions of the targets in the collected reference data must be determined first. In this embodiment, a target detection model is preferably used to locate and detect the targets in the reference data.
Before target detection begins, this example preferably trains the classification network in advance. The collected historical image data are first annotated, the labels including the position, size and category of each target area. The annotated data are divided into a training set, a validation set and a test set in an 8:1:1 ratio for training, yielding the classification network. The high-precision rail alignment method is started once the classification network has been obtained; preferably, after start-up, data are continuously returned to the classification network in real time for cyclic dynamic training, so that the classification network keeps improving over time.
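As a concrete illustration, the 8:1:1 partition described above can be realized as follows. This is a minimal sketch; the random shuffling and the seed are assumed details, since the patent specifies only the ratio:

```python
import random

def split_dataset(samples, seed=0):
    """Split annotated samples into train/validation/test sets at 8:1:1."""
    rng = random.Random(seed)
    samples = list(samples)
    rng.shuffle(samples)               # assumed: randomize before splitting
    n = len(samples)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test
```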
Step S1 in this example preferably includes the following sub-steps:
step S101, inputting a target detection picture into a pre-trained classification network, wherein the size of the target detection picture is preferably 300x300, acquiring feature mapping of the target detection picture, and modifying the target detection network;
step S102, extracting the feature maps of the convolutional layers in the target detection network, constructing a preset number of frames of different sizes at each point on the feature maps, taking all generated frames as detection frames, comparing the detection frames with the real target frames in the labels, retaining the detection frames whose intersection-over-union is greater than a preset intersection-over-union threshold, and assigning those detection frames the categories of the corresponding real target frames; the preset number is an adjustable number of frames, preferably 6; the preset intersection-over-union threshold is a preset, adjustable judgment value, preferably 50%;
and step S103, combining the detection frames obtained from the different feature maps, filtering out overlapping or incorrect detection frames by non-maximum suppression, generating the final detection result, and obtaining the preliminarily matched target detection data. Since each detection frame carries a label, an incorrect detection frame is one whose label does not correspond to the annotation made before target detection started.
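The non-maximum suppression of step S103 can be sketched as follows; this is a standard greedy NMS, since the patent does not name the exact variant, and the 0.5 overlap threshold is an assumed placeholder:

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """boxes: (N, 4) as [x1, y1, x2, y2]; returns indices of kept boxes."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the current best box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop the boxes that overlap the kept box too strongly.
        order = order[1:][iou <= iou_thresh]
    return keep
```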
More specifically, in step S101 of this example, modifying the target detection network is preferably implemented as follows: when establishing the target detection network, the dropout layers and the final fully-connected layer of the VGG16 convolutional neural network are deleted, and the pooling kernel size of the pooling layer is modified to suit the target detection picture, namely enlarged from 2x2 to 3x3, yielding the new target detection network.
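A minimal sketch of this backbone modification, assuming PyTorch/torchvision (the patent does not name a framework); the stride-1, padding-1 setting for the enlarged pooling kernel is an assumption that keeps the feature-map size unchanged:

```python
import torch
import torch.nn as nn
import torchvision

def build_detection_backbone() -> nn.Sequential:
    vgg = torchvision.models.vgg16(weights=None)
    features = list(vgg.features)        # convolution/pooling feature extractor
    # Enlarge the final 2x2 max-pool to 3x3, as described for step S101.
    features[-1] = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
    # Dropping vgg.classifier entirely corresponds to deleting the dropout
    # layers and the final fully-connected layer.
    return nn.Sequential(*features)

backbone = build_detection_backbone()
feat = backbone(torch.randn(1, 3, 300, 300))   # 300x300 input as in step S101
print(feat.shape)
```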
In step S103 of the present example, the final detection result is calculated through the loss function

L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha L_{loc}(x, l, g)\right)

where N represents the number of detection frames matched to real labels, L_{conf}(x, c) represents the class confidence loss of the detection frames, α represents the confidence weight, and L_{loc}(x, l, g) represents the position regression loss of the detection frames.
This example preferably calculates the class confidence loss L_{conf}(x, c) of the detection frames with

L_{conf}(x, c) = -\sum_{i \in Pos}^{N} x_{ij}^{p} \log(\hat{c}_{i}^{p}) - \sum_{i \in Neg} \log(\hat{c}_{i}^{0}), \qquad \hat{c}_{i}^{p} = \frac{\exp(c_{i}^{p})}{\sum_{p} \exp(c_{i}^{p})}

where \hat{c}_{i}^{p} represents the detection probability that the i-th detection frame corresponds to category p; i is the detection frame index; j is the real frame index; p is the category index, with p = 0 denoting the background; x_{ij}^{p} indicates that the i-th detection frame is matched to the j-th real frame whose category is p; i ∈ Pos denotes the foreground and i ∈ Neg denotes the background; and \hat{c}_{i}^{0} represents the detection probability that the i-th detection frame is background.
This example preferably calculates the position regression loss L_{loc}(x, l, g) of the detection frames with

L_{loc}(x, l, g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx, cy, w, h\}} x_{ij}^{k} \, \mathrm{smooth}_{L1}(l_{i}^{m} - \hat{g}_{j}^{m})

where l is a detection frame and \hat{g} is a real frame; (cx, cy) is the center of the default box after compensation and (w, h) are the width and height of the default box; x_{ij} represents the intersection-over-union match between the i-th detection frame and the j-th real frame; l_{i} denotes the i-th detection frame and \hat{g}_{j} the j-th real frame, with i and j natural numbers used as indices.
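Under the two definitions above, the complete loss can be sketched as follows; the tensor shapes are assumptions, and hard-negative selection (common in SSD-style detectors) is omitted because the patent's formula sums over all background frames:

```python
import torch
import torch.nn.functional as F

def detection_loss(cls_logits, loc_preds, cls_targets, loc_targets, alpha=1.0):
    """cls_logits: (B, D, C) class scores per detection frame (class 0 = background),
    loc_preds: (B, D, 4) predicted offsets, cls_targets: (B, D) matched labels,
    loc_targets: (B, D, 4) encoded real-frame offsets."""
    pos = cls_targets > 0                    # frames matched to a real target
    num_pos = pos.sum().clamp(min=1)         # N in the loss formula

    # L_conf: cross-entropy over all frames; for background frames the target
    # class is 0, giving the -log(c_i^0) term of the formula.
    conf_loss = F.cross_entropy(
        cls_logits.reshape(-1, cls_logits.size(-1)),
        cls_targets.reshape(-1), reduction="sum")

    # L_loc: smooth-L1 position regression over the positive frames only.
    loc_loss = F.smooth_l1_loss(loc_preds[pos], loc_targets[pos], reduction="sum")

    return (conf_loss + alpha * loc_loss) / num_pos
```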
When extracting the key points, the special application environment of the rail data must also be considered in this embodiment. Based on this consideration, the key points selected for the rail data in this example are prominent pixel points that are relatively insensitive to factors such as illumination, scale and rotation; for example, corner points, edge points, bright points in dark areas and dark points in bright areas are extracted as key points. Step S2 searches the key point positions over all scale spaces and then further identifies potential key points with scale and rotation invariance by means of a Gaussian derivative function (also called a two-dimensional Gaussian function).
In step S2, when extracting the key points, the detected picture is transformed by the two-dimensional Gaussian function, which realizes sampling of the image; the transformed image is then used as the characteristic map, each pixel point of which corresponds to a characteristic region of the picture before transformation, and finally the pixel points of the characteristic map are used as the key points of their characteristic regions.
In step S2 of the present example, the detected picture is preferably scale-transformed by the scale-space formula L(x, y, σ) = G(x, y, σ) ∗ I(x, y), where G(x, y, σ) represents the two-dimensional Gaussian function, ∗ represents the convolution operation, and I(x, y) represents the input original image. The expression of the two-dimensional Gaussian function is

G(x, y, σ) = \frac{1}{2\pi\sigma^{2}} e^{-(x^{2}+y^{2})/(2\sigma^{2})}

where σ, the standard deviation of the normal distribution, is also the scale-space factor; a smaller value means the image is smoothed less and the corresponding scale is smaller. In implementation, the scale space is represented by a Gaussian pyramid, as shown in fig. 2: the images are repeatedly downsampled, and each image of different size is processed with different scale-space factors, so that descriptions at different scales are obtained.
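A sketch of this Gaussian-pyramid construction, assuming OpenCV; the octave and interval counts and the base σ of 1.6 are illustrative choices, not values given in the patent:

```python
import cv2
import numpy as np

def gaussian_pyramid(image: np.ndarray, octaves: int = 4, intervals: int = 3,
                     sigma0: float = 1.6):
    """Build L(x, y, σ) = G(x, y, σ) * I(x, y) over several octaves."""
    pyramid = []
    base = image.astype(np.float32)
    for _ in range(octaves):
        octave = []
        for i in range(intervals):
            sigma = sigma0 * 2.0 ** (i / intervals)   # scale-space factor σ
            # ksize (0, 0) lets OpenCV derive the kernel size from σ.
            octave.append(cv2.GaussianBlur(base, (0, 0), sigmaX=sigma))
        pyramid.append(octave)
        # Halve the resolution to start the next octave (Fig. 2).
        base = cv2.resize(base, (base.shape[1] // 2, base.shape[0] // 2),
                          interpolation=cv2.INTER_NEAREST)
    return pyramid
```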
That is, position and scale are obtained in the key point extraction of step S2: the original image is scale-transformed, each pixel point in the smallest-scale image can represent a key point, and the key points in images of other scales are obtained by mapping these key points into those images. Step S3 then realizes the positioning of the key points: positioning here means obtaining the corresponding gradient distribution features and direction distribution features from the magnitude and direction of the gradient at each key point, and distinguishing the key points by these features, whereby their positioning is realized.
Step S3 in this example is used to realize the positioning of the key points: at each candidate location, the position and scale are determined by fitting a fine model, and key points are selected according to their degree of stability. Each key point location is then assigned one or more directions based on the local gradient direction of the image; all subsequent operations on the image data are performed relative to the direction, scale and location of the key points, which ensures the invariance and consistency of the subsequent data processing and transformations. Because extreme points are evaluated with scale invariance, a reference direction must be selected for each key point from the local image features so that the descriptor is invariant to image rotation. For the key points detected in the difference pyramid, the gradient distribution features and direction distribution features of the pixels in the neighborhood window of the Gaussian pyramid image where each key point lies are collected, and the key points can then be positioned quickly.
There is a gradient at each position of the image, and the gradient distribution characteristic and the direction distribution characteristic at each position can be calculated according to the following two formulas. Preferably, step S3 in this example includes the following sub-steps:
step S301, acquiring the gradient distribution feature m(x, y) of the key point over the pixels in its Gaussian-pyramid image neighborhood window through the formula

m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^{2} + (L(x, y+1) - L(x, y-1))^{2}}

(the gradient, i.e. the difference between adjacent pixels, is commonly used to represent local image features);

step S302, acquiring the direction distribution feature θ(x, y) of the key point over the pixels in its Gaussian-pyramid image neighborhood window through the formula

θ(x, y) = \tan^{-1}\left(\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}\right).
Step S4 in this example includes the following substeps:
step S401, constructing the feature vector of each key point from its gradient distribution feature and direction distribution feature, comparing the key points pairwise through their feature vectors, and taking key points whose feature-vector difference after comparison is smaller than a preset vector-difference threshold as mutually matched key point pairs, thereby establishing correspondences between different key points, i.e. correspondences between scenes;

step S402, combining the position, scale and direction of each key point into a vector and using this vector as the key point's descriptor. It should be noted that the descriptors of the key points in this example do not change under various transformations, such as illumination changes and viewpoint changes. A descriptor covers not only the key point itself but also the surrounding pixel points that contribute to it; it is therefore highly distinctive and can effectively improve the probability of correct feature point matching.
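The comparison rule of step S401 can be sketched as follows; the descriptor dimensionality of 128 comes from the next paragraph, while the Euclidean-norm difference and the threshold value are assumed placeholders:

```python
import numpy as np

def match_keypoints(desc_a: np.ndarray, desc_b: np.ndarray,
                    diff_thresh: float = 0.2):
    """desc_a: (Na, 128), desc_b: (Nb, 128); returns matched index pairs."""
    pairs = []
    for i, d in enumerate(desc_a):
        diff = np.linalg.norm(desc_b - d, axis=1)   # feature-vector differences
        j = int(diff.argmin())
        if diff[j] < diff_thresh:                   # preset vector-difference threshold
            pairs.append((i, j))
    return pairs
```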
Step S5 is used to match the image key points. This example preferably establishes key point descriptor sets for the template image and the real-time image respectively; identification of the target is accomplished by matching the key point descriptors of the two point sets. The similarity of the 128-dimensional key point descriptors can be measured by Euclidean distance, and matching can be realized by exhaustive search or by a structure tree, the structure tree being more efficient.
In step S5 of this example, feature point search is preferably first performed with the feature points of the target image as reference through a structure tree (e.g. a kd-tree); after the original-image feature point nearest to each target-image feature point has been found, the image key points are matched; then, with the feature points of the target image as reference, the matched feature points in the original image are aligned.
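A sketch of this structure-tree search, using SciPy's cKDTree as one concrete kd-tree implementation (the patent names the kd-tree only as an example of a structure tree):

```python
import numpy as np
from scipy.spatial import cKDTree

def kdtree_match(target_desc: np.ndarray, original_desc: np.ndarray):
    """For each target-image descriptor, find the nearest original-image
    descriptor; returns (indices into original_desc, Euclidean distances)."""
    tree = cKDTree(original_desc)        # built once over the original image
    dist, idx = tree.query(target_desc, k=1)
    return idx, dist
```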
The present embodiment further provides a high-precision rail alignment system based on target detection and image registration, which adopts the above high-precision rail alignment method based on target detection and image registration and comprises:
the preliminary matching module is used for carrying out preliminary matching on the target area through target detection;
the key point extraction module is used for searching key point positions on all scale spaces after the initial matching and extracting key points;
the key point positioning module is used for positioning the extracted key points through the fitting model and determining the gradient distribution characteristics and the direction distribution characteristics of the key points;
the feature comparison module is used for constructing feature vectors of the key points to perform feature comparison, and establishing descriptors of each key point according to the position, the scale and the direction of the key point;
and the alignment module is used for realizing alignment with the original image through image key point matching.
In summary, the initial detection position and detection cycle of the target are first determined quickly through target detection; the key points are then extracted and positioned in a targeted manner; descriptors of the key points are constructed from the feature vectors for feature comparison and matching; and finally high-precision alignment is realized through the key point matching results. The method can therefore align images accurately to the pixel level and achieves fast and accurate key point matching and image alignment in the special use environment of rail scenes. It provides complete image information and an accurate data basis for later model identification and data processing, is meaningful for measuring the amount of information, and lays a solid data foundation for subsequent rail anomaly detection.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (10)

1. A high-precision rail alignment method based on target detection and image registration is characterized by comprising the following steps:
step S1, firstly, carrying out preliminary matching on the target area through target detection;
step S2, searching the key point positions on all the scale spaces after the preliminary matching, and extracting the key points;
step S3, positioning the extracted key points through a fitting model, and determining the gradient distribution characteristics and the direction distribution characteristics of the key points;
step S4, constructing feature vectors of the key points to compare features, and establishing descriptors of each key point according to the positions, the scales and the directions of the key points;
in step S5, the image key points are matched to realize the alignment with the original image.
2. The method for high-precision alignment of a rail based on object detection and image registration according to claim 1, wherein the step S1 comprises the following sub-steps:
step S101, inputting a target detection picture into a pre-trained classification network, acquiring feature mapping of the target detection picture, and modifying the target detection network;
step S102, extracting the feature maps of the convolutional layers in the target detection network, constructing a preset number of frames of different sizes at each point on the feature maps, taking all generated frames as detection frames, comparing the detection frames with the real target frames in the labels, retaining the detection frames whose intersection-over-union is greater than a preset intersection-over-union threshold, and assigning those detection frames the categories of the corresponding real target frames;
and step S103, combining the detection frames obtained from different feature maps, filtering the overlapped or incorrect detection frames by a non-maximum suppression method, generating a final detection result, and obtaining preliminarily matched target detection data.
3. The high-precision rail alignment method based on target detection and image registration according to claim 2, wherein in step S101, modifying the target detection network is implemented as follows: when establishing the target detection network, the dropout layers and the final fully-connected layer of the VGG16 convolutional neural network are deleted, and the pooling kernel size of the pooling layer is modified to suit the target detection picture, yielding the new target detection network.
4. The high-precision rail alignment method based on target detection and image registration according to claim 2, wherein in step S103 the final detection result is calculated through the loss function

L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha L_{loc}(x, l, g)\right)

where N represents the number of detection frames matched to real labels, L_{conf}(x, c) represents the class confidence loss of the detection frames, α represents the confidence weight, and L_{loc}(x, l, g) represents the position regression loss of the detection frames.
5. The method according to any one of claims 1 to 4, wherein in step S2, the detected picture is transformed by a two-dimensional Gaussian function, the transformed picture is used as a characteristic map, each pixel point in the characteristic map corresponds to a characteristic region of the picture before transformation, and finally, the pixel point of the characteristic map is used as a key point of the characteristic region.
6. The high-precision rail alignment method based on target detection and image registration according to claim 5, wherein in step S2 the detected picture is scale-transformed by the scale-space formula L(x, y, σ) = G(x, y, σ) ∗ I(x, y), where G(x, y, σ) represents the two-dimensional Gaussian function, ∗ represents the convolution operation, and I(x, y) represents the input original image.
7. The method for rail high-precision alignment based on object detection and image registration according to claim 5, wherein the step S3 comprises the following sub-steps:
step S301, acquiring the gradient distribution feature m(x, y) of the key point over the pixels in its Gaussian-pyramid image neighborhood window through the formula

m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^{2} + (L(x, y+1) - L(x, y-1))^{2}};

step S302, acquiring the direction distribution feature θ(x, y) of the key point over the pixels in its Gaussian-pyramid image neighborhood window through the formula

θ(x, y) = \tan^{-1}\left(\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}\right).
8. The method for rail high-precision alignment based on object detection and image registration according to any one of claims 1 to 4, wherein the step S4 comprises the following sub-steps:
step S401, constructing the feature vector of each key point from its gradient distribution feature and direction distribution feature, comparing the key points pairwise through their feature vectors, and taking key points whose feature-vector difference after comparison is smaller than a preset vector-difference threshold as mutually matched key point pairs, thereby establishing correspondences between different key points;
step S402, combining the position, scale and direction of the key point into a vector, and using the vector as the descriptor of the key point.
9. The high-precision rail alignment method based on target detection and image registration according to any one of claims 1 to 4, wherein in step S5, feature point search is first performed with the feature points of the target image as reference through a structure tree, and after the original-image feature point nearest to each target-image feature point has been found, the image key points are matched; then, with the feature points of the target image as reference, the matched feature points in the original image are aligned.
10. A high-precision rail alignment system based on target detection and image registration, characterized in that it adopts the high-precision rail alignment method based on target detection and image registration of any one of claims 1 to 9 and comprises:
the preliminary matching module is used for carrying out preliminary matching on the target area through target detection;
the key point extraction module is used for searching key point positions on all scale spaces after the initial matching and extracting key points;
the key point positioning module is used for positioning the extracted key points through the fitting model and determining the gradient distribution characteristics and the direction distribution characteristics of the key points;
the feature comparison module is used for constructing feature vectors of the key points to perform feature comparison, and establishing descriptors of each key point according to the position, the scale and the direction of the key point;
and the alignment module is used for realizing alignment with the original image through image key point matching.
CN202210062682.3A (filed 2022-01-19, priority 2022-01-19): High-precision rail alignment method and system based on target detection and image registration. Publication CN114494373A (en), pending.

Priority Applications (1)

Application Number: CN202210062682.3A (CN) · Title: High-precision rail alignment method and system based on target detection and image registration

Publications (1)

Publication Number: CN114494373A · Publication Date: 2022-05-13

Family

ID=81472246

Family Applications (1)

Application Number: CN202210062682.3A · Status: Pending · Publication: CN114494373A (en) · Title: High-precision rail alignment method and system based on target detection and image registration

Country Status (1)

Country Link
CN (1) CN114494373A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998630A (en) * 2022-07-19 2022-09-02 北京科技大学 Ground-to-air image registration method from coarse to fine
CN116091787A (en) * 2022-10-08 2023-05-09 中南大学 Small sample target detection method based on feature filtering and feature alignment

Similar Documents

Publication Publication Date Title
CN109615611B (en) Inspection image-based insulator self-explosion defect detection method
Yin et al. Hot region selection based on selective search and modified fuzzy C-means in remote sensing images
CN109145745B (en) Face recognition method under shielding condition
CN105809651B (en) Image significance detection method based on the comparison of edge non-similarity
CN111914642B (en) Pedestrian re-identification method, device, equipment and medium
CN114494373A (en) High-precision rail alignment method and system based on target detection and image registration
CN111932582A (en) Target tracking method and device in video image
CN112329559A (en) Method for detecting homestead target based on deep convolutional neural network
CN111967337A (en) Pipeline line change detection method based on deep learning and unmanned aerial vehicle images
CN114494161A (en) Pantograph foreign matter detection method and device based on image contrast and storage medium
CN111932579A (en) Method and device for adjusting equipment angle based on motion trail of tracked target
CN114677633B (en) Multi-component feature fusion-based pedestrian detection multi-target tracking system and method
Xiao et al. Geo-spatial aerial video processing for scene understanding and object tracking
CN106845458A (en) A kind of rapid transit label detection method of the learning machine that transfinited based on core
CN110929782A (en) River channel abnormity detection method based on orthophoto map comparison
Wang et al. Real-time damaged building region detection based on improved YOLOv5s and embedded system from UAV images
CN114581654A (en) Mutual inductor based state monitoring method and device
CN112419243B (en) Power distribution room equipment fault identification method based on infrared image analysis
CN111462310B (en) Bolt defect space positioning method based on multi-view geometry
Li et al. Road-network-based fast geolocalization
CN116704270A (en) Intelligent equipment positioning marking method based on image processing
CN116385477A (en) Tower image registration method based on image segmentation
CN116363655A (en) Financial bill identification method and system
Chen et al. Utilizing Road Network Data for Automatic Identification of Road Intersections from High Resolution Color Orthoimagery.
CN111401286B (en) Pedestrian retrieval method based on component weight generation network

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination