CN116167921B - Method and system for stitching panoramic images of a flight space capsule - Google Patents
Method and system for stitching panoramic images of a flight space capsule
- Publication number
- CN116167921B (application CN202310431498.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- feature point
- matching
- pairs
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
- G06T3/4046 — Scaling of whole images or parts thereof, e.g. expanding or contracting, using neural networks
- G06T7/337 — Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods involving reference images or patches
- G06T7/37 — Determination of transform parameters for the alignment of images, i.e. image registration, using transform domain methods
- G06T7/44 — Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
- G06T2200/32 — Indexing scheme for image data processing or generation, in general, involving image mosaicing
- G06T2207/30268 — Vehicle interior
Abstract
The invention relates to the technical field of image data processing and provides a method and a system for stitching panoramic images of a flight space capsule. The method comprises the following steps: acquiring a plurality of images to be stitched using cameras with different position parameters in the flight space capsule; obtaining a plurality of groups of view angle image pairs according to the camera position parameters, and obtaining a target image, a matched image and a reference image in each group of view angle image pairs; obtaining matching point pairs, reference feature points and matching feature points in each group of view angle image pairs through the SIFT algorithm, and obtaining a fuzzy range descriptor for each matching feature point through the LBP algorithm; and obtaining local feature descriptors and fuzzy range descriptors of the target feature points in each group of view angle image pairs, obtaining the matching result between the target image and the matched image in each group, and completing the panoramic image stitching. The invention aims to solve the problem of inaccurate panoramic image matching and stitching results caused by view angle distortion.
Description
Technical Field
The invention relates to the technical field of image data processing, in particular to a method and a system for stitching panoramic images of a flight space capsule.
Background
A flight space capsule has a complex structure, and because the field of view of a single camera lens is limited and cannot cover the whole scene, several cameras must each capture part of the scene, after which the partial views are combined by a panoramic image stitching algorithm. Existing image stitching transforms images from multiple cameras with different viewing angles into a wide-field image under a single viewing angle, for example a 360-degree panorama or even a 360x180-degree spherical panorama; to do so, it must first match pixel points across the different images through feature point matching.
In the prior art, the SIFT method is generally used to match feature points. However, because the cameras occupy different spatial positions, view angle distortion causes one-to-many matches during actual feature matching; when no match stands out, the pair with the greatest matching degree is usually selected, which can wrongly pair two feature points that are unrelated under the view angle distortion and thereby corrupt the panoramic stitching result. Therefore, a view angle image pair with greater view similarity is first obtained through view angle transformation, and the feature information of the feature points is then derived from that pair together with the influence range of each feature point under the different blur parameters of SIFT; this realizes the matching between feature points, improves the matching precision within each view angle image pair, and improves the stitching quality of the panoramic image.
Disclosure of Invention
The invention provides a method and a system for stitching panoramic images of a flight space capsule, to solve the problem of inaccurate panoramic image matching and stitching results caused by view angle distortion. The adopted technical scheme is as follows:
In a first aspect, an embodiment of the present invention provides a method for stitching panoramic images of a flight space capsule, the method comprising the following steps:
acquiring a plurality of images to be stitched through a plurality of cameras with different position parameters;
obtaining the spatial distance between any two cameras according to the position parameters; forming a view angle image pair from the two images to be stitched acquired by any one camera and the camera spatially nearest to it, thereby obtaining a plurality of groups of view angle image pairs; taking the image to be stitched with the largest image contrast in each group of view angle image pairs as the target image of that pair, and the image with the smallest image contrast as the matched image of that pair; obtaining a plurality of newly added view angle images from the target image through a neural radiance field (NeRF) network, and obtaining the reference image of each group of view angle image pairs according to the target image, the newly added view angle images and the matched image;
obtaining, through the SIFT algorithm, a plurality of matching point pairs, reference feature points, matching feature points and local feature descriptors of the matching feature points in each group of view angle image pairs; obtaining the consistency range of each reference feature point in each differential image at the same scale as the reference image, according to the gray values within a preset variable window around the reference feature point in the different differential images at that scale; and obtaining the fuzzy range descriptor of each matching feature point according to the consistency ranges and the matching point pairs;
obtaining, through the SIFT algorithm, target feature points and local feature descriptors of the target feature points in each group of view angle image pairs; obtaining the consistency range of each target feature point in each differential image at the same scale as the target image by the same method used for the reference feature points; obtaining the fuzzy range descriptor of each target feature point according to the consistency ranges; and obtaining key point pairs between the target image and the matched image in each group of view angle image pairs according to the local feature descriptors and fuzzy range descriptors of the target feature points and the matching feature points;
stitching the target image and the matched image in each group of view angle image pairs according to the key point pairs to obtain a plurality of stitched images, and continuing to stitch via the key point pair acquisition method based on the stitched images until all images to be stitched acquired by all cameras are merged into the same image, thereby obtaining the panoramic image of the flight space capsule.
Optionally, obtaining the reference image of each group of view angle image pairs according to the target image, the newly added view angle images and the matched image comprises the following specific steps:
taking any one group of view angle image pairs as a target image pair and any newly added view angle image in the target image pair as a target view angle image; applying the Fourier transform to the target view angle image and to the matched image to obtain two spectrograms; concatenating each spectrogram row by row, end to end, into a spectrum row vector; and computing the cosine similarity of the two spectrum row vectors, recorded as the reference similarity between the target view angle image and the matched image;
obtaining the reference similarity between the matched image and each newly added view angle image as well as the target image in the target image pair, and taking the newly added view angle image with the largest reference similarity as the reference image of the target image pair.
Optionally, obtaining, through the SIFT algorithm, the plurality of matching point pairs, reference feature points, matching feature points and local feature descriptors of the matching feature points in each group of view angle image pairs comprises the following specific steps:
detecting feature points in the reference image and the matched image of each group of view angle image pairs with the SIFT algorithm and obtaining their feature descriptors, recorded as the local feature descriptors of the feature points; obtaining a plurality of matching point pairs between the reference image and the matched image of each group of view angle image pairs by SIFT matching on the local feature descriptors; recording the feature points of the reference image in the matching point pairs as reference feature points and the feature points of the matched image as matching feature points; and recording the local feature descriptors of the feature points corresponding to the matching feature points as the local feature descriptors of the matching feature points.
Optionally, obtaining the consistency range of each reference feature point of each group of view angle image pairs in each differential image at the same scale as the reference image comprises the following specific steps:
taking any one group of view angle image pairs as a target image pair and any one reference feature point in the target image pair as a target reference feature point; obtaining the position corresponding to the target reference feature point on each differential image at the same scale as the reference image; taking any one differential image at that scale as a target differential image; constructing a preset variable window centered on the corresponding position of the target reference feature point in the target differential image; and obtaining the LBP value of the target reference feature point in the target differential image for each preset variable window according to the initial size and growth size of the preset variable window;
counting the numbers of 0s and 1s in the LBP value for the preset variable window of the initial size and taking the more frequent bit as the LBP mark value of the reference feature point; taking the ratio of the count of the LBP mark value to the number of bits of the LBP value at the initial size as the LBP mark duty ratio of the target reference feature point at the initial size in the target differential image;
obtaining the first size at which the LBP mark duty ratio is smaller than or equal to a preset first threshold, taking the size preceding it as the consistency size of the target reference feature point in the target differential image, and recording half of the side length of the consistency size, rounded down, as the consistency range of the target reference feature point in the target differential image; and obtaining the consistency range of the target reference feature point in each differential image at the same scale as the reference image.
Optionally, obtaining the fuzzy range descriptor of each matching feature point according to the consistency ranges and the matching point pairs comprises the following specific steps:
taking any one group of view angle image pairs as a target image pair and any one reference feature point in it as a target reference feature point; taking the maximum of all consistency ranges of the target reference feature point as the first element of its fuzzy range descriptor and the minimum as the second element, thereby obtaining the fuzzy range descriptor of the target reference feature point;
obtaining the fuzzy range descriptor of every reference feature point in the target image pair, and assigning each reference feature point's fuzzy range descriptor to the corresponding matching feature point according to the correspondence between reference feature points and matching feature points in the matching point pairs; and obtaining the fuzzy range descriptor of each matching feature point in each group of view angle image pairs.
Optionally, obtaining the key point pairs of the target image and the matched image in each group of view angle image pairs comprises the following specific steps:
taking any one group of view angle image pairs as a target image pair and any one matching feature point in the target image pair as a target matching feature point; obtaining the cosine similarity between the local feature descriptor of the target matching feature point and that of each target feature point, recorded as the feature similarity; sorting all feature similarities from large to small to obtain the feature similarity sequence of the target matching feature point; and taking the target feature points corresponding to the first preset number of elements in the feature similarity sequence as the reserved feature points of the target matching feature point;
obtaining the cosine similarity between the fuzzy range descriptor of the target matching feature point and that of each reserved feature point, recorded as the fuzzy similarity, and taking the product of the feature similarity and the fuzzy similarity of the same reserved feature point as the comprehensive similarity between the target matching feature point and that reserved feature point;
obtaining the comprehensive similarity between the target matching feature point and each reserved feature point, and taking the reserved feature point with the largest comprehensive similarity as the final matching point of the target matching feature point; obtaining the final matching point of each matching feature point in the target image pair and taking each matching feature point together with its final matching point as a key point pair; and obtaining a plurality of key point pairs in each group of view angle image pairs.
In a second aspect, another embodiment of the present invention provides a flight space capsule panoramic image stitching system, the system comprising:
an image acquisition module, which acquires a plurality of images to be stitched through a plurality of cameras with different position parameters;
an image matching module, which obtains the spatial distance between any two cameras according to the position parameters; forms a view angle image pair from the two images to be stitched acquired by any one camera and the camera spatially nearest to it, thereby obtaining a plurality of groups of view angle image pairs; takes the image to be stitched with the largest image contrast in each group of view angle image pairs as the target image of that pair and the image with the smallest image contrast as the matched image; obtains a plurality of newly added view angle images from the target image through the neural radiance field network, and obtains the reference image of each group of view angle image pairs according to the target image, the newly added view angle images and the matched image;
obtains, through the SIFT algorithm, a plurality of matching point pairs, reference feature points, matching feature points and local feature descriptors of the matching feature points in each group of view angle image pairs; obtains the consistency range of each reference feature point in each differential image at the same scale as the reference image according to the gray values within a preset variable window around the reference feature point in the different differential images at that scale; and obtains the fuzzy range descriptor of each matching feature point according to the consistency ranges and the matching point pairs;
obtains, through the SIFT algorithm, target feature points and local feature descriptors of the target feature points in each group of view angle image pairs; obtains the consistency range of each target feature point in each differential image at the same scale as the target image by the reference feature point consistency range acquisition method; obtains the fuzzy range descriptor of each target feature point according to the consistency ranges; and obtains the key point pairs of the target image and the matched image in each group of view angle image pairs according to the local feature descriptors and fuzzy range descriptors of the target feature points and matching feature points; and
an image stitching module, which stitches the target image and the matched image in each group of view angle image pairs according to the key point pairs to obtain a plurality of stitched images, and continues stitching via the key point pair acquisition method based on the stitched images until all images to be stitched acquired by all cameras are merged into the same image, obtaining the panoramic image of the flight space capsule.
The beneficial effects of the invention are as follows: the invention groups the images to be stitched by the camera position parameters to obtain view angle image pairs, selects the target image and matched image within each pair by image contrast, and quantifies the intersection area between images of different view angles and the matched image via their spectra to obtain the reference image, so that feature point matching between the reference image and the matched image avoids the matching errors caused by view angle distortion and improves the matching precision. A fuzzy range descriptor is obtained from the consistency ranges of the reference image under different blur parameters and assigned to the matching feature points; under the double verification of local feature descriptors and fuzzy range descriptors between target feature points and matching feature points, matching errors are further reduced and key point pairs are obtained; the images to be stitched are stitched based on the key point pairs to finally obtain the panoramic image. The LBP algorithm reduces the influence of topological relations being changed by view angle distortion, so that the final panoramic image is stitched accurately and with high precision.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below; obviously, the drawings described below are only some embodiments of the invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flow chart of a method for stitching panoramic images of a flight space capsule according to an embodiment of the present invention;
Fig. 2 is a block diagram of a system for stitching panoramic images of a flight space capsule according to another embodiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings; obviously, the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to fig. 1, a flow chart of a method for stitching panoramic images of a flight space capsule according to an embodiment of the invention is shown; the method comprises the following steps:
Step S001: acquire a plurality of images to be stitched using cameras with different position parameters in the flight space capsule.
The aim of this embodiment is to stitch a panoramic image from the images acquired by several cameras in the flight space capsule. Images captured at the same moment are therefore acquired by the cameras arranged in the capsule, each acquired image is recorded as an image to be stitched, and the position parameters of each camera in the capsule are acquired at the same time. Arranging cameras at different positions of a flight space capsule and calibrating their position parameters in advance are prior art, so the position parameters of each camera can be obtained directly.
At this point, a plurality of images to be stitched acquired by cameras at different positions in the flight space capsule have been obtained, together with the position parameters of each camera.
Step S002: obtain a plurality of groups of view angle image pairs according to the camera position parameters, and obtain the target image, matched image and reference image of each group of view angle image pairs through view angle transformation combined with image contrast and spectrum analysis.
It should be noted that images taken from different positions exhibit view angle distortion. Therefore, for each camera's image to be stitched, the corresponding image to be matched is first determined from the camera position parameters: the camera whose position parameters are closest to those of the camera in question yields the image with the most similar content. The image to be matched and the image to be stitched form a view angle image pair; the sharpness of each image is quantified by its contrast, the image with the larger contrast is taken as the target image and the one with the smaller contrast as the matched image. Further, view angle transformation is applied to the target image through an existing neural radiance field (NeRF) network, yielding several newly added view angle images based on the target image. The target image and each newly added view angle image intersect the matched image, but since the intersecting regions lie at different positions in the respective images, a difference image cannot quantify the size of the intersecting area; the similarity of the spectrum images can, and is therefore used to select the reference image.
Specifically, the spatial distance between any two cameras is first obtained from their position parameters (computing distances from calibrated position parameters is prior art and is not repeated in this embodiment). The two images to be stitched from any one camera and the camera at the smallest spatial distance from it form one group of view angle image pairs; a group is obtained for every camera in this way, yielding a plurality of groups. Note that when two cameras are each other's nearest neighbour, their two view angle image pairs coincide and count as a single group. For any group, the image contrast of both images to be stitched is computed (image contrast computation is prior art and is not repeated here); the image to be stitched with the larger image contrast is taken as the target image of the pair and the image with the smaller image contrast as its matched image. The target image and matched image of every group of view angle image pairs are obtained according to this method.
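For illustration, the grouping and role assignment above can be summarized in a short sketch. This is a minimal sketch rather than the embodiment's implementation: the helper names are invented here, and RMS contrast (the grey-level standard deviation) is assumed for the contrast measure, which the embodiment leaves to prior art.

```python
import numpy as np

def build_view_pairs(positions):
    """Pair each camera with its spatially nearest neighbour.

    positions: (N, 3) array of calibrated camera coordinates.
    Mutual nearest neighbours yield the same pair twice; the set
    collapses them into a single group of view angle image pairs."""
    dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # a camera never pairs with itself
    return sorted({tuple(sorted((i, int(dists[i].argmin()))))
                   for i in range(len(positions))})

def assign_roles(img_a, img_b):
    """Return (target image, matched image): the higher-contrast image
    is the target. RMS contrast is assumed as the contrast measure."""
    contrast = lambda img: float(np.asarray(img, dtype=np.float64).std())
    return (img_a, img_b) if contrast(img_a) >= contrast(img_b) else (img_b, img_a)
```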
Further, the target image of any group of view angle image pairs is input into the neural radiance field network, which outputs several images from different view angles, recorded as newly added view angle images (the neural radiance field network is prior art and is not repeated in this embodiment). Taking any newly added view angle image of the group as an example, the Fourier transform is applied to the newly added view angle image and to the matched image to obtain two spectrograms; each spectrogram is concatenated row by row, end to end, into a spectrum row vector, and the cosine similarity of the two spectrum row vectors is computed and recorded as the reference similarity between the newly added view angle image and the matched image. The reference similarity between the matched image and each newly added view angle image, as well as the target image itself, is obtained according to this method, and the image with the largest reference similarity is taken as the reference image of the group; if the target image has the largest reference similarity, the target image serves as the reference image. The reference image of every group of view angle image pairs is obtained as described above.
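The reference similarity computation can be sketched as below, assuming greyscale images of equal size and taking the magnitude spectrum as the spectrogram (the embodiment does not specify magnitude versus the raw transform); the newly added view angle images themselves are assumed to come from an external NeRF implementation.

```python
import numpy as np

def reference_similarity(view_img, matched_img):
    """Cosine similarity of the two spectrum row vectors: each
    magnitude spectrum is flattened row by row, end to end."""
    spec_a = np.abs(np.fft.fft2(np.asarray(view_img, dtype=np.float64))).ravel()
    spec_b = np.abs(np.fft.fft2(np.asarray(matched_img, dtype=np.float64))).ravel()
    return float(spec_a @ spec_b /
                 (np.linalg.norm(spec_a) * np.linalg.norm(spec_b) + 1e-12))

# The reference image is the candidate (target image or newly added view
# angle image) with the largest reference similarity to the matched image:
# reference = max(candidates, key=lambda v: reference_similarity(v, matched))
```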
At this point, a plurality of groups of view angle image pairs have been obtained from the camera position parameters, together with the target image, matched image and reference image of each pair.
Step S003: obtain feature points and local feature descriptors in the reference image and the matched image of each group of view angle image pairs through the SIFT algorithm; obtain the matching point pairs, reference feature points and matching feature points from the local feature descriptors; obtain the consistency range of each reference feature point under different blur parameters through the LBP algorithm; and obtain the fuzzy range descriptor of each matching feature point from the consistency ranges.
It should be noted that the reference image is obtained by transforming the view angle of the target image and is also the view angle image with the largest intersection area with the matched image in the pair. Feature point detection is therefore performed on the reference image and the matched image with the SIFT algorithm; local feature descriptors of the feature points are obtained; matching point pairs between the reference image and the matched image are obtained from the local feature descriptors; the feature points of the reference image in the matching point pairs are recorded as reference feature points and those of the matched image as matching feature points. To counter the influence of view angle distortion, the consistency range of each reference feature point — the range over which its feature exerts influence — is quantified by the LBP algorithm; a fuzzy range descriptor of the reference feature point is constructed from its consistency ranges under the different blur parameters used within the SIFT algorithm, and the fuzzy range descriptor of the corresponding matching feature point is then obtained through the matching point pairs. Once the matching feature points carry fuzzy range descriptors, a match between the target image and the matched image must satisfy not only similarity of the local feature descriptors but also similarity of the fuzzy range descriptors. The fuzzy range descriptor acts as a correction for view angle distortion: distortion changes the extent of a feature, and fixing the fuzzy range descriptor guards against that change.
It should further be noted that view angle distortion stretches ground objects differently at different angles, whereas the topological relation — the adjacency between different objects — is far less affected by it; the topological relation of the reference feature points is therefore needed once the matching point pairs are obtained. The LBP descriptor, computed by comparing a central point with its surrounding points, carries a degree of topological information. Traditional methods of computing topological information, such as triangulated-network topology, are computationally heavy, sensitive to noise, and quantify adjacency with strict requirements on position information. Moreover, most surfaces in an actual space capsule are metal, and cameras at different positions see different reflections of the same region, which introduces large errors. The LBP value is statistical topology information that contains no specific position information, so it improves matching precision while avoiding the loss of matches that overly strict topological requirements would cause.
Specifically, for any group of view angle image pairs, feature point detection is first performed on the reference image and the matched image with the SIFT algorithm, and the feature descriptors — 128-dimensional vectors — are obtained and recorded as the local feature descriptors of the feature points (SIFT feature detection and description are prior art and are not repeated in this embodiment). SIFT matching is then performed on the local feature descriptors of the reference image and the matched image to obtain a plurality of matching point pairs (SIFT matching is prior art and is not repeated here). The feature points of the reference image in the matching point pairs are recorded as reference feature points and those of the matched image as matching feature points; note that not all detected feature points match successfully, and some feature points in the reference image and in the matched image remain unmatched. The matching point pairs, reference feature points and matching feature points of every group of view angle image pairs are obtained according to this method, and the local feature descriptors of the matching feature points are extracted.
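A sketch of this detection and matching step using OpenCV's SIFT is given below; Lowe's ratio test is assumed for the matching rule, which the embodiment leaves to standard SIFT matching.

```python
import cv2

def sift_match(reference_img, matched_img, ratio=0.75):
    """Detect SIFT feature points in both greyscale images and return
    matching point pairs with their 128-dimensional local descriptors."""
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(reference_img, None)
    kp_m, des_m = sift.detectAndCompute(matched_img, None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_r, des_m, k=2)
    pairs = []
    for cand in knn:
        if len(cand) == 2 and cand[0].distance < ratio * cand[1].distance:
            m = cand[0]  # reference feature point <-> matching feature point
            pairs.append((kp_r[m.queryIdx], kp_m[m.trainIdx],
                          des_r[m.queryIdx], des_m[m.trainIdx]))
    return pairs
```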
During SIFT feature point detection, a Gaussian pyramid and a Gaussian difference pyramid are constructed for each image. At a given scale the Gaussian pyramid holds several images, each corresponding to one blur parameter (Gaussian kernels of different powers); the differential images of the Gaussian difference pyramid at that scale are obtained as differences of images convolved with different blur parameters at the same scale, and the feature points are detected in the Gaussian difference pyramid. Since the different differential images at one scale each correspond to one blur parameter, the consistency range of a feature point can be quantified with the LBP algorithm over the differential images at the same scale as the reference image, yielding the fuzzy range descriptor.
Specifically, for any group of view angle image pairs, the differential images at the same scale as the reference image are taken from the SIFT detection process; each differential image corresponds to one blur parameter, and every reference feature point has a corresponding position on each differential image. For any reference feature point, its corresponding position on each differential image is obtained. Taking any one differential image as an example, a preset variable window is constructed centered on the corresponding position of the reference feature point; the initial size of the window is set to 3×3 and the growth size to 2, i.e. after one increase the preset variable window has size 5×5. For the reference feature point, the LBP value is counted in each preset variable window of the differential image, where the LBP value is obtained by the LBP algorithm from the gray value of the central pixel and the gray values of the other pixels in the window (prior art, not repeated in this embodiment); for example, the LBP value in the preset variable window of the initial size may be 11111111 (8 bits), and after one increase 111111110011000100001000 (24 bits).
Further, the numbers of 0s and 1s in the LBP value at the initial size are counted, and the more frequent bit is taken as the LBP mark value of the reference feature point: 0 if the 0s predominate, 1 if the 1s do. The ratio of the count of the LBP mark value to the number of bits of the LBP value at the initial size is the LBP mark duty ratio of the reference feature point at the initial size in this differential image; for example, for the LBP value 11111100 the LBP mark duty ratio is 6/8 = 0.75. As the size of the preset variable window grows, the LBP mark value stays fixed while the LBP mark duty ratio changes. A preset first threshold is given for judging the consistency range; in this embodiment it is 0.5. The first size at which the LBP mark duty ratio becomes smaller than or equal to the preset first threshold is found, the size preceding it is taken as the consistency size of the reference feature point in this differential image, and half the side length of the consistency size, rounded down, is recorded as the consistency range of the reference feature point in this differential image; for example, a consistency size of 7×7 gives a consistency range of 3. The consistency range of the reference feature point in each differential image at the same scale as the reference image is obtained according to this method; the maximum over all its consistency ranges is taken as the first datum and the minimum as the second datum of the reference feature point's fuzzy range descriptor, which is therefore a two-element tuple. The fuzzy range descriptor of every reference feature point in the group is obtained in this way, and each is assigned to the corresponding matching feature point through the correspondence between reference and matching feature points in the matching point pairs; the fuzzy range descriptors of each matching feature point in every group of view angle image pairs are obtained likewise.
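The consistency range and fuzzy range descriptor computation can be sketched as below, following the worked example above (3×3 initial size, growth size 2, threshold 0.5). The bit convention (a pixel at least as bright as the centre yields 1) and the cap on the window size are assumptions, and feature points are assumed to lie far enough from the image border.

```python
import numpy as np

def consistency_range(dog, y, x, init=3, step=2, thresh=0.5, max_size=15):
    """Consistency range of the feature point at (y, x) in one
    differential (DoG) image, via a growing preset variable window."""
    centre = dog[y, x]

    def lbp_bits(size):
        h = size // 2
        win = dog[y - h:y + h + 1, x - h:x + h + 1]
        bits = (win >= centre).astype(np.uint8).ravel()
        return np.delete(bits, bits.size // 2)  # every non-centre pixel vs. the centre

    b0 = lbp_bits(init)
    mark = 1 if 2 * int(b0.sum()) >= b0.size else 0  # LBP mark value: majority bit at init size
    prev = init                                      # fallback if the first ratio already fails
    for size in range(init, max_size + 1, step):
        b = lbp_bits(size)
        if (b == mark).sum() / b.size <= thresh:     # LBP mark duty ratio at this size
            return prev // 2                         # floor of half the consistency side length
        prev = size
    return prev // 2

def fuzzy_descriptor(dogs, y, x):
    """Fuzzy range descriptor: (max, min) of the consistency ranges
    over all differential images at the feature point's scale."""
    ranges = [consistency_range(d, y, x) for d in dogs]
    return (max(ranges), min(ranges))
```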
Thus, a plurality of reference feature points and matching feature points in each group of view angle image pairs are obtained, and a local feature descriptor and a fuzzy range descriptor of each matching feature point are obtained.
Step S004: obtain the target feature points and their local feature descriptors in each group of view angle image pairs through the SIFT algorithm; obtain the consistency ranges and fuzzy range descriptor of each target feature point through the LBP algorithm; and obtain the matching result between the target image and the matched image of each group of view angle image pairs from the local feature descriptors and fuzzy range descriptors of the target feature points and matching feature points, thereby completing the panoramic image stitching.
It should be noted that the matching feature points receive fuzzy range descriptors through the matching relation between feature points of the reference image and of the matched image, and fuzzy range descriptors are likewise obtained for the feature points of the target image. A target feature point and a matching feature point can therefore match successfully only if, on top of similar local feature descriptors, their fuzzy range descriptors are also similar; this constraint suppresses the influence of view angle distortion on the matching. Since the fuzzy range descriptors are taken from the view angle image with the largest reference similarity to the matched image and assigned to the matching feature points, similar fuzzy range descriptors during the matching of target and matching feature points indicate that view angle distortion affects the features little, which improves the matching precision.
Specifically, feature point detection is performed on the target image of each group of view angle image pairs with the SIFT algorithm; the obtained feature points are recorded as target feature points, and their feature descriptors as the local feature descriptors of the target feature points. From the Gaussian difference pyramid built during SIFT detection, the consistency range of each target feature point in each differential image at the same scale as the target image is obtained by the acquisition method used for the reference feature point consistency ranges, and the fuzzy range descriptor of each target feature point — likewise a two-element tuple — is obtained from the maximum and minimum of all its consistency ranges, by the acquisition method used for the reference feature point fuzzy range descriptors.
Further, for any group of view angle image pairs, every target feature point and matching feature point now carries a local feature descriptor and a fuzzy range descriptor. For any matching feature point, the cosine similarity between its local feature descriptor and that of each target feature point is obtained and recorded as the feature similarity; all feature similarities are sorted from large to small into the feature similarity sequence of the matching feature point, and the target feature points corresponding to the first preset number of elements (10 in this embodiment) are taken as the reserved feature points of the matching feature point. The cosine similarity between the fuzzy range descriptor of the matching feature point and that of each reserved feature point is obtained and recorded as the fuzzy similarity, and the product of the feature similarity and fuzzy similarity of the same reserved feature point is taken as the comprehensive similarity between the matching feature point and that reserved feature point. The comprehensive similarity with each reserved feature point is obtained, and the reserved feature point with the largest comprehensive similarity is taken as the final matching point of the matching feature point. The final matching point of every matching feature point in the group is obtained according to this method, each matching feature point and its final matching point form a key point pair, and the target image and the matched image are stitched according to the key point pairs; image stitching from matched key point pairs is prior art and is not repeated in this embodiment, for example the image stitching method disclosed in patent CN106886979A.
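This two-stage matching can be sketched as follows; the point records are invented dictionaries here, each holding the 128-dimensional local feature descriptor and the two-element fuzzy range descriptor.

```python
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, dtype=np.float64), np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def final_matches(matched_pts, target_pts, keep=10):
    """For each matching feature point: shortlist the `keep` target
    feature points with the highest feature similarity, then re-rank
    by comprehensive similarity (feature x fuzzy) and keep the best."""
    key_pairs = []
    for m in matched_pts:
        sims = sorted(((cosine(m['desc'], t['desc']), t) for t in target_pts),
                      key=lambda s: -s[0])[:keep]  # reserved feature points
        best = max(sims, key=lambda s: s[0] * cosine(m['fuzzy'], s[1]['fuzzy']))
        key_pairs.append((m, best[1]))             # matching feature point + final matching point
    return key_pairs
```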
Further, the target image and matched image of every group of view angle image pairs are stitched to obtain a plurality of stitched images. For each stitched image, a partner stitched image is obtained according to the size of the intersecting area, and the two form a stitched image pair: the intersection size is judged via the spectrum row vector derived from the spectrogram of the stitched image, and the other stitched image whose spectrum row vector has the largest cosine similarity with it is taken as its partner. Key point pairs are obtained for each stitched image pair by the key point pair acquisition method and the pair is stitched; the images resulting from the stitched image pairs continue to be stitched by the same key point pair acquisition method until all images to be stitched acquired by all cameras are merged into the same image, yielding the panoramic image of the flight space capsule. Note that the first round of stitching starts from the two images to be stitched of each pair, and view angle image pairs intersect — any image to be stitched may appear in several groups — so each round of pairing reduces the number of images until a single panoramic image containing all the images to be stitched remains.
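The per-pair stitching step itself is delegated to prior art; for completeness, a minimal homography-based stand-in is sketched below (a RANSAC homography from at least four key point pairs; the doubled canvas width is an arbitrary illustrative choice).

```python
import cv2
import numpy as np

def stitch_pair(target_img, matched_img, key_pairs):
    """Warp the matched image into the target image's frame using a
    RANSAC homography estimated from the key point pairs, then paste
    the target image on top. key_pairs holds (matched keypoint,
    target keypoint) tuples of cv2.KeyPoint objects."""
    src = np.float32([m.pt for m, _ in key_pairs]).reshape(-1, 1, 2)
    dst = np.float32([t.pt for _, t in key_pairs]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = target_img.shape[:2]
    canvas = cv2.warpPerspective(matched_img, H, (2 * w, h))  # illustrative canvas size
    canvas[:h, :w] = target_img
    return canvas
```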
Thus, the stitching of the panoramic image of the flight space capsule is completed.
Referring to fig. 2, a block diagram of a system for stitching panoramic images of a flight space capsule according to another embodiment of the invention is shown; the system comprises:
an image acquisition module S101, which acquires a plurality of images to be stitched through cameras with different position parameters in the flight space capsule;
an image matching module S102, which:
(1) obtains a plurality of groups of view angle image pairs according to the camera position parameters, and obtains the target image, matched image and reference image of each group of view angle image pairs through view angle transformation combined with image contrast and spectrum analysis;
(2) obtains feature points and local feature descriptors in the reference image and the matched image of each group of view angle image pairs through the SIFT algorithm, obtains the matching point pairs, reference feature points and matching feature points from the local feature descriptors, obtains the consistency range of each reference feature point under different blur parameters through the LBP algorithm, and obtains the fuzzy range descriptor of each matching feature point from the consistency ranges;
(3) obtains the target feature points and their local feature descriptors in each group of view angle image pairs through the SIFT algorithm, obtains the consistency ranges and fuzzy range descriptor of each target feature point through the LBP algorithm, and obtains the matching result between the target image and the matched image of each group from the local feature descriptors and fuzzy range descriptors of the target feature points and matching feature points; and
an image stitching module S103, which completes the panoramic image stitching according to the matching result between the target image and the matched image of each group of view angle image pairs.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
Claims (5)
1. A method for stitching panoramic images of a flight space capsule, characterized by comprising the following steps:
acquiring a plurality of images to be stitched through a plurality of cameras with different position parameters;
obtaining the spatial distance between any two cameras according to the position parameters; forming a view angle image pair from the two images to be stitched acquired by any one camera and the camera spatially nearest to it, thereby obtaining a plurality of groups of view angle image pairs; taking the image to be stitched with the largest image contrast in each group of view angle image pairs as the target image of that pair, and the image with the smallest image contrast as the matched image of that pair; obtaining a plurality of newly added view angle images from the target image through a neural radiance field network, and obtaining the reference image of each group of view angle image pairs according to the target image, the newly added view angle images and the matched image;
obtaining, through a SIFT algorithm, a plurality of matching point pairs, reference feature points, matching feature points and local feature descriptors of the matching feature points in each group of view angle image pairs; obtaining the consistency range of each reference feature point in each differential image at the same scale as the reference image according to the gray values within a preset variable window around the reference feature point in the different differential images at that scale; and obtaining the fuzzy range descriptor of each matching feature point according to the consistency ranges and the matching point pairs;
obtaining, through the SIFT algorithm, target feature points and local feature descriptors of the target feature points in each group of view angle image pairs; obtaining the consistency range of each target feature point in each differential image at the same scale as the target image by the reference feature point consistency range acquisition method; obtaining the fuzzy range descriptor of each target feature point according to the consistency ranges; and obtaining key point pairs of the target image and the matched image in each group of view angle image pairs according to the local feature descriptors and fuzzy range descriptors of the target feature points and the matching feature points;
stitching the target image and the matched image in each group of view angle image pairs according to the key point pairs to obtain a plurality of stitched images, and continuing to stitch via the key point pair acquisition method based on the stitched images until all images to be stitched acquired by all cameras are merged into the same image, thereby obtaining the panoramic image of the flight space capsule;
wherein obtaining the consistency range of each reference feature point of each group of view angle image pairs in each differential image at the same scale as the reference image comprises the following specific steps:
taking any one group of view angle image pairs as a target image pair and any one reference feature point in the target image pair as a target reference feature point; obtaining the position corresponding to the target reference feature point on each differential image at the same scale as the reference image; taking any one differential image at that scale as a target differential image; constructing a preset variable window centered on the corresponding position of the target reference feature point in the target differential image; and obtaining the LBP value of the target reference feature point in the target differential image for each preset variable window according to the initial size and growth size of the preset variable window;
counting the numbers of 0s and 1s in the LBP value for the preset variable window of the initial size and taking the more frequent bit as the LBP mark value of the reference feature point; taking the ratio of the count of the LBP mark value to the number of bits of the LBP value at the initial size as the LBP mark duty ratio of the target reference feature point at the initial size in the target differential image;
obtaining the first size at which the LBP mark duty ratio is smaller than or equal to a preset first threshold, taking the size preceding it as the consistency size of the target reference feature point in the target differential image, and recording half of the side length of the consistency size, rounded down, as the consistency range of the target reference feature point in the target differential image; and obtaining the consistency range of the target reference feature point in each differential image at the same scale as the reference image;
and wherein obtaining the fuzzy range descriptor of each matching feature point according to the consistency ranges and the matching point pairs comprises the following specific steps:
taking any one group of view angle image pairs as a target image pair and any one reference feature point in it as a target reference feature point; taking the maximum of all consistency ranges of the target reference feature point as the first element of its fuzzy range descriptor and the minimum as the second element, thereby obtaining the fuzzy range descriptor of the target reference feature point;
obtaining the fuzzy range descriptor of every reference feature point in the target image pair, and assigning each reference feature point's fuzzy range descriptor to the corresponding matching feature point according to the correspondence between reference feature points and matching feature points in the matching point pairs; and obtaining the fuzzy range descriptor of each matching feature point in each group of view angle image pairs.
2. The method for stitching panoramic images of a flight space capsule according to claim 1, wherein obtaining the reference image of each group of view angle image pairs according to the target image, the newly added view angle images and the matched image comprises the following specific steps:
taking any one group of view angle image pairs as a target image pair and any newly added view angle image in the target image pair as a target view angle image; applying the Fourier transform to the target view angle image and to the matched image to obtain two spectrograms; concatenating each spectrogram row by row, end to end, into a spectrum row vector; and computing the cosine similarity of the two spectrum row vectors, recorded as the reference similarity between the target view angle image and the matched image;
obtaining the reference similarity between the matched image and each newly added view angle image as well as the target image in the target image pair, and taking the newly added view angle image with the largest reference similarity as the reference image of the target image pair.
3. The method for stitching the panoramic image of the space-flight deck according to claim 1, wherein the obtaining, by SIFT algorithm, a plurality of matching point pairs, reference feature points, matching feature points and local feature descriptors of the matching feature points in each group of view angle image pairs comprises the following specific steps:
detecting the feature points of the reference image and the matched image in each group of view angle image pairs by the SIFT algorithm and obtaining their feature descriptors, recorded as the local feature descriptors of the feature points; obtaining a plurality of matching point pairs between the reference image and the matched image in each group of view angle image pairs by SIFT matching according to the local feature descriptors; recording the feature points of the reference image in the matching point pairs as reference feature points and the feature points of the matched image in the matching point pairs as matching feature points; and recording the local feature descriptors of the feature points corresponding to the matching feature points as the local feature descriptors of the matching feature points.
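This step maps directly onto OpenCV's SIFT implementation; a sketch using plain mutual-nearest-neighbour matching, since the claim asks only for descriptor-based matching (the return layout is an assumption):

```python
import cv2

def sift_match(reference_img, matched_img):
    """Detect SIFT feature points in both images and match them by descriptor."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(reference_img, None)  # local feature descriptors
    kp_mat, des_mat = sift.detectAndCompute(matched_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)         # mutual nearest neighbours
    matches = matcher.match(des_ref, des_mat)
    # Each match is one matching point pair: a reference feature point and a
    # matching feature point, the latter keeping its local feature descriptor.
    pairs = [(kp_ref[m.queryIdx].pt, kp_mat[m.trainIdx].pt) for m in matches]
    match_descriptors = {m.trainIdx: des_mat[m.trainIdx] for m in matches}
    return pairs, match_descriptors
```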
4. The method for stitching panoramic images of a flight space capsule according to claim 1, wherein acquiring the key point pairs of the target image and the matched image in each group of view angle image pairs comprises the following specific steps:
taking any group of view angle image pairs as a target image pair, taking any matching feature point in the target image pair as a target matching feature point, acquiring the cosine similarity between the local feature descriptor of the target matching feature point and that of each target feature point, recording it as the feature similarity, arranging all feature similarities in descending order to obtain the feature similarity sequence of the target matching feature point, and taking the target feature points corresponding to the first preset number of elements of the feature similarity sequence as the reserved feature points of the target matching feature point;
acquiring the cosine similarity between the fuzzy range descriptor of the target matching feature point and that of each reserved feature point, recording it as the fuzzy similarity, and taking the product of the feature similarity and the fuzzy similarity of the same reserved feature point as the comprehensive similarity between the target matching feature point and that reserved feature point;
acquiring the comprehensive similarity between the target matching feature point and each reserved feature point, and taking the reserved feature point with the largest comprehensive similarity as the final matching point of the target matching feature point; obtaining the final matching point of each matching feature point in the target image pair, and taking each matching feature point together with its final matching point as a key point pair; acquiring a plurality of key point pairs in each group of view angle image pairs.
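A sketch of this claim-4 re-matching, assuming descriptors are NumPy vectors and the fuzzy range descriptor is the 2-vector built earlier; `keep_n` stands in for the claim's "preset number":

```python
import numpy as np

def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def final_match(match_local, match_fuzzy, target_locals, target_fuzzies, keep_n=5):
    """match_*: descriptors of one matching feature point;
    target_*: descriptor lists over all target feature points."""
    feat_sims = np.array([cos_sim(match_local, t) for t in target_locals])
    reserved = np.argsort(feat_sims)[::-1][:keep_n]   # reserved feature points
    best_idx, best_score = None, -1.0
    for i in reserved:
        fuzzy_sim = cos_sim(np.asarray(match_fuzzy, float),
                            np.asarray(target_fuzzies[i], float))
        combined = feat_sims[i] * fuzzy_sim           # comprehensive similarity
        if combined > best_score:
            best_idx, best_score = int(i), combined
    return best_idx                                   # index of the final matching point
```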
5. A flight space capsule panoramic image stitching system, the system comprising:
an image acquisition module, which acquires a plurality of images to be stitched through a plurality of cameras with different position parameters;
an image matching module, which: acquires the spatial distance between any two cameras according to the position parameters; forms a view angle image pair from the two images to be stitched acquired by any camera and by the camera nearest to it in spatial distance, so as to obtain a plurality of groups of view angle image pairs; takes the image to be stitched with the larger image contrast in each group of view angle image pairs as the target image of that pair, and the image to be stitched with the smaller image contrast as the matched image; acquires a plurality of newly added view images according to the target image and a neural radiance field (NeRF) network; and acquires the reference image of each group of view angle image pairs according to the target image, the newly added view images and the matched image;
acquires, by the SIFT algorithm, a plurality of matching point pairs, reference feature points, matching feature points and local feature descriptors of the matching feature points in each group of view angle image pairs; acquires the consistency range of each reference feature point in each group of view angle image pairs in each differential image at the same scale as the reference image, according to the gray values within a preset variable window around the reference feature point in the different differential images at that scale; and acquires the fuzzy range descriptor of each matching feature point according to the consistency ranges and the matching point pairs;
acquires, by the SIFT algorithm, the target feature points and the local feature descriptors of the target feature points in each group of view angle image pairs; acquires the consistency range of each target feature point in each differential image at the same scale as the target image by the same method used for the reference feature points; acquires the fuzzy range descriptor of each target feature point according to the consistency ranges; and acquires the key point pairs of the target image and the matched image in each group of view angle image pairs according to the local feature descriptors and the fuzzy range descriptors of the target feature points and the matching feature points;
an image stitching module, which stitches the target image and the matched image in each group of view angle image pairs according to the key point pairs to obtain a plurality of stitched images, and continues stitching on the basis of the stitched images by the same key point pair acquisition method until all images to be stitched acquired by all cameras are merged into one image, so as to obtain the panoramic image of the flight space capsule (a homography-based sketch of this step follows claim 5);
the specific method for acquiring the consistency range of each reference feature point in each group of view angle image pairs in each differential image at the same scale as the reference image comprises the following steps:
taking any group of view angle image pairs as a target image pair, taking any reference feature point in the target image pair as a target reference feature point, acquiring the corresponding position of the target reference feature point on each differential image at the same scale as the reference image, taking any differential image at that scale as a target differential image, constructing a preset variable window centered on the corresponding position of the target reference feature point in the target differential image, and acquiring the LBP value of the target reference feature point in the target differential image within each preset variable window according to the initial size and the growth size of the preset variable window;
counting the numbers of 0s and 1s in the LBP value within the preset variable window at the initial size, taking the bit value that occurs more often as the LBP mark value of the target reference feature point, and taking the ratio of the count of the LBP mark value to the total number of bits in the LBP value at that size as the LBP mark duty ratio of the target reference feature point at the initial size in the target differential image; computing the LBP mark duty ratio in the same way at each subsequent grown size;
acquiring the first size whose LBP mark duty ratio is smaller than or equal to the preset first threshold, taking the size immediately before it as the consistency size of the target reference feature point in the target differential image, and recording half of the side length of the consistency size, rounded down, as the consistency range of the target reference feature point in the target differential image; acquiring the consistency range of the target reference feature point in each differential image at the same scale as the reference image;
the fuzzy range descriptor of each matching feature point is obtained according to the consistency ranges and the matching point pairs; the specific method comprises the following steps:
taking any group of view angle image pairs as a target image pair, taking any reference feature point in the target image pair as a target reference feature point, taking the maximum value among all consistency ranges of the target reference feature point as the first element of the fuzzy range descriptor of the target reference feature point, and taking the minimum value as the second element, so as to obtain the fuzzy range descriptor of the target reference feature point;
acquiring the fuzzy range descriptor of each reference feature point in the target image pair, and taking the fuzzy range descriptor of each reference feature point as the fuzzy range descriptor of its corresponding matching feature point according to the correspondence between reference feature points and matching feature points in the matching point pairs; acquiring the fuzzy range descriptor of each matching feature point in each group of view angle image pairs.
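To illustrate the image stitching module of claim 5 (referenced above), a minimal homography-based sketch that consumes the key point pairs; RANSAC estimation, the warp model, and the simple paste compositing are assumptions, since the claims do not fix how the aligned images are blended:

```python
import cv2
import numpy as np

def stitch(target_img, matched_img, key_point_pairs):
    """key_point_pairs: list of ((x, y) in matched image, (x, y) in target image)."""
    src = np.float32([p[0] for p in key_point_pairs]).reshape(-1, 1, 2)
    dst = np.float32([p[1] for p in key_point_pairs]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust to outlier pairs
    h, w = target_img.shape[:2]
    canvas = cv2.warpPerspective(matched_img, H, (w * 2, h))  # crude canvas size
    canvas[0:h, 0:w] = target_img                             # paste target on top
    return canvas
```

Repeating this on the stitched results, pair by pair, merges all camera views into the single panoramic image.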
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310431498.6A CN116167921B (en) | 2023-04-21 | 2023-04-21 | Method and system for splicing panoramic images of flight space capsule |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310431498.6A CN116167921B (en) | 2023-04-21 | 2023-04-21 | Method and system for splicing panoramic images of flight space capsule |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116167921A CN116167921A (en) | 2023-05-26 |
CN116167921B true CN116167921B (en) | 2023-07-11 |
Family
ID=86413430
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310431498.6A Active CN116167921B (en) | 2023-04-21 | 2023-04-21 | Method and system for splicing panoramic images of flight space capsule |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116167921B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116433887B (en) * | 2023-06-12 | 2023-08-15 | 山东鼎一建设有限公司 | Building rapid positioning method based on artificial intelligence |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102521838B (en) * | 2011-12-19 | 2013-11-27 | 国家计算机网络与信息安全管理中心 | Image searching/matching method and system for same |
WO2016165016A1 (en) * | 2015-04-14 | 2016-10-20 | Magor Communications Corporation | View synthesis-panorama |
US10839556B2 (en) * | 2018-10-23 | 2020-11-17 | Microsoft Technology Licensing, Llc | Camera pose estimation using obfuscated features |
CN111242848B (en) * | 2020-01-14 | 2022-03-04 | 武汉大学 | Binocular camera image suture line splicing method and system based on regional feature registration |
US12106446B2 (en) * | 2021-03-27 | 2024-10-01 | Mitsubishi Electric Research Laboratories, Inc. | System and method of image stitching using robust camera pose estimation |
CN113902657A (en) * | 2021-08-26 | 2022-01-07 | 北京旷视科技有限公司 | Image splicing method and device and electronic equipment |
CN114125269B (en) * | 2021-10-29 | 2023-05-23 | 南京信息工程大学 | Mobile phone real-time panoramic shooting method based on deep learning |
2023-04-21: CN application CN202310431498.6A granted as patent CN116167921B (en), status Active
Also Published As
Publication number | Publication date |
---|---|
CN116167921A (en) | 2023-05-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110097093B (en) | Method for accurately matching heterogeneous images | |
CN111340701B (en) | Circuit board image splicing method for screening matching points based on clustering method | |
CN113159300B (en) | Image detection neural network model, training method thereof and image detection method | |
CN111860695A (en) | Data fusion and target detection method, device and equipment | |
CN111369605B (en) | Infrared and visible light image registration method and system based on edge features | |
CN108550166B (en) | Spatial target image matching method | |
CN116167921B (en) | Method and system for splicing panoramic images of flight space capsule | |
CN110084743B (en) | Image splicing and positioning method based on multi-flight-zone initial flight path constraint | |
CN110766657B (en) | Laser interference image quality evaluation method | |
CN112329662B (en) | Multi-view saliency estimation method based on unsupervised learning | |
CN111242026A (en) | Remote sensing image target detection method based on spatial hierarchy perception module and metric learning | |
CN110555820A (en) | Image fusion method based on convolutional neural network and dynamic guide filtering | |
CN113128518B (en) | Sift mismatch detection method based on twin convolution network and feature mixing | |
CN111079585B (en) | Pedestrian re-identification method combining image enhancement with pseudo-twin convolutional neural network | |
CN110969657B (en) | Gun ball coordinate association method and device, electronic equipment and storage medium | |
US11645827B2 (en) | Detection method and device for assembly body multi-view change based on feature matching | |
CN113723380B (en) | Face recognition method, device, equipment and storage medium based on radar technology | |
Wu et al. | An accurate feature point matching algorithm for automatic remote sensing image registration | |
CN116342466A (en) | Image matting method and related device | |
Ren et al. | SAR image matching method based on improved SIFT for navigation system | |
CN111027616B (en) | Line characteristic description system based on end-to-end learning | |
CN114565653A (en) | Heterogeneous remote sensing image matching method with rotation change and scale difference | |
CN106327423B (en) | Remote sensing image registration method and system based on directed line segment | |
CN111401385A (en) | Similarity calculation method for image local topological structure feature descriptors | |
CN116958172B (en) | Urban protection and update evaluation method based on three-dimensional space information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||