CN116757936B - Image matching relation acquisition method and image stitching method thereof - Google Patents

Image matching relation acquisition method and image stitching method thereof

Info

Publication number
CN116757936B
CN116757936B (application CN202311058975.5A)
Authority
CN
China
Prior art keywords
image
images
pairs
point pairs
matching relationship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311058975.5A
Other languages
Chinese (zh)
Other versions
CN116757936A (en)
Inventor
葛俊彦
汤翔
王佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tuodao Medical Technology Co Ltd
Original Assignee
Tuodao Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tuodao Medical Technology Co Ltd filed Critical Tuodao Medical Technology Co Ltd
Priority to CN202311058975.5A priority Critical patent/CN116757936B/en
Publication of CN116757936A publication Critical patent/CN116757936A/en
Application granted granted Critical
Publication of CN116757936B publication Critical patent/CN116757936B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image matching relationship acquisition method and an image stitching method using it. The image matching relationship acquisition method comprises the following steps: S1, extracting feature point pairs from two images having an overlapping region; S2, constructing a matching relationship model between the two images; and S3, calculating the matching relationship of the two images based on the matching relationship model obtained in S2 and different feature point pairs selected from those obtained in S1. By selecting the optimal matching relationship through repeated iterative calculation, the application can eliminate outlier pairs among the feature point pairs, reduce their interference with the image matching relationship, improve tolerance to mismatched feature point pairs, and reduce errors produced by uncertain factors in the positional-relationship computation, so that automatic image stitching is completed quickly and accurately.

Description

Image matching relation acquisition method and image stitching method thereof
Technical Field
The application relates to the field of image processing, in particular to an image matching relation acquisition method and an image stitching method thereof.
Background
With the development of medical imaging equipment, medical images play a critical role in clinical judgment. In some medical scenarios, clinical judgments must be made from images covering a larger region of tissue or bone, for example a full-spine image for assessing scoliosis. The mobile C-arm X-ray machine is a common medical image acquisition device with the advantages of low price, small volume, convenient movement, and low radiation dose. However, due to the limitation of its FOV (field of view), a C-arm X-ray machine cannot acquire a full-spine image in a single acquisition.
To solve this problem, the C-arm X-ray machine must acquire images of different bone segments separately, and those images are then stitched together to obtain a complete orthopedic image. Manual stitching places high demands on a doctor's expertise and proficiency; automatic stitching lowers the operational requirements and is quick and convenient.
The conventional automatic stitching process involves several steps: extracting feature points from the images to be stitched, matching the feature points, calculating the matching relationship of the images, and moving and stitching the images. Existing matching-relationship calculations depend heavily on the accuracy of feature point matching, so inaccurate feature point matches directly degrade the accuracy of the computed matching relationship.
Disclosure of Invention
The application aims to: the accuracy of the image matching relationship is mainly influenced by two aspects: (1) inaccurate pairing of feature point pairs, i.e., mismatches; (2) errors produced by uncertain factors in the positional-relationship computation. The application provides an image matching relationship acquisition method that improves tolerance to mismatched feature point pairs and reduces the errors produced by uncertain factors in the positional-relationship computation, and also provides an image stitching method that can quickly and accurately stitch a second image onto a first image during the stitching process.
The technical scheme is as follows:
an image matching relationship acquisition method includes:
s1, extracting characteristic point pairs of two images with overlapping areas;
s2, constructing a matching relation model between the two images;
and S3, calculating the matching relationship of the two images based on the matching relationship model obtained in the S2 and different characteristic point pairs selected from the characteristic point pairs obtained in the S1.
Specifically, calculating the matching relationship between the two images includes: respectively calculating the image matching relationships corresponding to the different selected feature point pairs, counting the number of feature point pairs adapted to each image matching relationship, and identifying the optimal image matching relationship according to that number.
More specifically, identifying the optimal image matching relationship according to the number includes:
comparing the numbers of feature point pairs adapted to each image matching relationship with one another, and identifying the optimal image matching relationship according to the comparison result; or alternatively,
and comparing the number of the feature point pairs adapting to each image matching relation with a threshold value, and identifying the optimal image matching relation according to the comparison result.
Further, the numbers of feature point pairs adapted to each image matching relationship are compared with one another, and the image matching relationship with the largest number of adapted feature point pairs is taken as the optimal image matching relationship.
More specifically, the matching relation model between the two images is the homography matrix

H = [ r1  r2  tx ]
    [ r3  r4  ty ]
    [ 0   0   1  ]

wherein r1, r2, r3, r4 are model parameters representing rotation, and tx, ty are model parameters representing translation.
Further, respectively calculating the image matching relations corresponding to the selected different feature point pairs includes:
randomly selecting at least three pairs from the feature point pairs obtained in S1 to form a group of feature point pairs, constructing limiting conditions together with the matching relation model between the two images, and calculating the model parameters to obtain the matching relationship of the two images.
Still further, the limiting conditions also include constraints that ensure the orthogonality of the rotation parameters and bound the image rotation angle θ, wherein θ ∈ [3°, 5°].
Still further, counting the number of feature point pairs adapted to each image matching relationship comprises:
constructing an energy equation:

e_j = ‖ (x_j′, y_j′, 1)ᵀ − H · (x_j, y_j, 1)ᵀ ‖

wherein (x_j, y_j) and (x_j′, y_j′) are the coordinates of the two feature points of any feature point pair other than the selected feature point pairs, and H is the matching relationship of the two images;
substituting all feature point pairs other than the selected ones into the energy equation to obtain their energies, and recording the number of feature point pairs whose energy value is smaller than a set threshold, i.e., the number of point pairs adapted to the matching relationship.
The application also provides an image stitching method, which comprises the following steps:
(1) The image acquisition device acquires a plurality of images in a directional translation way;
(2) Calling any two overlapping images obtained in step (1) into the canvas, and identifying the current positions of the two images;
(3) Respectively identifying and obtaining the positions and the characteristics of the characteristic points in the two images, and matching the characteristic points of the two images according to the similarity of the characteristics of the characteristic points to obtain successfully matched characteristic point pairs;
(4) Acquiring the matching relationship of the two images by using the image matching relationship acquisition method;
(5) Splicing the two images according to the matching relation obtained in the step (4) and the current positions of the two images;
(6) Repeating the steps (2) - (5) until all the images are spliced.
Specifically, the image acquisition device is a mobile C-arm X-ray machine.
The beneficial effects are that: by selecting the optimal matching relationship through repeated iterative calculation, the application can eliminate outlier pairs among the feature point pairs, reduce their interference with the image matching relationship, improve tolerance to mismatched feature point pairs, and reduce errors produced by uncertain factors in the positional-relationship computation, so that automatic image stitching is completed quickly and accurately.
Drawings
FIG. 1 is a flow chart of an image matching relationship acquisition method of the present application;
fig. 2 is a flowchart of image stitching.
Detailed Description
The application is further elucidated below in connection with the drawings and the specific embodiments.
Referring to fig. 1, the present application provides a method for acquiring an image matching relationship, comprising the steps of:
s1, extracting feature point pairs { a, B } in an image A and an image B with an overlapping region, and identifying the positions of two feature points in each feature point pair.
Specifically, for any feature point pair { a i ,b i -identifying the corresponding feature point a in its image a i Coordinates (x) i ,y i ) Corresponding feature points B in the image B are identified i Coordinates (x) i ',y i '), wherein i is a positive integer, the number of the characteristic point pairs is represented, i is more than or equal to 0 and less than or equal to N, and N is the total number of the characteristic point pairs. Wherein the feature point a i 、b i Is a coordinate system of (c).
S2, establishing an image matching relation model;
specifically, the positions of the feature points are in a matrixExpressed as homography matrix +.>Is an image matching relation model, wherein r is as follows 1 、r 2 、r 3 、r 4 Respectively, are model parameters representing rotation correlation, t x And t y To represent translation-related model parameters;
in theory, the coordinates of two points in each characteristic point pair should satisfy:
s3, constructing a limiting condition of an image matching relation model according to the two point position matrixes in the characteristic point pairs in the S1 and the image matching relation model in the S2, and calculating model parameters according to the limiting condition to obtain an image matching relation.
Specifically, from the feature point pairs in S1, M pairs are randomly selected, wherein 3 ≤ M ≤ N; limiting conditions are constructed from the M pairs and the image matching relation model, and the model parameters are calculated according to the limiting conditions to obtain a corresponding image matching relationship H1.
In one exemplary embodiment, the limiting conditions for the image matching relation model are:

(x_i′, y_i′, 1)ᵀ = H · (x_i, y_i, 1)ᵀ,  i = 1, …, M

The above equation set (overdetermined when M > 3) is solved to determine the model parameters r1, r2, r3, r4, tx, ty, yielding the image matching relationship H1.
In the application, on the basis of the above limiting conditions, and in order to further overcome errors produced by uncertain factors in operations such as SVD-based solving, further constraints are imposed: one ensures the orthogonality of the rotation matrix, and another ensures that no large rotation of the image occurs. Considering that the machine translates in only two directions while the C-arm is pushed, the rotation amount is small, appearing in the image as between 3° and 5°; the rotation amount is therefore limited to within θ, wherein θ ∈ [3°, 5°]; preferably, θ = 3° is chosen in this embodiment.
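One plausible reading of the rotation bound, sketched under the assumption that the bound is applied to the angle recovered from the rotation parameters (the exact constraint formula appears as an equation image in the original, so this is an illustration, and the function name is hypothetical):

```python
import math

def rotation_within_bound(r1, r3, theta_deg=3.0):
    # Treat (r1, r3) as the first column of the 2x2 rotation part of H,
    # recover the rotation angle, and check it stays within theta degrees.
    angle = math.degrees(math.atan2(r3, r1))
    return abs(angle) <= theta_deg
```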
S4, judging whether the feature point pairs other than the M selected pairs are adapted to the image matching relationship.
The judgment is specifically as follows:
constructing an energy equation:

e_j = ‖ (x_j′, y_j′, 1)ᵀ − H · (x_j, y_j, 1)ᵀ ‖

wherein (x_j, y_j) and (x_j′, y_j′) are the coordinates of the two feature points of a feature point pair, and H is the image matching relationship;
substituting each feature point pair other than the M selected pairs into the energy equation to obtain its energy: if the obtained energy value is smaller than a set threshold E, the feature point pair is judged to be an adapted pair, suitable for the image matching relationship H1; if larger, it is judged to be a non-adapted pair, unsuitable for H1. The number SUM1 of adapted pairs is recorded. Further, the threshold E is preferably 0.1.
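The adaptation test of S4 can be sketched as follows (illustrative Python; `energy` and `count_adapted` are hypothetical names):

```python
import math

def energy(H, p, q):
    """Residual e_j = ||(x', y', 1)^T - H·(x, y, 1)^T|| for a candidate pair."""
    x, y = p
    xe = H[0][0] * x + H[0][1] * y + H[0][2]
    ye = H[1][0] * x + H[1][1] * y + H[1][2]
    return math.hypot(q[0] - xe, q[1] - ye)

def count_adapted(H, pairs, E=0.1):
    """Count feature point pairs whose energy falls below the threshold E."""
    return sum(1 for p, q in pairs if energy(H, p, q) < E)
```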
S5, repeating S3–S4 a set number of times P, wherein the value of P is influenced by M and N, P = f(M, N); at each repetition, M pairs of feature points differing from the previously selected M pairs are randomly selected, and the image matching relationship H_p and the corresponding number of adapted pairs SUM_p are obtained each time. In the application, 3 ≤ M ≤ N−1: when M = N, there is only one group of M feature point pairs and no different group exists; when M = N−1, there are N different groups of M pairs by combination; when M = N−2, there are N²−N different groups by combination; and so on. Moreover, 3 ≤ M is a basic condition for the limiting conditions of the image matching relation model to be solvable. In theory, the smaller M is, the more accurate the comparison result, but the more repetitions are needed and the larger the computation; as a compromise between accuracy and computation, N/2 ≤ M ≤ N−2 is preferred in this embodiment.
S6, from each image matching relationship H_p obtained in S5 and its corresponding number of adapted pairs SUM_p, selecting as the optimal image matching relationship the one whose number of adapted point pairs satisfies a set condition.
In one exemplary embodiment, the numbers of feature point pairs adapted to each image matching relationship are compared with one another, and the optimal image matching relationship is identified according to the comparison result; preferably, the image matching relationship with the largest number of adapted point pairs is selected as optimal.
In another exemplary embodiment, the number of adapted point pairs of each repetition is compared with a set quantity K: when the number of adapted pairs is smaller than K, repetition continues; when it is larger than K, repetition stops and the current matching relationship is selected as the optimal image matching relationship. Specifically, the set quantity may be chosen as K = 0.9N.
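The iterate-and-select procedure of S3–S6 can be sketched as one compact loop. For brevity this sketch fits a pure translation (tx, ty) instead of the full six-parameter model, but the sampling, adapted-pair counting, and early-stop logic mirror the steps above; all names are illustrative:

```python
import random

def fit_translation(sample):
    """Simplified stand-in for the six-parameter model: estimate a pure
    translation (tx, ty) as the mean offset of the sampled pairs."""
    tx = sum(q[0] - p[0] for p, q in sample) / len(sample)
    ty = sum(q[1] - p[1] for p, q in sample) / len(sample)
    return tx, ty

def best_match(pairs, M=3, trials=200, E=0.1, K=None, seed=0):
    """Repeatedly sample M pairs, fit a model, and count adapted pairs;
    keep the model with the highest count, stopping early once the count
    exceeds K (e.g. 0.9 * N, as in the second embodiment above)."""
    rng = random.Random(seed)
    if K is None:
        K = 0.9 * len(pairs)
    best, best_count = None, -1
    for _ in range(trials):
        sample = rng.sample(pairs, M)
        tx, ty = fit_translation(sample)
        count = sum(1 for p, q in pairs
                    if abs(q[0] - p[0] - tx) < E and abs(q[1] - p[1] - ty) < E)
        if count > best_count:
            best, best_count = (tx, ty), count
        if best_count > K:
            break
    return best, best_count
```

Outlier pairs cannot all lie near any single fitted translation, so samples drawn mostly from inliers dominate the count, which is the mechanism by which the outliers are eliminated.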
Through the above steps, outlier pairs among the feature point pairs can be eliminated while the feature-point matching relationship, i.e., the image matching relationship, is calculated, reducing the interference of outlier pairs with the image matching relationship. Even for a feature point pair set containing 80% noise, the acquired optimal image matching relationship remains accurate and effective.
The application also provides an image splicing method based on the image matching relation acquisition method, which comprises the following steps:
(1) The image acquisition device acquires a plurality of images and stores the images.
Specifically, the image acquisition device sequentially acquires a plurality of images in a directional translation manner, and an overlapping area is formed between two adjacent images. In one exemplary embodiment, the image acquisition device is a mobile C-arm X-ray machine and the images are images of different segments of the spine.
(2) And calling any two images with overlapping parts in the images to canvas, and acquiring the current positions of the two images.
In the application, the images are called by the image processing system.
Specifically, any image A1 (hereinafter, the first image) and an image A2 having an overlapping region with it (hereinafter, the second image) are called into the canvas. The coordinate system of the first image is taken as the canvas coordinate system: a canvas coordinate system is established with the upper-left vertex of the first image as the origin and the long and wide sides as the x and y axes, respectively, and the initial positions of the two images in the canvas coordinate system are acquired; the position of an image is expressed as a matrix, giving the initial position of the first image A1 and the initial position of the second image A2.
In one exemplary embodiment, the positions of the first and second images initially coincide, i.e., the long and wide sides of the second image coincide with the x and y axes of the canvas coordinate system. The coordinates of any point in the first and second images can thus be expressed in a unified coordinate system, which is convenient for computation.
The two called images are not limited to the first and second acquired images; they may be, for example, the third and fourth images, or even the third and fifth, as long as the images have overlapping parts. In the application, the two images are preferably adjacent images having an overlapping region.
(3) And respectively identifying characteristic points in the first image and the second image, and acquiring the positions and the characteristics of the characteristic points.
In one exemplary embodiment, feature points in both images are identified by SURF (Speeded Up Robust Features) algorithm or SIFT algorithm. The algorithm provides a feature vector for each feature point to represent its features.
(4) Matching the feature points in the first image with those in the second image according to the similarity of their features, and marking the successfully matched feature point pairs. Specifically, whether two feature points pair successfully is judged by comparing the similarity of their feature vectors against a similarity threshold.
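Step (4) can be sketched as nearest-neighbour matching on descriptor similarity (illustrative Python with toy two-dimensional descriptors; real SIFT/SURF descriptors are 128- or 64-dimensional, and the function names are hypothetical):

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_features(desc_a, desc_b, threshold=0.9):
    """Pair each descriptor in image A with its most similar descriptor in
    image B; keep the pair only if the similarity exceeds the threshold."""
    matches = []
    for i, da in enumerate(desc_a):
        sims = [cosine_sim(da, db) for db in desc_b]
        j = max(range(len(sims)), key=lambda k: sims[k])
        if sims[j] > threshold:
            matches.append((i, j))
    return matches
```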
(5) And acquiring the matching relation H of the second image and the first image according to any one of the image matching relation acquisition methods.
(6) Determining the target position A2′ of the second image in the canvas according to the matching relationship H between the second image and the first image and the current position of the second image in the canvas, specifically A2′ = A2 · H, and moving the second image A2 to the target position A2′, so that the second image is stitched onto the first image.
(7) Repeating the steps (2) - (6), and calling and processing two images different from the previous images each time until the target positions of all the images in the canvas are obtained and all the images are moved to the target positions.
Specifically, the image stitching process often involves more than two images. In that case, only the positional relationship between each pair of adjacent images needs to be calculated, and the positional relationship between the current image and the first image is obtained by chained multiplication: the position transformation matrix between the (k+1)-th image and the first image is T = H_1 · H_2 · … · H_k, where H_k is the image matching relationship between the (k+1)-th image and the k-th image. The target positions of all images are thus obtained, completing the stitching of all images.
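The chained position computation can be sketched directly (illustrative Python; `matmul3` and `chain` are hypothetical names):

```python
def matmul3(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def chain(H_list):
    """Position transform of image k+1 relative to the first image:
    T = H1 · H2 · ... · Hk."""
    T = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for H in H_list:
        T = matmul3(T, H)
    return T
```

Composing two translations (tx = 2, then tx = 3, ty = 1) yields the accumulated translation (tx = 5, ty = 1), as expected of the chain.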
In one exemplary embodiment, there are three images: the first image A1, the second image A2, and a third image A3, with the matching relationship between the second image and the first image denoted H_1. Specifically:
(7-1) acquiring the image matching relationship H_2 between the third image and the second image by the same method used to acquire the matching relationship H_1 between the second image and the first image;
(7-2) calculating the image matching relationship T between the third image and the first image from the matching relationship H_1 between the second and first images and the matching relationship H_2 between the third and second images, with the calculation formula T = H_1 · H_2;
(7-3) stitching the third image onto the already-stitched second image according to the image matching relationship T between the third image and the first image. Specifically, the target position of the third image in the canvas is calculated by the formula A3′ = A3 · T, and the third image is moved to that target position to complete the stitching.
In the image stitching method provided by the application, in an exemplary embodiment, the target positions of all the images can be acquired first, and then each image is subjected to shift stitching according to the target positions.
(8) Displaying or printing the spliced image.
The image matching relation acquisition method and the image splicing method applying the method can improve the fault tolerance of the characteristic point pair error pairing, reduce the error generated by uncertain factors in the position relation operation, and further rapidly and accurately finish the automatic image splicing in the image splicing process.
The preferred embodiments of the present application have been described in detail above, but the present application is not limited to the specific details of the above embodiments, and various equivalent changes (such as number, shape, position, etc.) may be made to the technical solution of the present application within the scope of the technical concept of the present application, and these equivalent changes all fall within the scope of the present application.

Claims (6)

1. An image matching relationship acquisition method is characterized by comprising the following steps:
s1, extracting characteristic point pairs of two images with overlapping areas;
s2, constructing a matching relation model between two images, wherein the matching relation model specifically comprises the following steps:
the homography matrix

H = [ r1  r2  tx ]
    [ r3  r4  ty ]
    [ 0   0   1  ]

wherein r1, r2, r3, r4 are model parameters representing rotation, and tx, ty are model parameters representing translation;
s3, calculating the matching relationship of the two images based on the matching relationship model obtained in the S2 and different characteristic point pairs selected from the characteristic point pairs obtained in the S1, wherein the matching relationship is specifically as follows:
respectively calculating the image matching relations corresponding to the selected different feature point pairs, including:
randomly selecting at least three pairs from the feature point pairs obtained in S1 to form a group of feature point pairs, constructing limiting conditions together with the matching relation model between the two images obtained in S2, and calculating the model parameters to obtain the matching relationship of the two images; wherein the limiting conditions further comprise constraints ensuring the orthogonality of the rotation parameters and bounding the image rotation angle θ, wherein θ ∈ [3°, 5°];
and counting the number of feature point pairs adapting to each image matching relation, and identifying the optimal image matching relation according to the number.
2. The image matching relationship acquisition method according to claim 1, wherein identifying an optimal image matching relationship according to the number comprises:
comparing the numbers of feature point pairs adapted to each image matching relationship with one another, and identifying the optimal image matching relationship according to the comparison result; or alternatively,
and comparing the number of the feature point pairs adapting to each image matching relation with a threshold value, and identifying the optimal image matching relation according to the comparison result.
3. The image matching relationship acquisition method according to claim 2, wherein the numbers of feature point pairs adapted to each image matching relationship are compared with one another, and the image matching relationship with the largest number of adapted feature point pairs is used as the optimal image matching relationship.
4. The image matching relationship acquisition method according to claim 1, wherein counting the number of feature point pairs adapted to each image matching relationship comprises:
constructing an energy equation:

e_j = ‖ (x_j′, y_j′, 1)ᵀ − H · (x_j, y_j, 1)ᵀ ‖

wherein (x_j, y_j) and (x_j′, y_j′) are the coordinates of the two feature points of any feature point pair other than the selected feature point pairs, and H is the matching relationship of the two images;
substituting all feature point pairs other than the selected ones into the energy equation to obtain their energies, and recording the number of feature point pairs whose energy value is smaller than a set threshold, i.e., the number of point pairs adapted to the matching relationship.
5. An image stitching method is characterized by comprising the following steps:
(1) The image acquisition device acquires a plurality of images in a directional translation way;
(2) Calling any two images with overlapping parts obtained in the step (1) to canvas, and identifying the current positions of the two images;
(3) Respectively identifying and obtaining the positions and the characteristics of the characteristic points in the two images, and matching the characteristic points of the two images according to the similarity of the characteristics of the characteristic points to obtain successfully matched characteristic point pairs;
(4) Acquiring the matching relationship of the two images by using the image matching relationship acquisition method according to any one of claims 1 to 4;
(5) Splicing the two images according to the matching relation obtained in the step (4) and the current positions of the two images;
(6) Repeating the steps (2) - (5) until all the images are spliced.
6. The image stitching method according to claim 5, wherein the image acquisition device is a mobile C-arm X-ray machine.
CN202311058975.5A 2023-08-22 2023-08-22 Image matching relation acquisition method and image stitching method thereof Active CN116757936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311058975.5A CN116757936B (en) 2023-08-22 2023-08-22 Image matching relation acquisition method and image stitching method thereof


Publications (2)

Publication Number | Publication Date
CN116757936A | 2023-09-15
CN116757936B | 2023-11-07

Family

ID=87955599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311058975.5A Active CN116757936B (en) 2023-08-22 2023-08-22 Image matching relation acquisition method and image stitching method thereof

Country Status (1)

Country Link
CN (1) CN116757936B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110020995A (en) * 2019-03-06 2019-07-16 沈阳理工大学 For the image split-joint method of complicated image
CN110533590A (en) * 2019-07-31 2019-12-03 华南理工大学 A kind of image split-joint method based on characteristic point
CN110660023A (en) * 2019-09-12 2020-01-07 中国测绘科学研究院 Video stitching method based on image semantic segmentation
WO2021057743A1 (en) * 2019-09-27 2021-04-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Map fusion method, apparatus, device and storage medium
CN113222878A (en) * 2021-06-04 2021-08-06 杭州海康威视数字技术股份有限公司 Image splicing method
CN113920177A (en) * 2021-10-11 2022-01-11 南京佗道医疗科技有限公司 Three-dimensional image iterative registration method
CN114170279A (en) * 2021-11-30 2022-03-11 哈尔滨工程大学 Point cloud registration method based on laser scanning
CN115205114A (en) * 2022-06-24 2022-10-18 长春理工大学 Improved high-resolution image stitching algorithm based on ORB (Oriented FAST and Rotated BRIEF) features


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Image block stitching method based on affine transformation; Zhang Pingmei et al.; Information Technology and Informatization, No. 01, pp. 61-65 *
Large-parallax image stitching algorithm with mesh deformation subdivision; Qi Xiangming et al.; Computer Engineering, Vol. 46, No. 01, pp. 236-242 *

Also Published As

Publication number Publication date
CN116757936A (en) 2023-09-15

Similar Documents

Publication Publication Date Title
US7257245B2 (en) Image position matching method and apparatus therefor
US7324660B2 (en) Image position matching apparatus and image processing apparatus
US6915003B2 (en) Method and apparatus for matching positions of images
EP0840253B1 (en) Methods and apparatus for digital subtraction angiography
CN106447602B (en) Image splicing method and device
JP2003265408A (en) Endoscope guide device and method
CN111584066B (en) Brain medical image diagnosis method based on convolutional neural network and symmetric information
CN112884792B (en) Lung image segmentation method and device, electronic equipment and storage medium
CN112802185A (en) Endoscope image three-dimensional reconstruction method and system facing minimally invasive surgery space perception
CN109171817A (en) Three-dimensional breast ultrasound scan method and ultrasonic scanning system
JP4274400B2 (en) Image registration method and apparatus
CN110752029B (en) Method and device for positioning focus
CN116757936B (en) Image matching relation acquisition method and image stitching method thereof
JP2003263498A (en) Method of forming different images of object to be examined
CN113870331A (en) Chest CT and X-ray real-time registration algorithm based on deep learning
CN112927274A (en) Dual-energy subtraction image registration method, device and equipment and readable storage medium
CN112308764A (en) Image registration method and device
CN114581340A (en) Image correction method and device
Morais et al. Dense motion field estimation from myocardial boundary displacements
JP2001291087A (en) Method and device for positioning image
CN116503453B (en) Image registration method, image registration device, computer-readable storage medium and electronic device
US10832422B2 (en) Alignment system for liver surgery
JP2008541859A (en) 3D-CT registration with guidance method based on 3D-2D pose estimation and application to raw bronchoscopy
KR20060007816A (en) Method for synthesizing image
CN115880469B (en) Registration method of surface point cloud data and three-dimensional image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant