CN117058056A - X-ray image bone splicing method - Google Patents

X-ray image bone splicing method Download PDF

Info

Publication number
CN117058056A
CN117058056A
Authority
CN
China
Prior art keywords
images
points
image
bone
ray image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310960111.6A
Other languages
Chinese (zh)
Inventor
张前军
池峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Accurad Healthcare Technology Co ltd
Original Assignee
Accurad Healthcare Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Accurad Healthcare Technology Co ltd filed Critical Accurad Healthcare Technology Co ltd
Priority to CN202310960111.6A priority Critical patent/CN117058056A/en
Publication of CN117058056A publication Critical patent/CN117058056A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses an X-ray image bone stitching method, which comprises the following steps: acquiring the X-ray images to be stitched; preprocessing the X-ray images and removing preset information from them; converting the processed X-ray images into binary images; extracting the bone contours in the binary images; segmenting the bone contours from muscle and other tissue to obtain bone contour lines; extracting feature points on the bone contour lines; extracting key points among the feature points; using the Euclidean distance between the feature vectors of key points as the similarity measure for key points in the two images to be stitched; removing mismatched points with an improved RANSAC algorithm to obtain a transformation matrix, registering the images with the transformation matrix to obtain a common coordinate system, and stitching the images to obtain the registered image; and judging whether the accuracy of the registered image meets the requirement and, if it does not, adjusting the position of the stitched images.

Description

X-ray image bone splicing method
Technical Field
The invention relates to the technical field of medical treatment, in particular to an X-ray image bone splicing method.
Background
X-ray equipment has a limited scanning FOV (field of view), so a long bone of the human body can only be covered completely by several separate scans. When making a diagnosis or planning an operation, a doctor therefore has to review the different images one by one to understand the condition of the whole bone, which is inconvenient.
Existing approaches to stitching such images are mainly based on grey-level theory and realize the stitch by verifying the grey-level correlation between images.
These methods can join different images, but they have difficulty coping with large image distortion, discontinuity between images, and large brightness variation across the stitch.
Disclosure of Invention
In view of the foregoing drawbacks or shortcomings of the prior art, it is desirable to provide a method for bone stitching of X-ray images.
The embodiment of the invention provides an X-ray image bone splicing method, which comprises the following steps:
S1: acquiring the X-ray images to be stitched;
S2: preprocessing the X-ray images and removing preset information from them;
S3: converting the processed X-ray images into binary images;
S4: extracting the bone contours in the binary images;
S5: segmenting the bone contours from muscle and other tissue to obtain bone contour lines;
S6: extracting feature points on the bone contour lines;
S7: extracting key points among the feature points;
S8: using the Euclidean distance between the feature vectors of key points as the similarity measure for key points in the two images to be stitched; when the distance between two key points from the two images is smaller than a threshold, judging them to be a pair of matching points, otherwise regarding them as mismatched points and removing them;
S9: removing the remaining false matches with an improved RANSAC algorithm to obtain a transformation matrix, registering the images with the transformation matrix to obtain a common coordinate system, and stitching the images to obtain the registered image;
S10: judging whether the accuracy of the registered image meets the requirement and, if it does not, adjusting the position of the stitched images.
In one embodiment, removing the preset information from the X-ray image comprises: removing image noise and artifacts from the X-ray image.
In one embodiment, the extracting bone contours in the binary image includes: bone contours are extracted from the binary images by edge detection and connectivity analysis algorithms.
In one embodiment, segmenting the bone contours from muscle and other tissue to obtain the bone contour lines comprises:
separating the extracted bone contours from other tissue by threshold segmentation and edge detection to obtain the bone contour lines.
In one embodiment, extracting feature points on the bone contour lines comprises: constructing a DOG scale space of the bone contour lines with the SIFT algorithm;
and detecting the extreme points of the DOG scale space, an extreme point being judged to be a feature point when it is the maximum or minimum value within the preset neighbourhood of its own DOG scale-space layer and the layers above and below it.
In one embodiment, extracting key points among the feature points comprises: establishing a spatial scale function from the DOG scale space, taking its derivative and setting it to zero to obtain the position of the key point;
removing key points of low contrast and unstable edge response points;
and accurately determining the position and scale of the key point by fitting a three-dimensional quadratic function.
In one embodiment, the improved RANSAC algorithm comprises: S901: randomly selecting four pairs of non-collinear feature point matches from the coarse matching result to form a set M;
S902: calculating a homography matrix H from the set M;
S903: verifying all matching pairs in the coarse matching result with the homography matrix H, and adding the point pairs whose error is smaller than a threshold to the set M;
S904: determining whether the number of point pairs in the set M has increased, and if so, returning to step S902; if the set M remains unchanged, executing the next step;
S905: if the number of point pairs in the set M is larger than that of the current optimal homography matrix, updating the current optimal homography matrix, otherwise not updating;
S906: updating the total number of iterations according to the number of inliers of the current optimal homography matrix; if the current number of iterations is smaller than the total number of iterations, returning to step S901, otherwise taking the current optimal homography matrix as the final result.
In one embodiment, in step S9, stitching the images to obtain the registered image comprises:
mapping the registered coordinate system onto the image coordinates for stitching, so as to obtain a complete bone image.
In one embodiment, judging whether the accuracy of the registered image meets the requirement and, if it does not, adjusting the position of the stitched images comprises:
displaying the stitched images separately, with the upper-layer image shown at 50%-55% transparency;
and observing the alignment accuracy between the upper-layer and lower-layer images, and manually adjusting the position of the stitched images if the accuracy does not meet the requirement.
The beneficial effects of the invention include:
according to the X-ray image bone stitching method provided by the invention, the skeleton outline and the characteristic points are extracted by processing the images to be stitched, so that the matching complexity is reduced; the error matching is removed by improving the RANSAC algorithm, so that the accuracy of the feature point matching is further improved; the transformation relation of the images is solved by calculating the characteristic point relation of the images, and then image stitching is realized according to the transformation relation, so that the stitching of discontinuous images is effectively solved. The method is suitable for imaging, orthopedics and radiology in the medical field, can quickly and accurately splice a complete bone image, and brings great convenience to doctors.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
fig. 1 is a schematic flow chart of an X-ray image bone stitching method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another X-ray image bone stitching method according to an embodiment of the present invention;
fig. 3 shows a schematic diagram of searching extreme points in a scale space according to an embodiment of the present invention;
fig. 4 is a schematic diagram showing a SIFT feature point extraction result provided by an embodiment of the present invention;
FIG. 5 is a schematic flow chart of an improved RANSAC algorithm according to an embodiment of the invention;
fig. 6 shows a schematic diagram of a matching splicing result provided by the embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to the appended drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be embodied in many other forms than described herein and similarly modified by those skilled in the art without departing from the spirit of the invention, whereby the invention is not limited to the specific embodiments disclosed below.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", "circumferential", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element being referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly, through intermediaries, or both, may be in communication with each other or in interaction with each other, unless expressly defined otherwise. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature "up" or "down" a second feature may be the first and second features in direct contact, or the first and second features in indirect contact via an intervening medium. Moreover, a first feature being "above," "over" and "on" a second feature may be a first feature being directly above or obliquely above the second feature, or simply indicating that the first feature is level higher than the second feature. The first feature being "under", "below" and "beneath" the second feature may be the first feature being directly under or obliquely below the second feature, or simply indicating that the first feature is less level than the second feature.
It will be understood that when an element is referred to as being "fixed" or "disposed" on another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "upper," "lower," "left," "right," and the like are used herein for illustrative purposes only and are not meant to be the only embodiment.
Referring to fig. 1, fig. 1 shows an X-ray image bone stitching method according to an embodiment of the present invention, where the method includes:
S1: acquiring the X-ray images to be stitched;
S2: preprocessing the X-ray images and removing preset information from them;
S3: converting the processed X-ray images into binary images;
S4: extracting the bone contours in the binary images;
S5: segmenting the bone contours from muscle and other tissue to obtain bone contour lines;
S6: extracting feature points on the bone contour lines;
S7: extracting key points among the feature points;
S8: using the Euclidean distance between the feature vectors of key points as the similarity measure for key points in the two images to be stitched; when the distance between two key points from the two images is smaller than a threshold, judging them to be a pair of matching points, otherwise regarding them as mismatched points and removing them;
S9: removing the remaining false matches with an improved RANSAC algorithm to obtain a transformation matrix, registering the images with the transformation matrix to obtain a common coordinate system, and stitching the images to obtain the registered image;
S10: judging whether the accuracy of the registered image meets the requirement and, if it does not, adjusting the position of the stitched images.
Specifically, as shown in fig. 1 and in combination with fig. 2, in step S1, the X-ray images to be stitched are acquired as X-ray plain-film image data in DICOM format, for example a left image and a right image. DICOM (Digital Imaging and Communications in Medicine) is the international standard for medical images and related information; it defines a medical image format of a quality suitable for clinical data exchange.
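By way of illustration only, a minimal Python sketch of step S1 is given below, assuming the pydicom library is available; the file names and the rescale handling are assumptions, since the patent only specifies that DICOM data are acquired.

```python
# Hypothetical sketch of step S1: load two overlapping DICOM exposures.
import numpy as np
import pydicom

def load_xray(path: str) -> np.ndarray:
    """Read a DICOM file and return its pixel data as a float32 array."""
    ds = pydicom.dcmread(path)
    img = ds.pixel_array.astype(np.float32)
    # Apply the rescale slope/intercept if the file provides them.
    slope = float(getattr(ds, "RescaleSlope", 1.0))
    intercept = float(getattr(ds, "RescaleIntercept", 0.0))
    return img * slope + intercept

upper = load_xray("upper.dcm")   # first exposure to be stitched (file name assumed)
lower = load_xray("lower.dcm")   # second, overlapping exposure (file name assumed)
```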
In step S2, the X-ray image is preprocessed to remove the preset information, i.e. the acquired plain-film image data are processed to remove image noise and artifacts. Since techniques for removing image noise and artifacts from X-ray plain films are prior art, they are not described further here.
In step S3, the processed X-ray image is converted into a binary image, i.e., each pixel of the X-ray image is converted to black or white.
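A minimal sketch of steps S2-S3 under the assumption that OpenCV is used: the median filter, its kernel size, and Otsu thresholding are illustrative choices, not parameters prescribed by the patent.

```python
# Illustrative preprocessing and binarization (steps S2-S3).
import cv2
import numpy as np

def preprocess_and_binarize(img: np.ndarray) -> np.ndarray:
    # Normalize the (possibly 16-bit) X-ray data to 8-bit grey levels.
    grey = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Median filtering suppresses impulse noise; device-specific artifact
    # removal is omitted here.
    denoised = cv2.medianBlur(grey, 5)
    # Otsu's method picks a global threshold; every pixel becomes 0 or 255.
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```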
In step S4, the bone contours in the binary image are extracted, i.e. the bone contours are extracted from the binary image by edge detection and a connectivity analysis algorithm. Edge detection is a fundamental problem in image processing and computer vision; its purpose is to identify the points in a digital image where the brightness changes significantly.
In step S5, the bone contour is segmented from muscle and other tissue to obtain a bone contour line, including separating the extracted bone contour from other tissue using threshold segmentation and edge detection to obtain a bone contour line.
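The following sketch shows one possible realization of steps S4-S5 with edge detection and connectivity analysis in OpenCV; the Canny thresholds and the minimum contour length are assumed tuning parameters, and the threshold segmentation of step S5 is taken to have been applied already during binarization.

```python
# Illustrative bone-contour extraction (steps S4-S5).
import cv2
import numpy as np

def extract_bone_contours(binary: np.ndarray, min_length: int = 200):
    edges = cv2.Canny(binary, 50, 150)
    # Connectivity analysis: keep contours long enough to be bone boundaries,
    # discarding short fragments produced by muscle and soft-tissue texture.
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    bone_contours = [c for c in contours if len(c) >= min_length]
    # Draw the retained contours into a mask that later steps can use.
    contour_img = np.zeros_like(binary)
    cv2.drawContours(contour_img, bone_contours, -1, 255, 1)
    return bone_contours, contour_img
```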
In step S6, feature points are extracted on the bone contour line; the feature point extraction result is shown in fig. 4. The extraction comprises constructing a DOG scale space of the bone contour line with the SIFT algorithm, detecting the extreme points of the DOG scale space, and judging an extreme point to be a feature point when it is the maximum or minimum value within the preset neighbourhood of its own DOG scale-space layer and the layers above and below it.
Specifically, (1) generation of a scale space
The scale space theory aims at simulating the multi-scale characteristics of an image. The Gaussian convolution kernel is the only linear kernel that realizes scale transformation, and the scale space of a two-dimensional image is defined as: L(x,y,σ) = G(x,y,σ) * I(x,y), where G(x,y,σ) = (1/(2πσ²))·exp(−(x²+y²)/(2σ²)) is the variable-scale Gaussian function, (x,y) are the spatial coordinates, and σ is the scale coordinate, which determines the degree of smoothing of the image.
A gaussian differential scale-space (DOG scale-space) is constructed as follows:
D(x,y,σ)=(G(x,y,kσ)-G(x,y,σ))*I(x,y)=L(x,y,kσ)-L(x,y,σ)
(2) Detecting extreme points of scale space
To find the extreme points of the scale space, each sample point is compared with its neighbouring points to check whether it is larger or smaller than all of its neighbours in both the image domain and the scale domain. As shown in FIG. 3, the detection point in the middle is compared with 26 points: its 8 neighbours at the same scale and the 9×2 points at the corresponding positions of the adjacent upper and lower scales, so that extreme points are detected both in scale space and in the two-dimensional image space. If a point is the maximum or minimum value within this 26-point neighbourhood of its DOG scale-space layer and the layers above and below, it is considered a feature point of the image at that scale.
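A simplified sketch of the DOG extremum test described above, limited to a single octave and without sub-pixel refinement; the base scale of 1.6 and the number of levels are conventional SIFT settings assumed here, not values taken from the patent.

```python
# Illustrative DOG pyramid and 26-neighbourhood extremum test.
import cv2
import numpy as np

def dog_pyramid(img: np.ndarray, sigma0: float = 1.6, k: float = 2 ** 0.5,
                levels: int = 5):
    """Build one octave of Gaussian-blurred images and their differences."""
    gauss = [cv2.GaussianBlur(img.astype(np.float32), (0, 0), sigma0 * k ** i)
             for i in range(levels)]
    return [gauss[i + 1] - gauss[i] for i in range(levels - 1)]

def is_extremum(dog, layer: int, y: int, x: int) -> bool:
    """True if (y, x) is the max or min of its 26-point neighbourhood
    (8 neighbours in its own layer plus 9 in each adjacent layer)."""
    cube = np.stack([dog[layer - 1][y - 1:y + 2, x - 1:x + 2],
                     dog[layer][y - 1:y + 2, x - 1:x + 2],
                     dog[layer + 1][y - 1:y + 2, x - 1:x + 2]])
    centre = dog[layer][y, x]
    return centre == cube.max() or centre == cube.min()
```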
For example, a 16×16 neighbourhood centred on the feature point is taken as the sampling window, the relative directions between the sampling points and the feature point are accumulated, after Gaussian weighting, into an orientation histogram with 8 bins, and a 128-dimensional feature descriptor is obtained. The algorithm proceeds as follows.
Determining the image region required for computing the descriptor
The descriptor's gradient orientation histogram is generated from the blurred image at the scale where the keypoint lies, and the radius of the image region is computed from the keypoint scale σ (in the standard SIFT formulation, radius = 3σ·√2·(d+1)/2, with d = 4 sub-regions per side).
(1) Rotate the coordinate axes to the principal direction θ of the key point to ensure rotation invariance.
The coordinates after rotation are expressed as: x' = x·cosθ − y·sinθ, y' = x·sinθ + y·cosθ.
(2) Compute the gradient magnitude and direction for each pixel in the image region of the above radius, multiply each gradient magnitude by a Gaussian weight, and generate the orientation map. The gradient magnitude and direction are: m(x,y) = √((L(x+1,y) − L(x−1,y))² + (L(x,y+1) − L(x,y−1))²) and θ(x,y) = arctan((L(x,y+1) − L(x,y−1)) / (L(x+1,y) − L(x−1,y))).
(3) Calculate 8-direction histograms in sub-regions with a window width of 2×2, accumulate the gradient direction values to form a seed point, and compute them in sequence to generate 16 seed points.
(4) Threshold the elements of the descriptor vector and normalize the thresholded descriptor vector.
Descriptor vector normalization: let W = (w1, w2, …, w128) be the 128-dimensional descriptor vector and L = (l1, l2, …, l128) the normalized vector, where li = wi / √(w1² + w2² + … + w128²), i = 1, 2, …, 128. A code sketch of this step follows below.
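A minimal sketch of step (4), the descriptor thresholding and normalization; the clipping value of 0.2 is the value commonly used in SIFT implementations and is an assumption here, as the patent does not state it.

```python
# Illustrative descriptor thresholding and normalization.
import numpy as np

def normalize_descriptor(w: np.ndarray, clip: float = 0.2) -> np.ndarray:
    """w is the raw 128-dimensional descriptor W = (w1, ..., w128)."""
    l = w / max(np.linalg.norm(w), 1e-12)       # normalize to unit length
    l = np.minimum(l, clip)                     # threshold large elements
    return l / max(np.linalg.norm(l), 1e-12)    # renormalize -> L = (l1, ..., l128)
```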
In step S7, extracting key points among the feature points comprises: establishing a spatial scale function from the DOG scale space, taking its derivative and setting it to zero to obtain the position of the key point; removing key points of low contrast and unstable edge response points; and accurately determining the position and scale of the key point by fitting a three-dimensional quadratic function.
Exemplary:
a. Spatial scale function: the DOG function is expanded to second order at the candidate point,
D(X) = D + (∂D/∂X)ᵀ·X + (1/2)·Xᵀ·(∂²D/∂X²)·X, with X = (x, y, σ)ᵀ   (1)
Taking the derivative and setting it to 0 gives the accurate position:
X̂ = −(∂²D/∂X²)⁻¹·(∂D/∂X)   (2)
b. Among the detected feature points, the low-contrast feature points and the unstable edge response points are removed. Substituting (2) into (1) and keeping the first two terms gives D(X̂) = D + (1/2)·(∂D/∂X)ᵀ·X̂.
If |D(X̂)| is not smaller than the contrast threshold (0.03 in the usual SIFT setting), this feature point is preserved, otherwise it is discarded.
c. Removal of the edge response: the principal curvatures are obtained from a 2×2 Hessian matrix H = [Dxx, Dxy; Dxy, Dyy]. With Tr(H) = Dxx + Dyy and Det(H) = Dxx·Dyy − Dxy², the point is kept only if Tr(H)²/Det(H) < (r+1)²/r, where r is typically set to 10.
The derivatives are estimated by differences of neighbouring sample points, and finally the position and scale of the key point are accurately determined by fitting a three-dimensional quadratic function.
In step S9, the false matching points are removed with an improved RANSAC algorithm to obtain a transformation matrix, the images are registered with the transformation matrix to obtain a common coordinate system, and the images are stitched to obtain the registered image. Referring to fig. 5, the method comprises: randomly selecting four pairs of non-collinear feature point matches from the coarse matching result to form a set M;
calculating a homography matrix H from the set M;
verifying all matching pairs in the coarse matching result with the homography matrix H, and adding the point pairs whose error is smaller than a threshold to the set M;
determining whether the number of point pairs in the set M has increased, and if so, returning to step S902; if the set M remains unchanged, executing the next step;
if the number of point pairs in the set M is larger than that of the current optimal homography matrix, updating the current optimal homography matrix, otherwise not updating;
and updating the total number of iterations according to the number of inliers of the current optimal homography matrix; if the current number of iterations is smaller than the total number of iterations, returning to step S901, otherwise taking the current optimal homography matrix as the final result. A code sketch of this loop is given below.
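A rough sketch of the S901-S906 loop, assuming the coarse matches are given as two N×2 point arrays; the inlier test used here (both coordinate differences within one pixel) is only a stand-in consistent with the threshold discussion that follows, and cv2.findHomography with method 0 plays the role of the least-squares homography estimate.

```python
# Illustrative improved-RANSAC loop (steps S901-S906).
import cv2
import numpy as np

def improved_ransac(src, dst, max_iters: int = 2000):
    """src, dst: N x 2 arrays of coarsely matched points. Returns (H, inlier count)."""
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    best_H, best_count, it, total = None, 0, 0, max_iters
    rng = np.random.default_rng(0)
    while it < total:
        it += 1
        M = rng.choice(len(src), 4, replace=False)                 # S901
        H = None
        while True:
            H_new, _ = cv2.findHomography(src[M], dst[M], 0)       # S902
            if H_new is None:
                break
            H = H_new
            proj = cv2.perspectiveTransform(
                src.reshape(-1, 1, 2), H).reshape(-1, 2)           # S903: verify all pairs
            d = np.abs(proj - dst)
            inliers = np.flatnonzero((d[:, 0] <= 1.0) & (d[:, 1] <= 1.0))
            if len(inliers) <= len(M):                             # S904: M stable
                break
            M = inliers                                            # M grew: recompute H
        if H is not None and len(M) > best_count:                  # S905
            best_H, best_count = H, len(M)
            # S906: shrink the remaining iteration budget from the inlier ratio.
            w = best_count / len(src)
            total = min(total, int(np.ceil(np.log(1e-3) /
                                           np.log(max(1.0 - w ** 4, 1e-12)))) + 1)
    return best_H, best_count
```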
In order to improve the accuracy of the homography matrix, a suitable threshold must be selected. The feature point positions obtained by SIFT detection are approximations of the true pixel positions rather than exact values, and a certain error is allowed; as known from the SIFT algorithm, the difference between the approximate and exact values is within one pixel, so the threshold can be set to 1, and the original inlier test is changed from a Euclidean-distance judgment to an area judgment, namely:
dx² + dy² ≤ 1
the modification is as follows:
wherein: dx is the horizontal coordinate difference between the actual value and the calculated value, and dy is the vertical coordinate difference between the actual value and the calculated value.
Finally, the optimal matching points and the corresponding transformation matrix H are obtained. For adjacent images, the two images are registered with the obtained transformation matrix H, and bilinear interpolation is used to bring the two images to the same position in a common coordinate system.
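One way the registration could be sketched, assuming single-channel (grey-scale) images and a roughly vertical stitch; cv2.warpPerspective with the INTER_LINEAR flag performs the bilinear interpolation mentioned above, while the canvas size and the simple composition rule are assumptions.

```python
# Illustrative registration of a neighbouring image into the reference frame.
import cv2
import numpy as np

def register_pair(ref: np.ndarray, moving: np.ndarray, H: np.ndarray):
    h, w = ref.shape[:2]
    canvas = (w, h * 2)              # assumed canvas: room for a vertical stitch
    # Bilinear interpolation resamples the moving image in the common frame.
    warped = cv2.warpPerspective(moving, H, canvas, flags=cv2.INTER_LINEAR)
    base = np.zeros((canvas[1], canvas[0]), dtype=ref.dtype)
    base[:h, :w] = ref
    # Simple composition: take the warped pixel wherever the base is empty.
    stitched = np.where(base > 0, base, warped)
    return stitched, base, warped
```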
In steps S9 and S10, the registered coordinate system is mapped onto the image coordinates and the images are stitched to obtain a complete bone image, as shown in fig. 6; it is then judged whether the accuracy of the registered image meets the requirement, and if it does not, the position of the stitched images is adjusted. The stitched images are displayed separately, with the upper-layer image shown at 50% transparency, so that the registration accuracy can be observed; if the stitching accuracy is not satisfactory, manual fine adjustment is performed until it is.
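A sketch of the transparency check and manual adjustment of step S10; the 50% blending weight follows the text above, while the helper that nudges the upper layer by a hypothetical (dx, dy) offset is purely illustrative.

```python
# Illustrative 50%-transparency overlay and manual fine adjustment.
import cv2
import numpy as np

def overlay_for_inspection(base: np.ndarray, warped: np.ndarray,
                           alpha: float = 0.5) -> np.ndarray:
    """Blend the two registered layers; alpha = 0.5 gives 50% transparency."""
    return cv2.addWeighted(warped, alpha, base, 1.0 - alpha, 0)

def manual_shift(warped: np.ndarray, dx: int, dy: int) -> np.ndarray:
    """Nudge the upper layer by (dx, dy) pixels if the overlay shows misalignment."""
    T = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(warped, T, (warped.shape[1], warped.shape[0]))
```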
By adopting the above technical solution, the bone contours and feature points are extracted by processing the images to be stitched, which reduces the matching complexity; false matches are removed by the improved RANSAC algorithm, which further improves the accuracy of feature point matching; and the transformation relation between the images is obtained from the relation between their feature points, after which image stitching is carried out according to this transformation relation, effectively solving the stitching of discontinuous images. The method is suitable for medical imaging, orthopaedics and radiology, can quickly and accurately stitch a complete bone image, and brings great convenience to doctors.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (9)

1. An X-ray image bone stitching method, which is characterized by comprising the following steps:
S1: acquiring the X-ray images to be stitched;
S2: preprocessing the X-ray images and removing preset information from them;
S3: converting the processed X-ray images into binary images;
S4: extracting the bone contours in the binary images;
S5: segmenting the bone contours from muscle and other tissue to obtain bone contour lines;
S6: extracting feature points on the bone contour lines;
S7: extracting key points among the feature points;
S8: using the Euclidean distance between the feature vectors of key points as the similarity measure for key points in the two images to be stitched; when the distance between two key points from the two images is smaller than a threshold, judging them to be a pair of matching points, otherwise regarding them as mismatched points and removing them;
S9: removing the remaining false matches with an improved RANSAC algorithm to obtain a transformation matrix, registering the images with the transformation matrix to obtain a common coordinate system, and stitching the images to obtain the registered image;
S10: judging whether the accuracy of the registered image meets the requirement and, if it does not, adjusting the position of the stitched images.
2. The method for bone stitching according to claim 1, wherein removing the preset information from the X-ray image comprises: removing image noise and artifacts from the X-ray image.
3. The X-ray image bone stitching method according to claim 1, wherein the extracting bone contours in the binary image comprises: bone contours are extracted from the binary images by edge detection and connectivity analysis algorithms.
4. The X-ray image bone stitching method according to claim 1, wherein segmenting the bone contours from muscle and other tissue to obtain bone contour lines comprises:
separating the extracted bone contours from other tissue by threshold segmentation and edge detection to obtain the bone contour lines.
5. The X-ray image bone stitching method according to claim 1, wherein extracting feature points on the bone contour lines comprises:
constructing a DOG scale space of the bone contour lines with the SIFT algorithm;
and detecting the extreme points of the DOG scale space, an extreme point being judged to be a feature point when it is the maximum or minimum value within the preset neighbourhood of its own DOG scale-space layer and the layers above and below it.
6. The method for bone stitching according to claim 5, wherein extracting key points among the feature points comprises:
establishing a spatial scale function from the DOG scale space, taking its derivative and setting it to zero to obtain the position of the key point;
removing key points of low contrast and unstable edge response points;
and accurately determining the position and scale of the key point by fitting a three-dimensional quadratic function.
7. The X-ray image bone stitching method according to claim 1, wherein the improved RANSAC algorithm comprises:
S901: randomly selecting four pairs of non-collinear feature point matches from the coarse matching result to form a set M;
S902: calculating a homography matrix H from the set M;
S903: verifying all matching pairs in the coarse matching result with the homography matrix H, and adding the point pairs whose error is smaller than a threshold to the set M;
S904: determining whether the number of point pairs in the set M has increased, and if so, returning to step S902; if the set M remains unchanged, executing the next step;
S905: if the number of point pairs in the set M is larger than that of the current optimal homography matrix, updating the current optimal homography matrix, otherwise not updating;
S906: updating the total number of iterations according to the number of inliers of the current optimal homography matrix; if the current number of iterations is smaller than the total number of iterations, returning to step S901, otherwise taking the current optimal homography matrix as the final result.
8. The method according to claim 1, wherein in step S9, stitching the images to obtain the registered image comprises:
mapping the registered coordinate system onto the image coordinates for stitching, so as to obtain a complete bone image.
9. The method for bone stitching according to claim 1, wherein judging whether the accuracy of the registered image meets the requirement and, if it does not, adjusting the position of the stitched images comprises:
displaying the stitched images separately, with the upper-layer image shown at 50%-55% transparency;
and observing the alignment accuracy between the upper-layer and lower-layer images, and manually adjusting the position of the stitched images if the accuracy does not meet the requirement.
CN202310960111.6A 2023-08-01 2023-08-01 X-ray image bone splicing method Pending CN117058056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310960111.6A CN117058056A (en) 2023-08-01 2023-08-01 X-ray image bone splicing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310960111.6A CN117058056A (en) 2023-08-01 2023-08-01 X-ray image bone splicing method

Publications (1)

Publication Number Publication Date
CN117058056A true CN117058056A (en) 2023-11-14

Family

ID=88667057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310960111.6A Pending CN117058056A (en) 2023-08-01 2023-08-01 X-ray image bone splicing method

Country Status (1)

Country Link
CN (1) CN117058056A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination