CN110473143B - Three-dimensional MRA medical image stitching method and device and electronic equipment - Google Patents

Three-dimensional MRA medical image stitching method and device and electronic equipment

Info

Publication number
CN110473143B
CN110473143B CN201910666640.9A CN201910666640A
Authority
CN
China
Prior art keywords
image
region
overlapping region
overlapping
fused
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910666640.9A
Other languages
Chinese (zh)
Other versions
CN110473143A (en)
Inventor
李宝林 (Li Baolin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910666640.9A priority Critical patent/CN110473143B/en
Priority to PCT/CN2019/118065 priority patent/WO2021012520A1/en
Publication of CN110473143A publication Critical patent/CN110473143A/en
Application granted granted Critical
Publication of CN110473143B publication Critical patent/CN110473143B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Abstract

The invention relates to a three-dimensional MRA medical image stitching method and device and electronic equipment. The method comprises the following steps: performing Laplacian denoising, enhancement and irregular smoothing on two adjacent three-dimensional MRA medical images to be stitched, received in real time, to obtain a first image and a second image; performing overlapping layer detection on the first image and the second image, and determining a first overlapping region in the first image and a second overlapping region in the second image; and fusing and stitching the first overlapping region and the second overlapping region by a weighted average method to obtain a fused and stitched third image. In the embodiments of the invention, overlapping layer detection is performed on the two preprocessed adjacent three-dimensional MRA medical images to be stitched to determine their overlapping regions, and the two overlapping regions are then fused and stitched, which improves the efficiency of determining the overlapping regions in the images and therefore the efficiency of image stitching.

Description

Three-dimensional MRA medical image stitching method and device and electronic equipment
Technical Field
The invention relates to the technical field of medical image processing, in particular to a three-dimensional MRA medical image stitching method and device and electronic equipment.
Background
Magnetic resonance angiography (Magnetic Resonance Angiography, MRA) is an examination method that uses electromagnetic waves to generate three-dimensional medical images describing human anatomy. With it, panoramic three-dimensional medical images can be obtained, which better helps doctors make a comprehensive and intuitive assessment of a patient's condition.
However, because of the limitations of MRA equipment, a panoramic three-dimensional medical image cannot be obtained in a single scan; it is generally obtained by imaging in segments and then stitching every two adjacent segments. Three-dimensional medical image stitching therefore has wide application in medical image research, and has even become a new tool in life insurance risk control in the financial industry: for example, the stitched panoramic three-dimensional medical image of an applicant can be analyzed intelligently to help review the applicant's insurance application.
In the prior art, three-dimensional medical image stitching usually requires a doctor to manually and visually align the images to be stitched, through operations such as translation, in order to determine their overlapping regions and stitch them. Determining the image overlapping regions manually in this way is inefficient, not very accurate, and time-consuming, so the image stitching efficiency is too low.
Disclosure of Invention
In order to solve the problem that image stitching efficiency is too low in the related art, the invention provides a three-dimensional MRA medical image stitching method and device and electronic equipment.
The embodiment of the invention discloses a three-dimensional MRA medical image stitching method, which comprises the following steps:
receiving two adjacent three-dimensional MRA medical images to be spliced, which are sent by scanning equipment, in real time;
carrying out Laplace denoising, enhancement and irregular smoothing on the two adjacent three-dimensional MRA medical images to be spliced to obtain a first image and a second image;
performing overlapping layer detection on the first image and the second image respectively to determine a first overlapping region in the first image and a second overlapping region in the second image;
and carrying out fusion splicing treatment on the first overlapping region and the second overlapping region by using a weighted average method to obtain a third image after fusion splicing.
In an optional implementation manner, in a first aspect of the embodiment of the present invention, the fusing and stitching processing is performed on the first overlapping area and the second overlapping area by using a weighted average method, and after obtaining a third image after fusing and stitching, the method further includes:
and performing smoothing filtering processing on the third image by using a low-pass filter to obtain a target smooth image.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before the performing overlapping layer detection on the first image and the second image to determine a first overlapping area in the first image and a second overlapping area in the second image, the method further includes:
comparing the maximum intensity projection imaging of each of the first image and the second image to determine a first coincident segment in the first image and a second coincident segment in the second image;
and performing overlapping layer detection on the first image and the second image respectively to determine a first overlapping region in the first image and a second overlapping region in the second image, including:
and performing overlapping layer detection on the first overlapping segment and the second overlapping segment to determine a first overlapping region in the first image and a second overlapping region in the second image.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the performing overlapping layer detection on the first overlapping segment and the second overlapping segment to determine a first overlapping area in the first image and a second overlapping area in the second image includes:
dividing the first coincident segment into a plurality of first areas to be detected, and dividing the second coincident segment into a plurality of second areas to be detected, wherein the first areas to be detected and the second areas to be detected are in one-to-one correspondence;
sequentially judging whether the difference value of the number of the overlapped layers of each first region to be detected and the corresponding second region to be detected is smaller than a preset value;
and if the difference value is smaller than the preset value, taking the first region to be detected as a component part of a first overlapping region in the first image, and taking the corresponding second region to be detected as a component part of a second overlapping region in the second image so as to determine the first overlapping region and the second overlapping region.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the first area to be detected is used as a component of a first overlapping area in the first image, the corresponding second area to be detected is used as a component of a second overlapping area in the second image, so as to determine the first overlapping area and the second overlapping area, and before the fusion splicing processing is performed on the first overlapping area and the second overlapping area by using a weighted average method, the method further includes:
determining sampling points according to the image position information of the first image and the second image; wherein the image position information is used to describe the positions of the first image and the second image corresponding to a human anatomy coordinate system;
obtaining a registration transformation matrix according to human anatomy coordinates of the sampling points, first image coordinates of the sampling points in the first image and second image coordinates of the sampling points in the second image;
and registering the first overlapping region and the second overlapping region according to the registration transformation matrix.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, registering the first overlapping area and the second overlapping area according to the registration transformation matrix includes:
registering points with the same human anatomy coordinates in the second overlapping region to the same position of the first overlapping region through the registration transformation matrix by taking the first overlapping region as a reference region; or,
and taking the second overlapping region as a reference region, and registering points with the same human anatomical coordinates in the first overlapping region to the same position of the second overlapping region through the registration transformation matrix.
In an optional implementation manner, in a first aspect of the embodiment of the present invention, the performing, by using a weighted average method, fusion stitching processing on the first overlapping area and the second overlapping area to obtain a third image after fusion stitching includes:
dividing the first overlapping region into a plurality of first to-be-fused column regions, and dividing the second overlapping region into a plurality of second to-be-fused column regions, wherein the first to-be-fused column regions and the second to-be-fused column regions are in one-to-one correspondence;
sequentially obtaining a first preset weight coefficient of each first to-be-fused column region according to the sequence from the small distance to the large distance of each first to-be-fused column region and the second overlapping region, wherein the first preset weight coefficient becomes smaller as the distance between the corresponding first to-be-fused column region and the second overlapping region becomes larger;
obtaining a second preset weight coefficient of the second column region to be fused corresponding to the first column region to be fused according to the first preset weight coefficient, wherein the sum of the first preset weight coefficient and the second preset weight coefficient is equal to one;
and according to the first preset weight coefficient and the second preset weight coefficient, carrying out pixel value addition calculation on each first to-be-fused column region and the corresponding second to-be-fused column region to obtain fused pixel values so as to obtain a fused and spliced third image.
The second aspect of the embodiment of the invention discloses a three-dimensional MRA medical image stitching device, which comprises:
the receiving unit is used for receiving two adjacent three-dimensional MRA medical images to be spliced, which are sent by the scanning equipment, in real time;
the denoising unit is used for carrying out Laplace denoising, enhancement and irregular smoothing on the two adjacent three-dimensional MRA medical images to be spliced to obtain a first image and a second image;
the detection unit is used for respectively carrying out overlapping layer detection on the first image and the second image so as to determine a first overlapping region in the first image and a second overlapping region in the second image;
and the splicing unit is used for carrying out fusion splicing treatment on the first overlapping region and the second overlapping region by using a weighted average method to obtain a fused and spliced third image.
A third aspect of the embodiment of the present invention discloses an electronic device, including:
a processor;
and the memory is stored with computer readable instructions which, when executed by the processor, implement the three-dimensional MRA medical image stitching method disclosed in the first aspect of the embodiment of the invention.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program, which causes a computer to execute the three-dimensional MRA medical image stitching method disclosed in the first aspect of the embodiments of the present invention.
The technical scheme provided by the embodiment of the invention can comprise the following beneficial effects:
the technical scheme comprises the following steps: receiving two adjacent three-dimensional MRA medical images to be spliced, which are sent by scanning equipment, in real time; carrying out Laplace denoising, enhancement and irregular smoothing on two adjacent three-dimensional MRA medical images to be spliced to obtain a first image and a second image; respectively carrying out overlapping layer detection on the first image and the second image to determine a first overlapping region in the first image and a second overlapping region in the second image; and carrying out fusion splicing treatment on the first overlapping region and the second overlapping region by using a weighted average method to obtain a fused and spliced third image.
According to the method, the two adjacent three-dimensional MRA medical images to be spliced are received in real time and preprocessed, overlapping layer detection is then performed to determine the overlapping regions in the two images, and the two overlapping regions are fused and spliced by the weighted average method; this improves the efficiency of determining the overlapping regions in the images and therefore the image splicing efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic structural diagram of a three-dimensional MRA medical image stitching device according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a three-dimensional MRA medical image stitching method disclosed in an embodiment of the invention;
FIG. 3 is a flow chart of another method for stitching three-dimensional MRA medical images according to an embodiment of the present invention;
FIG. 4 is a flow chart of another method for stitching three-dimensional MRA medical images according to an embodiment of the present invention;
FIG. 5 is a schematic structural view of another three-dimensional MRA medical image stitching device according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of another three-dimensional MRA medical image stitching device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another three-dimensional MRA medical image stitching device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
Example 1
The implementation environment of the invention can be an electronic device, such as a smartphone, a tablet computer, or a desktop computer.
In one application scenario, the method disclosed in the embodiments of the invention is suitable for life insurance risk control in the financial industry; specifically, intelligently analyzing the stitched panoramic three-dimensional medical image of an applicant can help audit the applicant's insurance application. In another application scenario, the method is suitable for magnetic resonance imaging equipment in the medical field: the segmented three-dimensional MRA medical images obtained by the scanning device are stitched into a panoramic three-dimensional medical image, helping a doctor evaluate the patient's condition comprehensively and intuitively.
Fig. 1 is a schematic structural diagram of a three-dimensional MRA medical image stitching device according to an embodiment of the present invention. The apparatus 100 may be the electronic device described above. As shown in fig. 1, the apparatus 100 may include one or more of the following components: a processing component 102, a memory 104, a power supply component 106, a multimedia component 108, an audio component 110, a sensor component 114, and a communication component 116.
The processing component 102 generally controls overall operation of the device 100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations, among others. The processing component 102 may include one or more processors 118 to execute instructions to perform all or part of the steps of the methods described below. Further, the processing component 102 can include one or more modules to facilitate interactions between the processing component 102 and other components. For example, the processing component 102 may include a multimedia module for facilitating interaction between the multimedia component 108 and the processing component 102.
The memory 104 is configured to store various types of data to support operations at the apparatus 100. Examples of such data include instructions for any application or method operating on the device 100. The memory 104 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (Static Random Access Memory, SRAM), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. Also stored in the memory 104 are one or more modules configured to be executed by the one or more processors 118 to perform all or part of the steps in the methods shown below.
The power supply assembly 106 provides power to the various components of the device 100. The power components 106 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 100.
The multimedia component 108 includes a screen between the device 100 and the user that provides an output interface. In some embodiments, the screen may include a liquid crystal display (Liquid Crystal Display, LCD for short) and a touch panel. If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or sliding action, but also the duration and pressure associated with the touch or sliding operation. The screen may also include an organic electroluminescent display (Organic Light Emitting Display, OLED for short).
The audio component 110 is configured to output and/or input audio signals. For example, the audio component 110 includes a Microphone (MIC) configured to receive external audio signals when the device 100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 104 or transmitted via the communication component 116. In some embodiments, the audio component 110 further comprises a speaker for outputting audio signals.
The sensor assembly 114 includes one or more sensors for providing status assessment of various aspects of the device 100. For example, the sensor assembly 114 may detect an on/off state of the device 100, a relative positioning of the assemblies, the sensor assembly 114 may also detect a change in position of the device 100 or a component of the device 100, and a change in temperature of the device 100. In some embodiments, the sensor assembly 114 may also include a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 116 is configured to facilitate communication between the apparatus 100 and other devices in a wired or wireless manner. The device 100 may access a wireless network based on a communication standard, such as WiFi (Wireless Fidelity). In an embodiment of the present invention, the communication component 116 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an embodiment of the present invention, the communication component 116 further includes a near field communication (Near Field Communication, abbreviated as NFC) module for facilitating short range communications. For example, the NFC module may be implemented based on radio frequency identification (Radio Frequency Identification, RFID) technology, infrared data association (Infrared Data Association, IrDA) technology, ultra wideband (UWB) technology, Bluetooth technology, and other technologies.
In an exemplary embodiment, the apparatus 100 may be implemented by one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated ASIC), digital signal processors, digital signal processing devices, programmable logic devices, field programmable gate arrays, controllers, microcontrollers, microprocessors or other electronic components for executing the methods described below.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart of a three-dimensional MRA medical image stitching method according to an embodiment of the present invention. The three-dimensional MRA medical image stitching method as shown in FIG. 2 may include the steps of:
201. and receiving two adjacent three-dimensional MRA medical images to be spliced, which are sent by the scanning equipment, in real time.
It should be noted that the format of the three-dimensional MRA medical images to be stitched should be the Digital Imaging and Communications in Medicine (DICOM) format, i.e., a medical image format that can be used for data exchange.
202. And carrying out Laplace denoising, enhancement and irregular smoothing on two adjacent three-dimensional MRA medical images to be spliced to obtain a first image and a second image.
In the embodiment of the invention, because of the limitations of MRA equipment, the acquired three-dimensional MRA medical images to be stitched inevitably contain noise, so they need to be preprocessed. First, Laplacian denoising is performed on the two adjacent three-dimensional MRA medical images to be stitched to remove redundant interference signals; the denoised images are then enhanced; finally, irregular image smoothing is applied to the enhanced images, so as to provide as high-quality a first image and second image as possible for the next step.
As another alternative implementation, a weighted neighborhood averaging method can also be used to smooth and denoise the three-dimensional MRA medical images to be stitched. Weighted neighborhood averaging multiplies each pixel in the neighborhood by a different coefficient, giving more important pixels larger weights. For example, assuming the medical image is f(x, y) and the neighborhood S is taken, the calculation formula of the weighted neighborhood average is:
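The formula itself appears only as a figure in the original publication; a standard form of the weighted neighborhood average, reconstructed from the symbol definitions given below (its exact layout is therefore an assumption), is

g(x, y) = \frac{\sum_{s=-a}^{a} \sum_{t=-b}^{b} w(s, t)\, f(x + s,\; y + t)}{\sum_{s=-a}^{a} \sum_{t=-b}^{b} w(s, t)}

where g(x, y) denotes the smoothed pixel value at point (x, y).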
wherein Σ is a sum symbol for indicating a summing operation; a is the upper bound of the first summing operation, -a is the lower bound of the first summing operation, and a may be a specified constant to indicate that the range of values of s is [ -a, a ], thereby defining the range of independent values of the first summing operation. Similarly, b is the upper bound of the second summing operation, -b is the lower bound of the second summing operation, and b may be a specified constant to indicate that t has a range of values [ -b, b ], thereby defining the range of independent values of the second summing operation. Wherein w (s, t) is a weight function, belongs to a common weight function, and is a function taking the distance between each point in the neighborhood and the center point as a variable, wherein the center point has the largest weight in the function, which indicates that the decision contribution degree of the point to the weighted neighborhood average value is inversely proportional to the distance between the point and the center point. Where (s, t) is the coordinates of each point in the neighborhood, and w is the weight corresponding to that point.
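As a concrete illustration, the following Python sketch applies such a weighted neighborhood average to one 2D slice. The Gaussian weight function, the neighborhood bounds a and b, and the edge padding are assumptions; the text only requires that the weight of a point decrease with its distance from the center point.

```python
import numpy as np

def weighted_neighborhood_average(image, a=1, b=1, sigma=1.0):
    """Smooth a 2D slice with a weighted neighborhood average.

    The offsets s and t run over [-a, a] and [-b, b]; the weight w(s, t)
    is largest at the center point and decreases with distance, as
    described above (a Gaussian weight function is assumed here).
    """
    s = np.arange(-a, a + 1)
    t = np.arange(-b, b + 1)
    ss, tt = np.meshgrid(s, t, indexing="ij")
    w = np.exp(-(ss ** 2 + tt ** 2) / (2.0 * sigma ** 2))
    w /= w.sum()                       # normalize the weights to sum to one

    padded = np.pad(image.astype(np.float64), ((a, a), (b, b)), mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    # Accumulate w(s, t) * f(x + s, y + t) over the neighborhood S.
    for i, si in enumerate(s):
        for j, tj in enumerate(t):
            out += w[i, j] * padded[a + si:a + si + image.shape[0],
                                    b + tj:b + tj + image.shape[1]]
    return out
```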
By implementing this embodiment, the denoising processing speed can be increased.
203. And respectively performing overlapping layer detection on the first image and the second image to determine a first overlapping region in the first image and a second overlapping region in the second image.
In the embodiment of the invention, because of the limitations of MRA equipment, a panoramic three-dimensional medical image cannot be obtained in a single scan and is instead obtained by stitching every two adjacent segments. It should be noted that, in the embodiment of the invention, the first image and the second image are both essentially three-dimensional MRA medical images, but with respect to the panoramic three-dimensional medical image they are segments obtained by segmented imaging.
It will be appreciated that, since the three-dimensional MRA medical images are used to display a patient's lesions, the first image and the second image overlap each other so that no information is missed during scanning; the two overlapping regions lie at the two ends of the junction of the two images. The junction of the two images may be the boundary on any side of either image.
The first image and the second image may be images acquired by the same scanning device under different conditions, and the different conditions may include different climates, illuminance, imaging positions, angles, and the like.
Since the overlapping region generally has the same or a close number of overlapping layers, the overlapping region can be determined by performing overlapping layer detection on the first image and the second image.
204. And carrying out fusion splicing treatment on the first overlapping region and the second overlapping region by using a weighted average method to obtain a fused and spliced third image.
In the embodiment of the invention, the pixels in the first overlapping area and the second overlapping area are weighted respectively by a weighted average method and then are overlapped and averaged, and each pixel is endowed with different weights according to the importance degree of the pixel in the whole image, so that the smooth transition of the image can be realized, and the seam line in the image is effectively eliminated.
Therefore, by implementing the method described in fig. 2, the overlapping layer detection is performed after the preprocessing is performed on the two adjacent three-dimensional MRA medical images to be spliced, so as to determine the overlapping areas in the two three-dimensional MRA medical images to be spliced, and the weighted average method is used for fusion splicing of the two overlapping areas, so that the determination efficiency of the overlapping areas in the images can be improved, and the image splicing efficiency is further improved.
Example III
Referring to fig. 3, fig. 3 is a schematic flow chart of another three-dimensional MRA medical image stitching method according to an embodiment of the present invention. As shown in fig. 3, the three-dimensional MRA medical image stitching method may include the steps of:
301-303. For the description of steps 301 to 303, please refer to the detailed descriptions of steps 201 to 203 in the second embodiment, and the description of the present invention is omitted here.
304. Dividing the first overlapping region into a plurality of first to-be-fused column regions, and dividing the second overlapping region into a plurality of second to-be-fused column regions, wherein the first to-be-fused column regions and the second to-be-fused column regions are in one-to-one correspondence.
The first to-be-fused column areas and the second to-be-fused column areas which are in one-to-one correspondence are in a coincidence relation.
305. And sequentially acquiring a first preset weight coefficient of each first to-be-fused column region according to the sequence from small to large of the distance between each first to-be-fused column region and the second overlapping region, wherein the first preset weight coefficient becomes smaller as the distance between the corresponding first to-be-fused column region and the second overlapping region becomes larger.
It will be appreciated that, in the first overlapping region, a first to-be-fused column region that is closer to the second overlapping region makes a larger decision contribution and should therefore have a larger preset weight coefficient. Similarly, in the second overlapping region, a second to-be-fused column region that is closer to the first overlapping region makes a larger decision contribution and should therefore have a larger preset weight coefficient. Optionally, the first preset weight coefficient may be preset for each first to-be-fused column region according to its distance from the second overlapping region.
For example, if there are 2 first to-be-fused column regions in the first overlapping region, namely a and B, respectively, the first to-be-fused column region B is closer to the edge of the first overlapping region than the first to-be-fused column region a and also closer to the second overlapping region, then the first preset weight coefficient of the first to-be-fused column region a may be set to 0.8, and the first preset weight coefficient of the first to-be-fused column region B may be set to 0.6. Of course, other values, such as 0.4 or 0.5, etc., may be set, and are not limited herein.
306. And obtaining a second preset weight coefficient of a second column region to be fused corresponding to the first column region to be fused according to the first preset weight coefficient, wherein the sum of the first preset weight coefficient and the second preset weight coefficient is equal to one.
Based on the above example, there are 2 second to-be-fused column regions in the second overlapping region, respectively, C and D, where the second to-be-fused column region C is closer to the edge of the second overlapping region than the second to-be-fused column region D, and also closer to the first overlapping region, then the second preset weight coefficient of the second to-be-fused column region C corresponding to the first to-be-fused column region a in the second overlapping region is 1-0.8=0.2, and the second preset weight coefficient of the second to-be-fused column region D corresponding to the first to-be-fused column region B in the second overlapping region is 1-0.6=0.4.
307. And according to the first preset weight coefficient and the second preset weight coefficient, carrying out pixel value addition calculation on each first to-be-fused column region and the corresponding second to-be-fused column region to obtain fused pixel values so as to obtain a fused and spliced third image.
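A minimal Python sketch of steps 304-307 is given below. The stitching direction (the axis along which the column regions are taken), the use of a linear ramp for the first preset weight coefficients, and the ordering assumption (index 0 is the column region farthest from the second overlapping region, mirroring the worked example above where that column receives the larger coefficient) are choices the text leaves open.

```python
import numpy as np

def fuse_overlap_columns(overlap1, overlap2, axis=0):
    """Weighted-average fusion of two registered overlapping regions,
    one to-be-fused column region at a time (steps 304-307)."""
    assert overlap1.shape == overlap2.shape
    n = overlap1.shape[axis]                    # number of column regions
    # First preset weight coefficients, one per first to-be-fused column
    # region (linear ramp assumed); the second preset weight coefficients
    # are 1 - w1, so each corresponding pair sums to one.
    w1 = np.linspace(1.0, 0.0, n) if n > 1 else np.array([0.5])
    w2 = 1.0 - w1
    shape = [1] * overlap1.ndim                 # broadcast along `axis`
    shape[axis] = n
    w1, w2 = w1.reshape(shape), w2.reshape(shape)
    # Pixel-value addition: fused = w1 * first region + w2 * second region.
    return w1 * overlap1.astype(np.float64) + w2 * overlap2.astype(np.float64)
```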
Steps 304-307 are implemented, the overlapping area is divided into a plurality of to-be-fused column areas, different weight coefficients are configured for the to-be-fused column areas according to the importance degree of the to-be-fused column areas, and the preset weight coefficients of any two adjacent to-be-fused column areas are different, so that the first image and the second image can be subjected to smooth and seamless splicing, the images are more natural in transition, the splicing effect is improved, and the visual effect is improved.
308. And performing smoothing filtering processing on the third image by using a low-pass filter to obtain a target smooth image.
Therefore, by implementing the method described in fig. 3, the determination efficiency of the overlapping area in the image can be improved, the image stitching efficiency can be improved, two adjacent three-dimensional MRA medical images to be stitched sent by the scanning device can be received in real time, and the three-dimensional MRA medical images to be stitched are preprocessed, so that a high-quality first image and a high-quality second image are provided for the next step. In addition, the overlapping area is divided into a plurality of to-be-fused column areas, different weight coefficients are configured for the to-be-fused column areas according to the importance degree of the to-be-fused column areas, the preset weight coefficients of any two adjacent to-be-fused column areas are different, and the first image and the second image can be smoothly and seamlessly spliced, so that the images are more natural in transition, the splicing effect is improved, and the visual effect is improved.
Example IV
Referring to fig. 4, fig. 4 is a schematic flow chart of another three-dimensional MRA medical image stitching method according to an embodiment of the present invention. The three-dimensional MRA medical image stitching method as shown in FIG. 4 may include the steps of:
401-402. For the description of steps 401 to 402, please refer to the detailed descriptions of steps 201 to 202 in the second embodiment, and the description of the present invention is omitted here.
403. And comparing the maximum intensity projection imaging of each of the first image and the second image to determine a first coincident segment in the first image and a second coincident segment in the second image.
Among them, maximum intensity projection (Maximal Intensity Projection, MIP) is a widely used three-dimensional MRA medical image processing technique. MIP obtains a two-dimensional projection image by computing the maximum intensity encountered along each ray cast through the scanned object. As the rays pass through the original images of a section of tissue, the highest-intensity voxels are retained and projected onto a two-dimensional plane, forming the MIP reconstructed image. MIP can reflect the attenuation values of the corresponding pixels; small density changes can be displayed on MIP images, stenosis, dilation and filling defects of blood vessels can be shown well, and calcification on vessel walls can be distinguished from contrast agent in the vessel lumen. Thus, by maximum intensity projection imaging, a first coincident segment in the first image and a second coincident segment in the second image can be initially determined.
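The projection itself reduces to taking a maximum along one axis of the volume; a small sketch is shown below. How the two MIP images are then compared to locate the coincident segments is not fixed by the text, so the cross-correlation mentioned in the comment is only an assumption.

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Keep only the highest-intensity voxel along each ray parallel to
    `axis` and project it onto a 2D plane (the MIP reconstructed image)."""
    return np.max(volume, axis=axis)

# Usage sketch: compute the MIP of both preprocessed volumes in the same
# direction and compare the two 2D images (for example by normalized
# cross-correlation) to roughly locate the first and second coincident
# segments.
```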
404. Dividing the first coincident segment into a plurality of first regions to be detected, and dividing the second coincident segment into a plurality of second regions to be detected, wherein the first regions to be detected and the second regions to be detected are in one-to-one correspondence.
405. And judging whether the difference value of the number of the overlapped layers of each first region to be detected and the corresponding second region to be detected is smaller than a preset value or not. If yes, go to step 406; otherwise, the process is ended.
The preset value may be preset by a developer according to actual situations.
406. And taking the first region to be detected as a component part of a first overlapping region in the first image, and taking the corresponding second region to be detected as a component part of a second overlapping region in the second image so as to determine the first overlapping region and the second overlapping region.
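A sketch of steps 404-406 follows. The grid used to divide the coincident segments, the preset value, and the way the "number of overlapped layers" is counted (here, layers containing any non-zero voxel) are assumptions; the patent does not fix these details.

```python
import numpy as np

def detect_overlap_regions(seg1, seg2, grid=(4, 4), preset=2):
    """Mark the region pairs whose overlapping-layer counts differ by less
    than the preset value; those pairs form the first and second
    overlapping regions (steps 404-406)."""
    assert seg1.shape[1:] == seg2.shape[1:]

    def layer_count(region):
        # Number of layers that contain signal, standing in for the
        # patent's "number of overlapped layers".
        return int(np.sum(region.reshape(region.shape[0], -1).any(axis=1)))

    rows = np.array_split(np.arange(seg1.shape[1]), grid[0])
    cols = np.array_split(np.arange(seg1.shape[2]), grid[1])
    mask = np.zeros(grid, dtype=bool)
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            n1 = layer_count(seg1[:, r[0]:r[-1] + 1, c[0]:c[-1] + 1])
            n2 = layer_count(seg2[:, r[0]:r[-1] + 1, c[0]:c[-1] + 1])
            mask[i, j] = abs(n1 - n2) < preset
    return mask
```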
407. And determining sampling points according to the image position information of the first image and the second image. Wherein the image position information is used to describe the positions of the first image and the second image corresponding to the human anatomy coordinate system.
The image position information may be obtained from device scan information provided in the header files of the first image and the second image, where the device scan information includes information such as image position, image direction, pixel resolution, layer thickness, patient position, and scan bed position.
It will be appreciated that, given the position of the image origin in the human anatomy coordinate system, the human anatomy coordinates of any point in the image can be determined, so the sampling point may be the origin or any other corresponding point. The origin of an image is located at its upper left corner; its image coordinates in the image coordinate system are zero, and its human anatomy coordinates in the human anatomy coordinate system are obtained from the image position information, so the positional relationship between the first image and the second image can be described from the human anatomy coordinates of their respective origins.
The human anatomy coordinate system refers to an anatomy space coordinate system in the technical field of medical image processing, and is also called a patient coordinate system. The human anatomy coordinate system is composed of three planes to describe the anatomical location of a standard human. Wherein the three body planes include a transverse plane, a coronal plane and a sagittal plane; wherein, the cross section is parallel to the ground, separating the head and the foot of the human body; the coronal plane is vertical to the ground and separates the front part and the rear part of the human body; the sagittal plane is perpendicular to the ground, separating the left and right parts of the human body.
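For illustration, the following sketch maps an image-coordinate point to human anatomy (patient) coordinates. It assumes the usual DICOM convention, in which the header supplies the anatomy coordinates of the image origin, the direction cosines of the rows, columns and slices, and the pixel spacing and layer thickness; the patent only states that this position information comes from the image header.

```python
import numpy as np

def image_to_anatomy(ijk, origin, row_dir, col_dir, slice_dir, spacing):
    """Convert image coordinates (i, j, k) to human anatomy coordinates.

    origin  : anatomy coordinates of the image origin (upper-left corner).
    *_dir   : unit direction cosines of the image axes.
    spacing : (row spacing, column spacing, layer thickness).
    """
    i, j, k = ijk
    return (np.asarray(origin, dtype=float)
            + i * spacing[0] * np.asarray(row_dir, dtype=float)
            + j * spacing[1] * np.asarray(col_dir, dtype=float)
            + k * spacing[2] * np.asarray(slice_dir, dtype=float))
```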
408. And obtaining a registration transformation matrix according to the human anatomy coordinates of the sampling points, the first image coordinates of the sampling points in the first image and the second image coordinates of the sampling points in the second image.
Wherein, the human anatomy coordinates refer to coordinate information of the sampling points corresponding to a human anatomy coordinate system; the first image coordinates or the second image coordinates refer to coordinate information that the sampling point is located in an image coordinate system.
The registration transformation matrix is used to perform one or more operations on the first image or the second image, such as translation, scaling, and rotation. In general, the registration transformation matrix can be determined when points with the same human anatomy coordinates are known in both images. For example, [second image coordinates of the sampling points in the second image] = [registration transformation matrix] × [first image coordinates of the sampling points in the first image].
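One way to obtain such a matrix, sketched below, is a least-squares fit in homogeneous coordinates over the sampling points; the patent describes the transform only as some combination of translation, scaling and rotation, so the affine form and the least-squares estimation are assumptions.

```python
import numpy as np

def registration_matrix(first_coords, second_coords):
    """Estimate a 4x4 matrix T with
    [second image coordinates] ~= T x [first image coordinates]
    from N >= 4 sampling points given in both image coordinate systems."""
    first = np.asarray(first_coords, dtype=float)     # shape (N, 3)
    second = np.asarray(second_coords, dtype=float)   # shape (N, 3)
    ones = np.ones((first.shape[0], 1))
    fh = np.hstack([first, ones])                     # homogeneous (N, 4)
    sh = np.hstack([second, ones])
    # Least squares: fh @ T.T ~= sh.
    T_t, *_ = np.linalg.lstsq(fh, sh, rcond=None)
    return T_t.T
```

Applying the matrix to a homogeneous first-image coordinate, T @ np.append(p, 1.0), then yields the corresponding second-image coordinate, which is how the overlapping regions can be registered in step 409.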
409. And registering the first overlapping region and the second overlapping region according to the registration transformation matrix.
As an alternative embodiment, step 409 may include:
registering points with the same human anatomy coordinates in the second overlapping region to the same position of the first overlapping region through a registration transformation matrix by taking the first overlapping region as a reference region; or, using the second overlapping region as a reference region, registering the points with the same human anatomical coordinates in the first overlapping region to the same position of the second overlapping region through a registration transformation matrix.
Compared with the mode of manually determining the image overlapping area in the prior art, the implementation of the embodiment saves time, improves efficiency and simultaneously can increase the accuracy of image stitching.
As another alternative embodiment, before step 409 is executed, expansion (dilation) processing may further be performed on the first overlapping area and the second overlapping area according to the size of a pre-selected image processing operator, so as to obtain a first area to be registered and a second area to be registered. Further optionally, step 409 may include: obtaining a registration coefficient based on a mutual information maximization method, and registering points with the same features in the first area to be registered and the second area to be registered to the same position according to the registration coefficient. The image processing operators include, but are not limited to, the Roberts operator, which finds edges using a local difference operator, the Sobel operator for edge detection, the Prewitt operator, a first-order differential operator for edge detection, and the Laplacian operator or the Laplacian of Gaussian operator for second-order differentiation.
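The two ingredients mentioned here, dilating the overlap regions by the operator size and scoring a candidate registration by mutual information, can be sketched as follows; how the registration coefficient is actually searched for is left open by the text, so the functions below only provide the building blocks.

```python
import numpy as np
from scipy import ndimage

def dilate_region_mask(mask, operator_size=3):
    """Expand an overlap-region mask by the size of the chosen image
    processing operator, giving the region to be registered."""
    structure = np.ones((operator_size,) * mask.ndim, dtype=bool)
    return ndimage.binary_dilation(mask, structure=structure)

def mutual_information(a, b, bins=32):
    """Mutual information between two equally shaped regions; the
    registration coefficient is the one that maximizes this value."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```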
By implementing the embodiment, the first overlapping region and the second overlapping region are determined to be subjected to expansion processing, and the registration is performed based on the first to-be-registered region and the second to-be-registered region after expansion, so that the accuracy of image registration can be improved, and meanwhile, the effect of image processing is improved.
410. And carrying out fusion splicing treatment on the first overlapping region and the second overlapping region by using a weighted average method to obtain a fused and spliced third image.
Therefore, by implementing the method described in fig. 4, the determination efficiency of the overlapping region in the image can be improved, and further the image stitching efficiency is improved. In addition, by taking one overlapping region as a reference region, points with the same human anatomy coordinates in the other overlapping region are registered to the same position through a registration transformation matrix, compared with the mode of manually determining the image overlapping region in the prior art, the time is saved, the efficiency is improved, and meanwhile, the accuracy of image stitching can be improved.
Example five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another three-dimensional MRA medical image stitching apparatus according to an embodiment of the present invention. As shown in fig. 5, the three-dimensional MRA medical image stitching apparatus may include a receiving unit 501, a denoising unit 502, a detecting unit 503, and a stitching unit 504, wherein,
and the receiving unit 501 is used for receiving the adjacent two three-dimensional MRA medical images to be spliced, which are sent by the scanning equipment, in real time.
And the denoising unit 502 is used for carrying out Laplacian denoising, enhancement and irregular smoothing on two adjacent three-dimensional MRA medical images to be spliced to obtain a first image and a second image.
And a detection unit 503, configured to perform overlay detection on the first image and the second image, so as to determine a first overlapping region in the first image and a second overlapping region in the second image.
And the stitching unit 504 is configured to perform fusion stitching processing on the first overlapping region and the second overlapping region by using a weighted average method, so as to obtain a third image after fusion stitching.
Therefore, the device shown in fig. 5 is implemented, and the overlapping layer detection is performed after the pretreatment is performed on the two adjacent three-dimensional MRA medical images to be spliced, so as to determine the overlapping areas in the two three-dimensional MRA medical images to be spliced, and the weighted average method is used for fusion splicing of the two overlapping areas, so that the determination efficiency of the overlapping areas in the images can be improved, and the image splicing efficiency is further improved.
Example six
Referring to fig. 6, fig. 6 is a schematic structural diagram of another three-dimensional MRA medical image stitching apparatus according to an embodiment of the present invention. The three-dimensional MRA medical image stitching device shown in FIG. 6 is optimized by the three-dimensional MRA medical image stitching device shown in FIG. 5. In comparison with the three-dimensional MRA medical image stitching apparatus shown in fig. 5, in the three-dimensional MRA medical image stitching apparatus shown in fig. 6:
The denoising unit 502 is further configured to perform fusion splicing processing on the first overlapping region and the second overlapping region by using a weighted average method in the splicing unit 504, obtain a fused and spliced third image, and then perform smoothing filtering processing on the third image by using a low-pass filter to obtain a target smoothed image.
As an alternative embodiment, in the apparatus shown in fig. 6, the splicing unit 504 may include:
the dividing subunit 5041 is configured to divide the first overlapping area into a plurality of first to-be-fused column areas, and divide the second overlapping area into a plurality of second to-be-fused column areas, where the first to-be-fused column areas and the second to-be-fused column areas are in one-to-one correspondence.
The first obtaining subunit 5042 is configured to sequentially obtain, in order from a smaller distance to a larger distance, a first preset weight coefficient of each first to-be-fused column region, where the first preset weight coefficient becomes smaller as a distance between the corresponding first to-be-fused column region and the second overlapping region becomes larger.
The second obtaining subunit 5043 is configured to obtain a second preset weight coefficient of a second column area to be fused corresponding to the first column area to be fused according to the first preset weight coefficient, where a sum of the first preset weight coefficient and the second preset weight coefficient is equal to one.
And the stitching subunit 5044 is configured to perform a pixel value addition calculation on each first to-be-fused column region and the corresponding second to-be-fused column region according to the first preset weight coefficient and the second preset weight coefficient, so as to obtain a fused pixel value, so as to obtain a fused and stitched third image.
According to the embodiment, the overlapping area is divided into the plurality of to-be-fused column areas, different weight coefficients are configured for the to-be-fused column areas according to the importance degree of the to-be-fused column areas, the preset weight coefficients of any two adjacent to-be-fused column areas are different, and the first image and the second image can be subjected to smooth and seamless splicing, so that the images are more natural in transition, the splicing effect is improved, and the visual effect is improved.
As another optional implementation manner, the denoising unit 502 is further configured to perform smooth denoising on the three-dimensional MRA medical image to be stitched by using a weighted neighborhood averaging method. The weighted neighborhood averaging method is to multiply each pixel in the neighborhood with different coefficients and multiply the more important pixel with a larger weight. For example, assuming that the medical image is f (x, y), if the neighborhood S is taken, the calculation formula of the weighted neighborhood average is:
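As in the second embodiment, the formula is shown only as a figure in the original; the same reconstructed form (with the same caveat that its exact layout is an assumption) is

g(x, y) = \frac{\sum_{s=-a}^{a} \sum_{t=-b}^{b} w(s, t)\, f(x + s,\; y + t)}{\sum_{s=-a}^{a} \sum_{t=-b}^{b} w(s, t)}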
wherein Σ is a sum symbol for indicating a summing operation; a is the upper bound of the first summing operation, -a is the lower bound of the first summing operation, and a may be a specified constant to indicate that the range of values of s is [ -a, a ], thereby defining the range of independent values of the first summing operation. Similarly, b is the upper bound of the second summing operation, -b is the lower bound of the second summing operation, and b may be a specified constant to indicate that t has a range of values [ -b, b ], thereby defining the range of independent values of the second summing operation. Wherein w (s, t) is a weight function, belongs to a common weight function, and is a function taking the distance between each point in the neighborhood and the center point as a variable, wherein the center point has the largest weight in the function, which indicates that the decision contribution degree of the point to the weighted neighborhood average value is inversely proportional to the distance between the point and the center point. Where (s, t) is the coordinates of each point in the neighborhood, and w is the weight corresponding to that point.
By implementing this embodiment, the denoising processing speed can be increased.
Therefore, the device shown in fig. 6 is implemented, so that the determination efficiency of the overlapping area in the image can be improved, the image splicing efficiency is further improved, the overlapping area can be divided into a plurality of to-be-fused column areas, different weight coefficients are configured according to the importance degree of the to-be-fused column areas, the preset weight coefficients of any two adjacent to-be-fused column areas are different, the first image and the second image can be smoothly and seamlessly spliced, the image is enabled to be more natural, the splicing effect is improved, and the visual effect is improved.
Example seven
Referring to fig. 7, fig. 7 is a schematic structural diagram of another three-dimensional MRA medical image stitching apparatus according to an embodiment of the present invention. The three-dimensional MRA medical image stitching device shown in FIG. 7 is optimized by the three-dimensional MRA medical image stitching device shown in FIG. 6. In comparison with the three-dimensional MRA medical image stitching apparatus shown in fig. 6, the three-dimensional MRA medical image stitching apparatus shown in fig. 7 may further include: a comparison unit 505, a determination unit 506, an acquisition unit 507 and a registration unit 508, wherein,
the comparing unit 505 is configured to compare the maximum intensity projection imaging of the first image and of the second image, before the detecting unit 503 performs overlapping layer detection on the first image and the second image to determine the first overlapping region in the first image and the second overlapping region in the second image, so as to determine a first coincident segment in the first image and a second coincident segment in the second image.
Accordingly, the above-mentioned detecting unit 503 is configured to detect the overlapping layers of the first image and the second image, so as to determine the first overlapping area in the first image and the second overlapping area in the second image, which may specifically be:
the above-mentioned detecting unit 503 is configured to perform overlapping layer detection on the first overlapping segment and the second overlapping segment to determine a first overlapping region in the first image and a second overlapping region in the second image.
Further alternatively, the above-mentioned detecting unit 503 is configured to perform overlapping layer detection on the first overlapping segment and the second overlapping segment, so as to determine a first overlapping area in the first image and a second overlapping area in the second image, which may specifically be:
the detecting unit 503 is configured to divide the first overlapping segment into a plurality of first regions to be detected, and divide the second overlapping segment into a plurality of second regions to be detected, where the first regions to be detected and the second regions to be detected are in one-to-one correspondence; sequentially judging whether the difference value of the number of the overlapped layers of each first region to be detected and the corresponding second region to be detected is smaller than a preset value; if the difference value is smaller than the preset value, the first region to be detected is used as a component part of a first overlapping region in the first image, and the corresponding second region to be detected is used as a component part of a second overlapping region in the second image, so that the first overlapping region and the second overlapping region are determined.
A determining unit 506, configured to determine sampling points according to image position information of the first image and the second image, after the detecting unit 503 takes the first region to be detected as a component of the first overlapping region in the first image and the corresponding second region to be detected as a component of the second overlapping region in the second image to determine the first overlapping region and the second overlapping region, and before the splicing unit 504 performs fusion splicing processing on the first overlapping region and the second overlapping region by using a weighted average method to obtain a fused and spliced third image. The image position information is used to describe the positions of the first image and the second image in a human anatomy coordinate system.
An obtaining unit 507, configured to obtain a registration transformation matrix according to the anatomical coordinates of the sampling points, the first image coordinates of the sampling points in the first image, and the second image coordinates of the sampling points in the second image.
A registration unit 508, configured to register the first overlapping region and the second overlapping region according to the registration transformation matrix.
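The embodiment does not state how the registration transformation matrix is computed from the sampling points; the sketch below shows one common choice under stated assumptions, fitting least-squares affine transforms from each image's voxel coordinates to the human anatomy coordinates of the sampling points and composing them. At least four non-coplanar sampling points are assumed, and the function names affine_from_points and registration_matrix are hypothetical.

```python
import numpy as np

def affine_from_points(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares 4x4 affine mapping homogeneous src points onto dst.

    src and dst are (N, 3) arrays of corresponding coordinates; N >= 4
    non-coplanar points are assumed.
    """
    src_h = np.hstack([src, np.ones((len(src), 1))])   # (N, 4)
    dst_h = np.hstack([dst, np.ones((len(dst), 1))])   # (N, 4)
    # Solve src_h @ X ~= dst_h, then transpose so the result acts on
    # homogeneous column vectors: dst = M @ src.
    X, *_ = np.linalg.lstsq(src_h, dst_h, rcond=None)
    return X.T

def registration_matrix(img1_coords, img2_coords, anat_coords) -> np.ndarray:
    """Compose the two image-to-anatomy affines of the sampling points into a
    single transform taking second-image voxel coordinates into the frame of
    the first image (the reference region)."""
    anat_from_img1 = affine_from_points(img1_coords, anat_coords)
    anat_from_img2 = affine_from_points(img2_coords, anat_coords)
    return np.linalg.inv(anat_from_img1) @ anat_from_img2
```

With the first overlapping region taken as the reference region, applying this matrix to the voxel coordinates of the second overlapping region would map points sharing the same human anatomy coordinates to the same position.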
As an optional implementation manner, the manner in which the registration unit 508 registers the first overlapping region and the second overlapping region according to the registration transformation matrix may specifically be as follows:

The registration unit 508 is configured to, with the first overlapping region as a reference region, register points in the second overlapping region that have the same human anatomy coordinates to the same position in the first overlapping region through the registration transformation matrix; or, with the second overlapping region as a reference region, register points in the first overlapping region that have the same human anatomy coordinates to the same position in the second overlapping region through the registration transformation matrix.
Compared with manually determining the image overlapping region as in the prior art, implementing this embodiment saves time and improves efficiency, while also improving the accuracy of image stitching.
As another optional implementation manner, the manner in which the registration unit 508 registers the first overlapping region and the second overlapping region according to the registration transformation matrix may specifically be as follows:
The registration unit 508 is configured to perform expansion processing on the first overlapping region and the second overlapping region according to the size of a pre-selected image processing operator, so as to obtain a first region to be registered and a second region to be registered; obtain a registration coefficient based on the mutual information maximization method; and register points with the same features in the first region to be registered and the second region to be registered to the same position according to the registration coefficient. The image processing operators include, but are not limited to, the Roberts operator, which finds edges using a local differential operator, the Sobel operator for edge detection, the Prewitt operator, a first-order differential operator for edge detection, and the Laplacian operator or the Laplacian of Gaussian (LoG) operator for second-order differentiation.
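A hedged sketch of this alternative follows. Grey-scale dilation is used here as one reading of the "expansion processing" (padding the region borders by the operator's half-width would be another), and the "registration coefficient" is illustrated as a simple integer slice offset chosen by maximizing mutual information; the embodiment does not define the coefficient this narrowly, so the search strategy, the scipy-based dilation, and the function names are assumptions.

```python
import numpy as np
from scipy import ndimage

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Mutual information between two equally sized image regions."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def register_by_mutual_information(region1, region2, operator_size=3, search=3):
    """Dilate both overlapping regions by the operator size, then search for
    the integer slice offset (the illustrative 'registration coefficient')
    that maximizes mutual information between them."""
    r1 = ndimage.grey_dilation(region1, size=operator_size)
    r2 = ndimage.grey_dilation(region2, size=operator_size)

    best_offset, best_mi = 0, -np.inf
    for dz in range(-search, search + 1):
        # np.roll wraps around the volume; acceptable for this small sketch.
        mi = mutual_information(r1, np.roll(r2, dz, axis=0))
        if mi > best_mi:
            best_offset, best_mi = dz, mi
    return best_offset, best_mi
```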
By implementing this embodiment, the first overlapping region and the second overlapping region are subjected to expansion processing, and registration is performed based on the expanded first region to be registered and second region to be registered, which can improve the accuracy of image registration as well as the overall effect of image processing.
Therefore, implementing the apparatus shown in FIG. 7 can improve the efficiency of determining the overlapping region in an image and thus the image stitching efficiency. In addition, by taking one overlapping region as a reference region and registering the points with the same human anatomy coordinates in the other overlapping region to the same position through the registration transformation matrix, time is saved and efficiency is improved compared with manually determining the image overlapping region as in the prior art, while the accuracy of image stitching can also be improved.
The invention also provides an electronic device, comprising:
a processor;
and a memory having stored thereon computer readable instructions which, when executed by the processor, implement the three-dimensional MRA medical image stitching method as previously described.
The electronic device may be the apparatus 100 shown in FIG. 1.
In an exemplary embodiment, the invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the three-dimensional MRA medical image stitching method as described above.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (6)

1. A method for stitching three-dimensional MRA medical images, the method comprising:
receiving two adjacent three-dimensional MRA medical images to be spliced, which are sent by scanning equipment, in real time;
carrying out Laplace denoising, enhancement and irregular smoothing on the two adjacent three-dimensional MRA medical images to be spliced to obtain a first image and a second image;
comparing the maximum intensity projection imaging of each of the first image and the second image to determine a first coincident segment in the first image and a second coincident segment in the second image;
performing overlapping layer detection on the first image and the second image respectively to determine a first overlapping region in the first image and a second overlapping region in the second image, which comprises the following steps: dividing the first coincident segment into a plurality of first regions to be detected, and dividing the second coincident segment into a plurality of second regions to be detected, wherein the first regions to be detected and the second regions to be detected are in one-to-one correspondence; sequentially judging whether the difference value of the number of the overlapped layers of each first region to be detected and the corresponding second region to be detected is smaller than a preset value; and if the difference value is smaller than the preset value, taking the first region to be detected as a component part of the first overlapping region in the first image, and taking the corresponding second region to be detected as a component part of the second overlapping region in the second image, so as to determine the first overlapping region and the second overlapping region;
determining sampling points according to the image position information of the first image and the second image; wherein the image position information is used to describe the positions of the first image and the second image corresponding to a human anatomy coordinate system;
obtaining a registration transformation matrix according to human anatomy coordinates of the sampling points, first image coordinates of the sampling points in the first image and second image coordinates of the sampling points in the second image;
registering points with the same human anatomy coordinates in the second overlapping region to the same position of the first overlapping region through the registration transformation matrix by taking the first overlapping region as a reference region; or, taking the second overlapping region as a reference region, registering points with the same human anatomy coordinates in the first overlapping region to the same position of the second overlapping region through the registration transformation matrix;
and carrying out fusion splicing treatment on the first overlapping region and the second overlapping region by using a weighted average method to obtain a third image after fusion splicing.
2. The method according to claim 1, wherein the fusion-splicing process is performed on the first overlapping region and the second overlapping region by using a weighted average method, and after obtaining the third image after the fusion-splicing, the method further comprises:
carrying out smoothing filtering processing on the third image by using a low-pass filter to obtain a target smooth image.
3. The method according to claim 1 or 2, wherein the performing fusion splicing processing on the first overlapping region and the second overlapping region by using a weighted average method to obtain a fused and spliced third image includes:
dividing the first overlapping region into a plurality of first to-be-fused column regions, and dividing the second overlapping region into a plurality of second to-be-fused column regions, wherein the first to-be-fused column regions and the second to-be-fused column regions are in one-to-one correspondence;
sequentially obtaining a first preset weight coefficient for each first to-be-fused column region in order of increasing distance between the first to-be-fused column region and the second overlapping region, wherein the first preset weight coefficient becomes smaller as the distance between the corresponding first to-be-fused column region and the second overlapping region becomes larger;
obtaining a second preset weight coefficient of the second column region to be fused corresponding to the first column region to be fused according to the first preset weight coefficient, wherein the sum of the first preset weight coefficient and the second preset weight coefficient is equal to one;
and according to the first preset weight coefficient and the second preset weight coefficient, carrying out pixel value addition calculation on each first to-be-fused column region and the corresponding second to-be-fused column region to obtain fused pixel values, so as to obtain a fused and spliced third image.
4. A three-dimensional MRA medical image stitching device, characterized in that it is adapted to perform the method of any one of claims 1 to 3, said device comprising:
the receiving unit is used for receiving two adjacent three-dimensional MRA medical images to be spliced, which are sent by the scanning equipment, in real time;
the denoising unit is used for carrying out Laplace denoising, enhancement and irregular smoothing on the two adjacent three-dimensional MRA medical images to be spliced to obtain a first image and a second image;
the detection unit is used for respectively carrying out overlapping layer detection on the first image and the second image so as to determine a first overlapping region in the first image and a second overlapping region in the second image;
and the splicing unit is used for carrying out fusion splicing treatment on the first overlapping region and the second overlapping region by using a weighted average method to obtain a fused and spliced third image.
5. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the three-dimensional MRA medical image stitching method according to any one of claims 1-3.
6. A computer-readable storage medium, characterized in that it stores a computer program that causes a computer to execute the three-dimensional MRA medical image stitching method according to any one of claims 1 to 3.
CN201910666640.9A 2019-07-23 2019-07-23 Three-dimensional MRA medical image stitching method and device and electronic equipment Active CN110473143B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910666640.9A CN110473143B (en) 2019-07-23 2019-07-23 Three-dimensional MRA medical image stitching method and device and electronic equipment
PCT/CN2019/118065 WO2021012520A1 (en) 2019-07-23 2019-11-13 Three-dimensional mra medical image splicing method and apparatus, and electronic device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910666640.9A CN110473143B (en) 2019-07-23 2019-07-23 Three-dimensional MRA medical image stitching method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110473143A CN110473143A (en) 2019-11-19
CN110473143B true CN110473143B (en) 2023-11-10

Family

ID=68508969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910666640.9A Active CN110473143B (en) 2019-07-23 2019-07-23 Three-dimensional MRA medical image stitching method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN110473143B (en)
WO (1) WO2021012520A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145092A (en) * 2019-12-16 2020-05-12 华中科技大学鄂州工业技术研究院 Method and device for processing infrared blood vessel image on leg surface
CN111612690B (en) * 2019-12-30 2023-04-07 苏州纽迈分析仪器股份有限公司 Image splicing method and system
CN113902657A (en) * 2021-08-26 2022-01-07 北京旷视科技有限公司 Image splicing method and device and electronic equipment
CN116958104A (en) * 2023-07-28 2023-10-27 上海感图网络科技有限公司 Material surface image processing method, device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102551717A (en) * 2010-12-31 2012-07-11 深圳迈瑞生物医疗电子股份有限公司 Method and device for removing blood vessel splicing image artifacts in magnetic resonance imaging
CN108633312A (en) * 2015-11-18 2018-10-09 光学实验室成像公司 X-ray image feature detects and registration arrangement and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855613B (en) * 2011-07-01 2016-03-02 株式会社东芝 Image processing equipment and method
US20140267267A1 (en) * 2013-03-15 2014-09-18 Toshiba Medical Systems Corporation Stitching of volume data sets
CN104318604A (en) * 2014-10-21 2015-01-28 四川华雁信息产业股份有限公司 3D image stitching method and apparatus
CN107146201A (en) * 2017-05-08 2017-09-08 重庆邮电大学 A kind of image split-joint method based on improvement image co-registration

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102551717A (en) * 2010-12-31 2012-07-11 深圳迈瑞生物医疗电子股份有限公司 Method and device for removing blood vessel splicing image artifacts in magnetic resonance imaging
CN108633312A (en) * 2015-11-18 2018-10-09 光学实验室成像公司 X-ray image feature detects and registration arrangement and method

Also Published As

Publication number Publication date
CN110473143A (en) 2019-11-19
WO2021012520A1 (en) 2021-01-28

Similar Documents

Publication Publication Date Title
CN110473143B (en) Three-dimensional MRA medical image stitching method and device and electronic equipment
CN106056537B (en) A kind of medical image joining method and device
CN111161270B (en) Vascular segmentation method for medical image, computer device and readable storage medium
CN107665486B (en) Automatic splicing method and device applied to X-ray images and terminal equipment
CN110889005B (en) Searching medical reference images
CN107886508B (en) Differential subtraction method and medical image processing method and system
Maier-Hein et al. Towards mobile augmented reality for on-patient visualization of medical images
Weibel et al. Graph based construction of textured large field of view mosaics for bladder cancer diagnosis
CN108648192A (en) A kind of method and device of detection tubercle
US9754366B2 (en) Computer-aided identification of a tissue of interest
US20150003702A1 (en) Processing and displaying a breast image
CN112967291B (en) Image processing method and device, electronic equipment and storage medium
CN100592336C (en) System and method for registration of medical images
CN113034354B (en) Image processing method and device, electronic equipment and readable storage medium
US20180064409A1 (en) Simultaneously displaying medical images
US20210118551A1 (en) Device to enhance and present medical image using corrective mechanism
US20090154782A1 (en) Dual-magnify-glass visualization for soft-copy mammography viewing
KR20200120311A (en) Determination method for stage of cancer based on medical image and analyzing apparatus for medical image
JP2016197377A (en) Computer program for image correction, image correction device, and image correction method
US10832420B2 (en) Dynamic local registration system and method
JP6642048B2 (en) Medical image display system, medical image display program, and medical image display method
JP7114003B1 (en) Medical image display system, medical image display method and program
WO2022113798A1 (en) Medical image display system, medical image display method, and program
CN117522673A (en) Medical image conversion method and system
WO2009072050A1 (en) Automatic landmark placement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant