CN118229555A - Image fusion method, device, equipment and computer readable storage medium

Image fusion method, device, equipment and computer readable storage medium

Info

Publication number
CN118229555A
CN118229555A
Authority
CN
China
Prior art keywords
image
fusion
visible light
component
fluorescence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410641876.8A
Other languages
Chinese (zh)
Inventor
孙楠
伍超杰
周帅骏
甘小方
王圣运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Weigao Group Medical Polymer Co Ltd
Original Assignee
Shandong Weigao Group Medical Polymer Co Ltd
Filing date
Publication date
Application filed by Shandong Weigao Group Medical Polymer Co Ltd filed Critical Shandong Weigao Group Medical Polymer Co Ltd
Publication of CN118229555A

Abstract

The invention discloses an image fusion method, an image fusion apparatus, image fusion equipment and a computer readable storage medium, applied in the technical field of image processing. The method comprises: performing image fusion based on a fluorescence image fusion weighting coefficient and a visible light image fusion weighting coefficient together with a fluorescence approximation component and a visible light approximation component, to obtain a fused approximation component; each fusion weighting coefficient is determined from an intermediate variable, and the intermediate variable is determined from a preset weight matrix, the fluorescence approximation component and the visible light approximation component. An inverse two-dimensional discrete wavelet transform is then performed on the fused approximation component and the visible light detail components to obtain a normalized fused image, which is fused with the visible light maximum value to obtain the target fused image. According to the invention, the fusion weighting coefficients of the fluorescence image and the visible light image are each calculated from the weight matrix and change dynamically with the images, so the image fusion result has good stability and high precision.

Description

Image fusion method, device, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image fusion method, apparatus, device, and computer readable storage medium.
Background
Fluorescence endoscopes are a new type of diagnostic device: human tissue is excited by light of a specific spectrum so that it emits fluorescence at a specific wavelength, which is then received and imaged by a fluorescence sensor. From the intensity of the fluorescence emitted by the tissue in the fluorescence image, medical staff can distinguish normal tissue from abnormal tissue, which helps improve the detection and diagnosis rates of early-stage cancers.
Because a fluorescence endoscope receives only the intensity of the fluorescence and displays it as a grayscale image, it cannot represent the texture and environmental information of the tissue in the field of view and does not match the viewing habits of the human eye. One existing scheme performs no image fusion: the visible light image and the fluorescence image are displayed simultaneously on a display, and medical staff must compare the two images manually, which reduces diagnostic efficiency. Another scheme extracts an effective fluorescence region from the fluorescence image with a fixed threshold and superimposes it onto the visible light image with a fixed coefficient as the weight. Although this method is computationally efficient, the fixed threshold makes the boundary of the fluorescence region in the fused image overly prominent, which interferes with the observation of texture information; and using a fixed coefficient as the weight makes the result sensitive to fluctuations in image intensity, which can lead to misjudgments and reduces the stability of the fused image.
Disclosure of Invention
Accordingly, the present invention provides an image fusion method, apparatus, device and computer readable storage medium to solve the technical problem of poor image fusion quality in the prior art.
In order to solve the technical problems, the invention provides an image fusion method, which comprises the following steps:
Processing the normalized fluorescence image and the normalized visible light image based on the wavelet transform function respectively to obtain a fluorescence approximation component, a visible light approximation component and a plurality of visible light detail components;
Performing image fusion based on the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient, and the fluorescence approximation component and the visible light approximation component to obtain a fusion approximation component; the fusion weighting coefficient is a coefficient determined based on an intermediate variable, wherein the intermediate variable is a variable determined based on a preset weight matrix, the fluorescence approximation component and the visible light approximation component;
And carrying out inverse two-dimensional discrete wavelet transform on the fusion approximate component and the visible light detail component to obtain a normalized fusion image, and carrying out fusion by utilizing the normalized fusion image and the visible light maximum value to obtain a target fusion image.
Optionally, the processing the normalized fluorescent image and the normalized visible light image based on the wavelet transform function to obtain a fluorescent approximate component, a visible light approximate component, and a plurality of visible light detail components includes:
and respectively processing the normalized fluorescence image and the normalized visible light image based on approximately symmetrical tightly-supported orthogonal wavelet functions.
Optionally, before the normalized fluorescence image and the normalized visible light image are processed based on the wavelet transform function to obtain a fluorescence approximation component, a visible light approximation component, and a plurality of visible light detail components, the method further includes:
The method comprises the steps that a fluorescence image acquisition module and a visible light image acquisition module which are arranged on an endoscope are respectively utilized to acquire the fluorescence image and the visible light image;
and respectively carrying out normalization processing on the fluorescence image and the visible light image to obtain the normalized fluorescence image and the normalized visible light image.
Optionally, the acquiring the fluorescent image and the visible light image includes:
and acquiring the fluorescence image and the visible light image of the same imaging light path.
Optionally, before the image fusion is performed on the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient, and the fluorescence approximation component and the visible light approximation component, obtaining a fusion approximation component, the method further includes:
Calculating the intermediate variable T, where T = W ∗ (A_fl ⊙ A_vis); here ∗ denotes two-dimensional convolution, W denotes a preset known weight matrix of 3 rows and 3 columns, and A_fl ⊙ A_vis denotes the element-wise product of the fluorescence approximation component A_fl and the visible light approximation component A_vis;
Determining the fluorescence image fusion weighting coefficient ω_fl and the visible light image fusion weighting coefficient ω_vis based on the intermediate variable T.
Optionally, the performing inverse two-dimensional discrete wavelet transform on the fusion approximation component and the visible light detail component to obtain a normalized fusion image, and fusing the normalized fusion image and a visible light maximum value to obtain a target fusion image, including:
Acquiring the fused approximation component A_F, where A_F = ω_fl ⊙ A_fl + ω_vis ⊙ A_vis; here ⊙ denotes the element-wise product, A_fl denotes the fluorescence approximation component, and A_vis denotes the visible light approximation component;
Performing the inverse two-dimensional discrete wavelet transform on the fused approximation component A_F and the visible light detail components D_h, D_v and D_d (horizontal, vertical and diagonal) to obtain the normalized fused image I_norm;
Performing an element-wise product operation on the normalized fused image and the visible light maximum value to obtain the target fused image I_tgt = I_norm · V_max, where V_max denotes the visible light maximum value.
Optionally, after performing inverse two-dimensional discrete wavelet transformation on the fusion approximation component and the visible light detail component to obtain a normalized fusion image and performing fusion by using the normalized fusion image and a visible light maximum value, obtaining a target fusion image, the method further includes:
And analyzing by using the target fusion image to determine normal tissues and abnormal tissues.
The invention also provides an image fusion device, which comprises:
The wavelet transformation processing module is used for respectively processing the normalized fluorescent image and the normalized visible light image based on a wavelet transformation function to obtain a fluorescent approximate component, a visible light approximate component and a plurality of visible light detail components;
the fusion approximate component determining module is used for carrying out image fusion on the basis of the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient, and the fluorescence approximate component and the visible light approximate component to obtain a fusion approximate component; the fusion weighting coefficient is a coefficient determined based on an intermediate variable, wherein the intermediate variable is a variable determined based on a preset weight matrix, the fluorescence approximation component and the visible light approximation component;
And the image fusion module is used for carrying out inverse two-dimensional discrete wavelet transformation on the fusion approximate component and the visible light detail component to obtain a normalized fusion image, and carrying out fusion by utilizing the normalized fusion image and the visible light maximum value to obtain a target fusion image.
Optionally, the wavelet transformation processing module includes:
and the wavelet transformation unit is used for respectively processing the normalized fluorescent image and the normalized visible light image based on the approximately symmetrical tight-support orthogonal wavelet function.
The invention also provides an electronic device comprising a memory and a processor, wherein:
The memory is used for storing a computer program;
The processor is configured to execute the computer program to implement the steps of the image fusion method as described above.
The invention also provides a computer readable storage medium for storing a computer program, wherein the computer program when executed by a processor implements the steps of the image fusion method as described above.
In the method, the normalized fluorescence image and the normalized visible light image are processed respectively based on a wavelet transform function to obtain a fluorescence approximation component, a visible light approximation component and a plurality of visible light detail components. Image fusion is then performed based on the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient together with the fluorescence approximation component and the visible light approximation component to obtain a fused approximation component, where each fusion weighting coefficient is determined from an intermediate variable and the intermediate variable is determined from a preset weight matrix, the fluorescence approximation component and the visible light approximation component. Finally, an inverse two-dimensional discrete wavelet transform is performed on the fused approximation component and the visible light detail components to obtain a normalized fused image, which is fused with the visible light maximum value to obtain the target fused image. Compared with the current method of extracting an effective fluorescence region from the fluorescence image with a fixed threshold and superimposing it onto the visible light image with a fixed coefficient as the weight, the fusion weighting coefficients of the fluorescence image and the visible light image are each calculated from the weight matrix and change dynamically with the images, so the fusion result has good stability and high precision.
In addition, the invention also provides an image fusion device, equipment and a computer readable storage medium, which also have the beneficial effects.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an image fusion method according to an embodiment of the present invention;
Fig. 2 is a flowchart illustrating an image fusion method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating another image fusion method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an image fusion system according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image fusion apparatus according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device for image fusion according to an embodiment of the present invention;
In fig. 4-6, the reference numerals are as follows:
1-a fluorescent image acquisition module;
2-a visible light image acquisition module;
A 3-fusion module;
4-an endoscope;
A 100-wavelet transformation processing module;
200-fusing the approximate component determination module;
300-an image fusion module;
10-memory;
A 20-processor;
30-a communication interface;
40-communication bus.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fluorescence endoscopes are a new type of diagnostic device: human tissue is excited by light of a specific spectrum so that it emits fluorescence at a specific wavelength, which is then received and imaged by a fluorescence sensor. From the intensity of the fluorescence emitted by the tissue in the fluorescence image, medical staff can distinguish normal tissue from abnormal tissue, which helps improve the detection and diagnosis rates of early-stage cancers. Because a fluorescence endoscope receives only the intensity of the fluorescence and displays it as a grayscale image, it cannot represent the texture and environmental information of the tissue in the field of view and does not match the viewing habits of the human eye. A conventional endoscope irradiates the tissue under examination with visible light, then receives the light reflected by the tissue with a visible light sensor and images it to obtain a color visible light image of the tissue and its surroundings. Because the color of abnormal tissue in the human body is very close to that of its surroundings, it is difficult to identify abnormal tissue quickly and accurately from a conventional visible light image alone.
In the existing scheme without image fusion, the visible light image and the fluorescence image are displayed simultaneously on a display, and medical staff must compare the two images manually, which reduces diagnostic efficiency. In the partial image fusion scheme, an effective fluorescence region is extracted from the fluorescence image with a fixed threshold and then superimposed onto the visible light image with a fixed coefficient as the weight. Although this method is computationally efficient, the fixed threshold makes the boundary of the fluorescence region in the fused image overly prominent, which interferes with the observation of texture information; and using a fixed coefficient as the weight makes the result sensitive to fluctuations in image intensity, which can lead to misjudgments and reduces the stability of the fused image.
Therefore, a visible light and fluorescence image fusion method with high efficiency, high stability and high precision is needed, one that effectively combines the advantages of fluorescence imaging and visible light imaging.
Referring to fig. 1, fig. 1 is a flowchart of an image fusion method according to an embodiment of the present invention. The method may include:
S101, respectively processing the normalized fluorescence image and the normalized visible light image based on the wavelet transformation function to obtain a fluorescence approximation component, a visible light approximation component and a plurality of visible light detail components.
The execution subject of this embodiment is an electronic device. The embodiment is not limited to a specific wavelet transform function. For example, the wavelet transform function may be a Symlet wavelet, a class of approximately symmetric, compactly supported orthogonal wavelets with a specific support range and number of vanishing moments that maintain good regularity; Symlet wavelets are usually denoted symN, where N is the order of the wavelet. Alternatively, the wavelet transform function may be a Coiflet wavelet basis (coif), which has compact support and good frequency localization and may outperform Daubechies wavelets in some applications; common Coiflet bases include coif1, coif2 and coif3. The wavelet transform function may also be a biorthogonal wavelet basis (bior), a set of paired wavelet basis functions with variable support lengths and frequency responses. In this embodiment, processing the normalized fluorescence image with the wavelet transform function yields the fluorescence approximation component, and processing the normalized visible light image yields the visible light approximation component and the plurality of visible light detail components.
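To make the decomposition step concrete, the sketch below performs a single-level two-dimensional discrete wavelet transform in plain NumPy. The Haar wavelet is used only for brevity; the embodiment prefers approximately symmetric, compactly supported orthogonal wavelets such as symN (with the PyWavelets library this would be pywt.dwt2(img, 'sym4')), and the function name dwt2_haar is our own.

```python
import numpy as np

def dwt2_haar(img):
    """Single-level 2-D discrete wavelet transform (Haar, even-sized input).

    Returns the approximation component and the three detail components
    (horizontal, vertical, diagonal). Haar stands in for the Symlet
    family preferred by the embodiment.
    """
    a = img[0::2, 0::2].astype(np.float64)  # even rows, even cols
    b = img[0::2, 1::2].astype(np.float64)  # even rows, odd cols
    c = img[1::2, 0::2].astype(np.float64)  # odd rows, even cols
    d = img[1::2, 1::2].astype(np.float64)  # odd rows, odd cols
    approx = (a + b + c + d) / 2.0          # low-frequency approximation
    detail_h = (a - b + c - d) / 2.0        # horizontal detail
    detail_v = (a + b - c - d) / 2.0        # vertical detail
    detail_d = (a - b - c + d) / 2.0        # diagonal detail
    return approx, (detail_h, detail_v, detail_d)
```

Applied to the normalized fluorescence image this yields its approximation component; applied to the normalized visible light image it yields the visible light approximation component and the three detail components.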
It should be further noted that, to improve the effect of the image processing, processing the normalized fluorescence image and the normalized visible light image based on the wavelet transform function to obtain the fluorescence approximation component, the visible light approximation component and the plurality of visible light detail components may include: processing the normalized fluorescence image and the normalized visible light image respectively based on an approximately symmetric, compactly supported orthogonal wavelet function. Using such a wavelet, for example a Symlet wavelet (symN), offers the following advantages. Frequency localization: compactly supported orthogonal wavelets have good frequency localization and can effectively separate the high-frequency and low-frequency components of an image, which helps extract image details and texture features. Edge detection: approximately symmetric wavelet functions capture edge information better because of their smoothness in the spatial domain, which is particularly useful for identifying object boundaries in fluorescence and visible light images. Denoising ability: a compactly supported wavelet transform concentrates the noise components of an image in the high-frequency region, so noise can be removed by thresholding or similar methods while the main features of the image are retained. Reduced computational complexity: because compactly supported wavelets have a limited support range, they have only a limited number of non-zero coefficients in the spatial domain, which reduces computational complexity and increases processing speed.
In summary, processing the normalized fluorescence image and the normalized visible light image with an approximately symmetric, compactly supported orthogonal wavelet function enables effective denoising, edge detection and image compression while preserving the important characteristics of the image, improving both the efficiency and the effect of the image processing.
It should be further noted that, to improve image quality, before the normalized fluorescence image and the normalized visible light image are processed based on the wavelet transform function to obtain the fluorescence approximation component, the visible light approximation component and the plurality of visible light detail components, the method may further include: acquiring the fluorescence image and the visible light image with the fluorescence image acquisition module and the visible light image acquisition module arranged on the endoscope, respectively; and normalizing the fluorescence image and the visible light image respectively to obtain the normalized fluorescence image and the normalized visible light image. The normalization improves the visual effect and the quality of the images.
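As an illustration of the normalization step, a common choice is a linear min–max scaling to [0, 1]; the embodiment does not fix the exact formula, so this particular mapping is an assumption:

```python
import numpy as np

def normalize(img, eps=1e-12):
    """Min-max normalize an image to the range [0, 1].

    This linear scaling is one common normalization; the patent does
    not specify the exact formula, so it is used here for illustration.
    """
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + eps)  # eps guards against a flat image
```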
It should be further noted that, to improve the efficiency of the image fusion, acquiring the fluorescence image and the visible light image may include: acquiring the fluorescence image and the visible light image through the same imaging light path. Using the same imaging light path ensures that the fluorescence image and the visible light image are perfectly aligned spatially, which is crucial for the subsequent image fusion and analysis and eliminates the errors that an additional registration step could introduce. Moreover, different imaging paths may introduce inconsistent artifacts due to optical distortion; sharing one light path reduces these artifacts and preserves the fidelity of both image types.
S102, performing image fusion based on a fluorescence image fusion weighting coefficient and a visible light image fusion weighting coefficient, and a fluorescence approximation component and a visible light approximation component to obtain a fusion approximation component; the fusion weighting coefficient is a coefficient determined based on an intermediate variable, and the intermediate variable is a variable determined based on a preset weight matrix, a fluorescence approximation component and a visible light approximation component.
In this embodiment, the fusion weighting coefficients for the fluorescence image and the visible light image are each calculated from the preset weight matrix. Because the fluorescence approximation component and the visible light approximation component change with the input images, the fusion weighting coefficients in the invention change dynamically with the images rather than being fixed.
It should be further noted that, in order to improve the accuracy of the fusion weighting coefficient calculation and the image fusion effect, before the image fusion based on the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient together with the fluorescence approximation component and the visible light approximation component, the method may further include:
S1021, calculating the intermediate variable T, where T = W ∗ (A_fl ⊙ A_vis); here ∗ denotes two-dimensional convolution, W denotes a preset known weight matrix of 3 rows and 3 columns, and A_fl ⊙ A_vis denotes the element-wise product of the fluorescence approximation component A_fl and the visible light approximation component A_vis;
S1022, determining the fluorescence image fusion weighting coefficient ω_fl and the visible light image fusion weighting coefficient ω_vis based on the intermediate variable T.
In this embodiment, the intermediate variable is determined by a two-dimensional convolution, and the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient are then determined from the intermediate variable. Because the convolution operation effectively extracts important features from the image, the image fusion effect is improved.
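The sketch below illustrates this computation with NumPy only. The structure follows the description above (a 2-D convolution of the element-wise product of the two approximation components, then complementary weights), but the 3×3 averaging kernel and the clipping-based mapping from the intermediate variable T to the weighting coefficients are assumptions for illustration: the patent's explicit coefficient formulas were given as expressions in T that are not reproduced here.

```python
import numpy as np

def conv2_same(x, w):
    """3x3 'same' 2-D convolution with zero padding.

    Implemented as correlation; for the symmetric kernel used below,
    correlation and true (kernel-flipped) convolution coincide.
    """
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(3):
        for j in range(3):
            out += w[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def fusion_weights(a_fl, a_vis):
    """Compute dynamic fusion weights from the two approximation components."""
    W = np.full((3, 3), 1.0 / 9.0)   # preset 3x3 weight matrix (assumed averaging)
    t = conv2_same(a_fl * a_vis, W)  # intermediate variable: conv of element product
    w_fl = np.clip(t, 0.0, 1.0)      # hypothetical mapping T -> weight; the patent's
    w_vis = 1.0 - w_fl               # exact formulas are not reproduced here
    return w_fl, w_vis
```

Because the weights depend on the local image content through T, they vary from pixel to pixel and from frame to frame, unlike a fixed superposition coefficient.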
S103, carrying out inverse two-dimensional discrete wavelet transformation on the fusion approximate component and the visible light detail component to obtain a normalized fusion image, and carrying out fusion by utilizing the normalized fusion image and the visible light maximum value to obtain a target fusion image.
The visible light maximum value in this embodiment is the maximum pixel value in the visible light image. It will be appreciated that a frame of a digital image consists of discrete pixels, each represented by a numerical value; the maximum value of a frame is the largest of these pixel values.
It should be further noted that, in order to improve the feasibility of image fusion, the above-mentioned performing inverse two-dimensional discrete wavelet transformation on the fusion approximation component and the visible light detail component to obtain a normalized fusion image, and fusing the normalized fusion image with the visible light maximum value to obtain a target fusion image may include:
S1031, obtaining the fused approximation component A_F, where A_F = ω_fl ⊙ A_fl + ω_vis ⊙ A_vis; here ⊙ denotes the element-wise product, A_fl denotes the fluorescence approximation component, and A_vis denotes the visible light approximation component;
S1032, performing the inverse two-dimensional discrete wavelet transform on the fused approximation component A_F and the visible light detail components D_h, D_v and D_d to obtain the normalized fused image I_norm;
S1033, performing an element-wise product operation on the normalized fused image and the visible light maximum value to obtain the target fused image I_tgt = I_norm · V_max, where V_max denotes the visible light maximum value.
In this embodiment, the target fused image is obtained through an element-wise product of the normalized fused image and the visible light maximum value, which makes the image fusion practical to implement.
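A minimal sketch of this reconstruction step, again using the Haar inverse transform in place of the Symlet family for brevity (function names and the Haar choice are our own; PyWavelets' pywt.idwt2 would be the library counterpart):

```python
import numpy as np

def idwt2_haar(approx, details):
    """Inverse single-level 2-D Haar DWT (counterpart of the forward step)."""
    dh, dv, dd = details
    h, w = approx.shape
    out = np.empty((2 * h, 2 * w), dtype=np.float64)
    out[0::2, 0::2] = (approx + dh + dv + dd) / 2.0  # even rows, even cols
    out[0::2, 1::2] = (approx - dh + dv - dd) / 2.0  # even rows, odd cols
    out[1::2, 0::2] = (approx + dh - dv - dd) / 2.0  # odd rows, even cols
    out[1::2, 1::2] = (approx - dh - dv + dd) / 2.0  # odd rows, odd cols
    return out

def fuse(a_fused, vis_details, vis_img):
    """Inverse transform of fused approximation + visible details,
    then rescale by the visible-light maximum pixel value."""
    norm_fused = idwt2_haar(a_fused, vis_details)  # normalized fused image
    return norm_fused * vis_img.max()              # element product with V_max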
It should be further noted that, to improve the applicability of the image fusion method, after the inverse two-dimensional discrete wavelet transform is performed on the fused approximation component and the visible light detail components to obtain the normalized fused image and the normalized fused image is fused with the visible light maximum value to obtain the target fused image, the method may further include: analyzing the target fused image to determine normal tissue and abnormal tissue. Because the target fused image can be analyzed to identify normal and abnormal tissue, it can be used for abnormal-tissue detection, which broadens the applicability of the image fusion method. It will be appreciated that this embodiment may use labelled data sets (containing images of normal and abnormal tissue) to train a classification model, such as a convolutional neural network (CNN). Segmentation and classification: the trained model segments and classifies new endoscope images, identifying regions of normal and abnormal tissue. Post-processing: post-processing such as morphological operations is applied to the classification result to remove noise and clearly delineate the boundaries of the abnormal regions. Result evaluation: the classification result is evaluated, the accuracy of the model is checked, and adjustments are made as needed. Clinical decision support: the analysis result serves as part of an assisted diagnosis, helping the physician make more accurate clinical decisions.
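As an example of the morphological post-processing mentioned above, the sketch below removes isolated false positives from a binary abnormal-tissue mask by a 3×3 opening (erosion followed by dilation). This is an illustrative cleanup step, not a procedure specified by the patent; in practice scipy.ndimage.binary_opening provides the same operation.

```python
import numpy as np

def binary_opening(mask):
    """Morphological opening with a 3x3 square structuring element.

    Erosion removes foreground pixels that lack a full 3x3 neighbourhood
    (e.g. isolated speckle noise); dilation restores surviving regions.
    """
    def erode(m):
        p = np.pad(m, 1, constant_values=False)
        out = np.ones_like(m, dtype=bool)
        for i in range(3):
            for j in range(3):
                out &= p[i:i + m.shape[0], j:j + m.shape[1]]
        return out

    def dilate(m):
        p = np.pad(m, 1, constant_values=False)
        out = np.zeros_like(m, dtype=bool)
        for i in range(3):
            for j in range(3):
                out |= p[i:i + m.shape[0], j:j + m.shape[1]]
        return out

    return dilate(erode(mask))
```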
The image fusion method provided by the embodiment of the present invention may include: S101, processing the normalized fluorescence image and the normalized visible light image respectively based on a wavelet transform function to obtain a fluorescence approximation component, a visible light approximation component and a plurality of visible light detail components; S102, performing image fusion based on the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient together with the fluorescence approximation component and the visible light approximation component to obtain a fused approximation component, where each fusion weighting coefficient is determined from an intermediate variable and the intermediate variable is determined from a preset weight matrix, the fluorescence approximation component and the visible light approximation component; S103, performing an inverse two-dimensional discrete wavelet transform on the fused approximation component and the visible light detail components to obtain a normalized fused image, and fusing the normalized fused image with the visible light maximum value to obtain the target fused image. Compared with the current method of extracting an effective fluorescence region from the fluorescence image with a fixed threshold and superimposing it onto the visible light image with a fixed coefficient as the weight, the method calculates the fusion weighting coefficients of the fluorescence image and the visible light image from the weight matrix; because these coefficients change dynamically with the images, the fusion result has good stability and high precision.
Moreover, processing the normalized fluorescence image and the normalized visible light image with an approximately symmetric, compactly supported orthogonal wavelet function enables effective denoising, edge detection and image compression while preserving the important features of the images, improving the efficiency and effect of image processing. In addition, the normalization processing improves the visual effect and quality of the images. Furthermore, this embodiment determines the intermediate variable based on a two-dimensional convolution and then determines the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient from that intermediate variable; since the convolution operation can effectively extract important features from the images, the image fusion effect can be improved. In addition, this embodiment obtains the target fusion image by taking the element product of the normalized fused image and the visible light maximum value, providing a concrete implementation scheme for the image fusion. Finally, this embodiment can analyze the target fusion image to determine normal tissue and abnormal tissue, so that the target fusion image can be used for detecting abnormal tissue, improving the applicability of the image fusion method.
In order to facilitate understanding of the present invention, referring to fig. 2 in detail, fig. 2 is a flowchart illustrating an image fusion method according to an embodiment of the present invention, which may specifically include:
S201, carrying out normalization processing on the fluorescence image to obtain a normalized fluorescence image, and processing the normalized fluorescence image based on a first wavelet transformation function to obtain a fluorescence approximation component.
S202, carrying out normalization processing on the visible light image to obtain a normalized visible light image, determining the maximum value in the visible light image to obtain a visible light maximum value, and processing the normalized visible light image based on a second wavelet transformation function to obtain a visible light approximate component and a plurality of visible light detail components.
S203, performing image fusion based on the fluorescent image fusion weighting coefficient and the visible light image fusion weighting coefficient, and the fluorescent approximation component and the visible light approximation component to obtain a fusion approximation component; the fused weighting coefficients are coefficients determined based on the weight matrix, the fluorescent approximation component, and the visible approximation component.
S204, performing inverse two-dimensional discrete wavelet transformation on the fusion approximate component and the visible light detail component to obtain a normalized fusion image, and fusing the normalized fusion image with the visible light maximum value to obtain a target fusion image.
According to the embodiment of the invention, different fluorescence images and visible light images are processed separately, and the fusion weighting coefficients of the fluorescence image and the visible light image are each calculated based on the weight matrix. Because the fluorescence image and the visible light image differ, the fluorescence approximation component, the visible light approximation component and the plurality of visible light detail components also vary, so the fusion weighting coefficients of the two images change accordingly; the image fusion stability is therefore good and the image fusion precision is high.
For better understanding of the present invention, please refer to fig. 3, fig. 3 is a flowchart illustrating another image fusion method according to an embodiment of the present invention, which may include:
Aiming at the clinical diagnosis requirements of fluorescence and visible light bimodal endoscope images, the invention realizes an efficient, highly stable and high-precision method and system for fusing visible light images and fluorescence images through image processing. Fig. 4 is a schematic structural diagram of an image fusion system according to an embodiment of the present invention. As shown in fig. 4, the visible light image and fluorescence image fusion system comprises at least a fluorescence image acquisition module 1, a visible light image acquisition module 2 and a fusion module 3. The fluorescence image acquisition module 1 is mounted on the endoscope 4 for acquiring fluorescence images, and the visible light image acquisition module 2 is mounted on the endoscope 4 for acquiring visible light images. The fluorescence image acquisition module 1 and the visible light image acquisition module 2 share one imaging light path, so their fields of view are the same. The fusion module 3 is connected to the fluorescence image acquisition module 1 and the visible light image acquisition module 2 in a wired manner, and obtains the fluorescence image and the visible light image from them respectively.
The automatic fusion of the fluorescence image and the visible light image of the endoscope comprises the following specific steps:
Step 1, the fusion module controls the fluorescence image acquisition module to acquire one frame of fluorescence image F(t), and controls the visible light image acquisition module to acquire one frame of visible light image V(t), where t denotes the acquisition time point.
Step 2, the fusion module normalizes the fluorescence image F(t) to obtain the normalized fluorescence image, and performs a two-dimensional discrete wavelet transform on it based on a Symlet wavelet to obtain the fluorescence approximation component A_F and the fluorescence detail components H_F, V_F and D_F. The fusion module likewise normalizes the visible light image V(t) to obtain the normalized visible light image, records the maximum value V_max of the visible light image, and then performs a two-dimensional discrete wavelet transform on the normalized visible light image based on a Symlet wavelet to obtain the visible light approximation component A_V and the visible light detail components H_V, V_V and D_V.
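Step 2 can be sketched as follows. For brevity the sketch substitutes the Haar wavelet for the Symlet wavelet named in the patent (in practice a library such as PyWavelets provides Symlet filters), and the min-max normalization is an assumption, since the exact normalization scheme is not spelled out in this text:

```python
import numpy as np

def normalize(img):
    """Min-max normalize an image to [0, 1] (assumed normalization scheme)."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def haar_dwt2(x):
    """One level of the 2-D discrete wavelet transform with the Haar wavelet,
    returning the approximation A and three detail components H, V, D.
    (Haar stands in for the patent's Symlet so the sketch is dependency-free;
    the image is assumed to have even height and width.)"""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    A = (a + b + c + d) / 2
    H = (a - b + c - d) / 2
    V = (a + b - c - d) / 2
    D = (a - b - c + d) / 2
    return A, H, V, D

def haar_idwt2(A, H, V, D):
    """Inverse of haar_dwt2; the Haar transform gives perfect reconstruction."""
    rows, cols = A.shape
    x = np.empty((2 * rows, 2 * cols))
    x[0::2, 0::2] = (A + H + V + D) / 2
    x[0::2, 1::2] = (A - H + V - D) / 2
    x[1::2, 0::2] = (A + H - V - D) / 2
    x[1::2, 1::2] = (A - H - V + D) / 2
    return x
```

The inverse transform is included here because step 5 below uses it to rebuild the fused image from the fused approximation and detail components.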
Step 3, calculating the fluorescence image fusion weighting coefficient α_F and the visible light image fusion weighting coefficient α_V.
This embodiment first calculates the intermediate variable k = conv2(w, A_F ⊙ A_V), where conv2 denotes a two-dimensional convolution calculation, w denotes a preset known weight matrix of 3 rows and 3 columns, and A_F ⊙ A_V denotes the element product of the normalized fluorescence image approximation component A_F and the normalized visible light image approximation component A_V. The element product ⊙ multiplies the elements at each corresponding position in the two matrices; the two approximation component matrices have the same dimensions, which the product retains.
Finally, the fluorescence image fusion weighting coefficient α_F and the visible light image fusion weighting coefficient α_V can be obtained from the intermediate variable k.
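The intermediate-variable computation can be sketched as follows. The convolution of the preset 3x3 weight matrix with the element product of the two approximation components follows the description above; however, the closed forms mapping k to the two coefficients appear only as images in this text, so the logistic mapping used here (with the two per-pixel coefficients summing to one) is an assumption, not the patent's formula:

```python
import numpy as np

def conv2_same(image, kernel):
    """'Same'-size 2-D convolution with a 3x3 kernel and zero padding."""
    k = np.flipud(np.fliplr(kernel))   # true convolution flips the kernel
    p = np.pad(image, 1)
    out = np.zeros_like(image, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + image.shape[0], j:j + image.shape[1]]
    return out

def fusion_weights(A_F, A_V, w):
    """Per-pixel fusion weights. The intermediate variable k is the 2-D
    convolution of the preset 3x3 weight matrix w with the element product
    A_F * A_V, as described in the text. The logistic map from k to the
    coefficients below is an ASSUMED form (alpha_F + alpha_V = 1)."""
    k = conv2_same(A_F * A_V, w)
    alpha_F = 1.0 / (1.0 + np.exp(-k))   # assumed mapping
    alpha_V = 1.0 - alpha_F
    return alpha_F, alpha_V
```

Because k is recomputed from the current frame's approximation components, the two coefficient maps vary per pixel and per frame, which is the dynamic behavior the embodiment attributes to the method.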
Step 4, fusion is performed according to the fluorescence image fusion weighting coefficient α_F and the visible light image fusion weighting coefficient α_V to obtain the fused approximation component A = α_F ⊙ A_F + α_V ⊙ A_V, where ⊙ denotes the element product, A_F denotes the fluorescence approximation component and A_V denotes the visible light approximation component.
Step 5, the fused approximation component A and the visible light image detail components H_V, V_V and D_V (horizontal, vertical and diagonal) are subjected to an inverse two-dimensional discrete wavelet transform to obtain the normalized fused image; the target fused image is finally obtained as the element product of the normalized fused image and the visible light maximum value V_max, completing the fusion process of the fluorescence image and the visible light image.
Step 6, repeating steps 1 to 5 until image acquisition stops.
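Steps 4 and 5 above can be sketched as follows, with the Haar inverse transform standing in for the patent's Symlet inverse transform so the sketch stays self-contained; the constant components and equal weights in the toy example are illustrative only:

```python
import numpy as np

def haar_idwt2(A, H, V, D):
    """Inverse one-level 2-D Haar DWT (a stand-in for the Symlet inverse
    transform used in the patent)."""
    rows, cols = A.shape
    x = np.empty((2 * rows, 2 * cols))
    x[0::2, 0::2] = (A + H + V + D) / 2
    x[0::2, 1::2] = (A - H + V - D) / 2
    x[1::2, 0::2] = (A + H - V - D) / 2
    x[1::2, 1::2] = (A - H - V + D) / 2
    return x

def fuse(A_F, A_V, alpha_F, alpha_V, H_V, V_V, D_V, v_max):
    """Steps 4-5: element-product-weighted fusion of the approximation
    components, inverse transform with the visible light detail components,
    then rescaling by the recorded visible light maximum."""
    A_fused = alpha_F * A_F + alpha_V * A_V       # element products
    g_norm = haar_idwt2(A_fused, H_V, V_V, D_V)   # normalized fused image
    return v_max * g_norm                         # target fused image

# Toy example: constant components and equal weights.
A_F = np.full((2, 2), 1.0)
A_V = np.full((2, 2), 2.0)
zeros = np.zeros((2, 2))
out = fuse(A_F, A_V, 0.5, 0.5, zeros, zeros, zeros, v_max=255.0)
```

Multiplying by v_max at the end undoes the earlier normalization, so the target fused image is returned on the scale of the original visible light image.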
The embodiment of the invention realizes the automatic fusion of the fluorescent image and the visible light image based on the two-dimensional discrete wavelet transformation, and has high fusion efficiency; and the fusion weighting coefficients of the fluorescent image and the visible light image are calculated based on the weight matrix respectively, so that the fusion result has good stability and high precision.
The following describes an image fusion apparatus provided in an embodiment of the present invention, and the image fusion apparatus described below and the image fusion method described above may be referred to correspondingly.
Referring to fig. 5 specifically, fig. 5 is a schematic structural diagram of an image fusion apparatus according to an embodiment of the present invention, which may include:
the wavelet transformation processing module 100 is configured to process the normalized fluorescence image and the normalized visible light image based on the wavelet transformation function, respectively, to obtain a fluorescence approximation component, a visible light approximation component, and a plurality of visible light detail components;
The fusion approximate component determining module 200 is configured to perform image fusion based on a fluorescence image fusion weighting coefficient and a visible light image fusion weighting coefficient, and the fluorescence approximate component and the visible light approximate component to obtain a fusion approximate component; the fusion weighting coefficient is a coefficient determined based on an intermediate variable, wherein the intermediate variable is a variable determined based on a preset weight matrix, the fluorescence approximation component and the visible light approximation component;
and the image fusion module 300 is used for carrying out inverse two-dimensional discrete wavelet transform on the fusion approximate component and the visible light detail component to obtain a normalized fusion image, and carrying out fusion by utilizing the normalized fusion image and the visible light maximum value to obtain a target fusion image.
Further, based on the above embodiment, the above wavelet transform processing module 100 may include:
and the wavelet transformation unit is used for respectively processing the normalized fluorescent image and the normalized visible light image based on the approximately symmetrical tight-support orthogonal wavelet function.
Further, based on the above embodiment, the above image fusion apparatus may further include:
an image acquisition module for acquiring the fluorescence image and the visible light image by using, respectively, the fluorescence image acquisition module and the visible light image acquisition module which are mounted on an endoscope;
And the normalization processing module is used for respectively carrying out normalization processing on the fluorescent image and the visible light image to obtain the normalized fluorescent image and the normalized visible light image.
Further, based on the above embodiment, the above fluorescence image and visible light image acquisition module may include:
and the fluorescence image and visible light image acquisition unit is used for acquiring the fluorescence image and the visible light image of the same imaging light path.
Further, based on any of the above embodiments, the image fusion apparatus may further include:
An intermediate variable determining module for calculating the intermediate variable; wherein the intermediate variable k = conv2(w, A_F ⊙ A_V), conv2 representing a two-dimensional convolution calculation, w representing a preset known weight matrix of 3 rows and 3 columns, and A_F ⊙ A_V representing the element product of the fluorescence approximation component A_F and the visible light approximation component A_V;
the image fusion weighting coefficient determining module is used for determining the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient based on the intermediate variable.
Further, based on the above embodiment, the image fusion module 300 may include:
A fusion approximation component acquisition unit configured to acquire the fused approximation component A = α_F ⊙ A_F + α_V ⊙ A_V, where ⊙ represents the element product, α_F and α_V represent the fluorescence image and visible light image fusion weighting coefficients, and A_F and A_V represent the fluorescence approximation component and the visible light approximation component respectively;
a normalized fused image calculation unit for performing the inverse two-dimensional discrete wavelet transform on the fused approximation component and the visible light detail components to obtain the normalized fused image;
a target fusion image calculation unit, configured to perform an element product operation using the normalized fused image and the visible light maximum value to obtain the target fusion image.
Further, based on any of the above embodiments, the image fusion apparatus may further include:
and the analysis module is used for analyzing by using the target fusion image and determining normal tissues and abnormal tissues.
It should be noted that the order of the modules and units in the image fusion apparatus may be changed without affecting the logic.
The image fusion device provided by the embodiment of the invention may comprise: the wavelet transformation processing module 100, configured to process the normalized fluorescence image and the normalized visible light image respectively based on the wavelet transform function to obtain a fluorescence approximation component, a visible light approximation component and a plurality of visible light detail components; the fusion approximation component determining module 200, configured to perform image fusion based on the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient, together with the fluorescence approximation component and the visible light approximation component, to obtain a fused approximation component, the fusion weighting coefficients being determined from an intermediate variable, and the intermediate variable being determined from a preset weight matrix, the fluorescence approximation component and the visible light approximation component; and the image fusion module 300, used for performing an inverse two-dimensional discrete wavelet transform on the fused approximation component and the visible light detail components to obtain a normalized fused image, and fusing the normalized fused image with the visible light maximum value to obtain a target fusion image. Compared with the current practice of extracting the effective fluorescence area by applying a fixed threshold to the fluorescence image and superimposing it onto the visible light image with a fixed coefficient as the weight, the device provided by the invention calculates the fusion weighting coefficients of the fluorescence image and the visible light image separately based on the weight matrix; because these coefficients change dynamically, the fusion result has good stability and high precision.
Moreover, processing the normalized fluorescence image and the normalized visible light image with an approximately symmetric, compactly supported orthogonal wavelet function enables effective denoising, edge detection and image compression while preserving the important features of the images, improving the efficiency and effect of image processing. In addition, the normalization processing improves the visual effect and quality of the images. Furthermore, this embodiment determines the intermediate variable based on a two-dimensional convolution and then determines the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient from that intermediate variable; since the convolution operation can effectively extract important features from the images, the image fusion effect can be improved. In addition, this embodiment obtains the target fusion image by taking the element product of the normalized fused image and the visible light maximum value, providing a concrete implementation scheme for the image fusion. Finally, this embodiment can analyze the target fusion image to determine normal tissue and abnormal tissue, so that the target fusion image can be used for detecting abnormal tissue, improving the applicability of the image fusion method.
The following describes an image fusion apparatus provided in an embodiment of the present invention, where the image fusion apparatus described below and the image fusion method described above may be referred to correspondingly.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image fusion apparatus according to an embodiment of the present invention, which may include:
A memory 10 for storing a computer program;
A processor 20 for executing a computer program to implement the steps of the image fusion method described above.
The memory 10, the processor 20, and the communication interface 30 all communicate with each other via a communication bus 40.
In the embodiment of the present invention, the memory 10 is used for storing one or more programs, the programs may include program codes, the program codes include computer operation instructions, and in the embodiment of the present invention, the memory 10 may store programs for implementing the following functions:
Processing the normalized fluorescence image and the normalized visible light image based on the wavelet transform function respectively to obtain a fluorescence approximation component, a visible light approximation component and a plurality of visible light detail components;
Based on the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient, and the fluorescence approximation component and the visible light approximation component, carrying out image fusion to obtain a fusion approximation component; the fusion weighting coefficient is a coefficient determined based on an intermediate variable, and the intermediate variable is a variable determined based on a preset weight matrix, a fluorescence approximation component and a visible light approximation component;
and carrying out inverse two-dimensional discrete wavelet transform on the fusion approximate component and the visible light detail component to obtain a normalized fusion image, and carrying out fusion by utilizing the normalized fusion image and the visible light maximum value to obtain a target fusion image.
In one possible implementation, the memory 10 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, and at least one application program required for functions, etc.; the storage data area may store data created during use.
In addition, memory 10 may include read only memory and random access memory and provide instructions and data to the processor. A portion of the memory may also include NVRAM. The memory stores an operating system and operating instructions, executable modules or data structures, or a subset thereof, or an extended set thereof, where the operating instructions may include various operating instructions for performing various operations. The operating system may include various system programs for implementing various basic tasks as well as handling hardware-based tasks.
The processor 20 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA) or other programmable logic device; the processor 20 may also be a microprocessor or any conventional processor. The processor 20 may call a program stored in the memory 10.
The communication interface 30 may be an interface of a communication module for connecting with other devices or systems.
Of course, it should be noted that the structure shown in fig. 6 does not limit the image fusion device in the embodiment of the present invention; in practical applications, the image fusion device may include more or fewer components than those shown in fig. 6, or combine certain components.
The following describes a computer readable storage medium provided in an embodiment of the present invention, where the computer readable storage medium described below and the image fusion method described above may be referred to correspondingly.
The present invention also provides a computer readable storage medium having a computer program stored thereon, which when executed by a processor, implements the steps of the image fusion method described above.
The computer readable storage medium may include: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In this specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the others; for identical or similar parts, the embodiments may be cross-referenced. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Finally, it is further noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The image fusion method, apparatus, device and computer readable storage medium provided by the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and embodiments of the present invention, and the above description of the embodiments is intended only to help understand the method of the present invention and its core ideas. Meanwhile, since those skilled in the art may make variations to the specific embodiments and application scope in accordance with the ideas of the present invention, this description should not be construed as limiting the present invention.

Claims (10)

1. An image fusion method, comprising:
Processing the normalized fluorescence image and the normalized visible light image based on the wavelet transform function respectively to obtain a fluorescence approximation component, a visible light approximation component and a plurality of visible light detail components;
Performing image fusion based on the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient, and the fluorescence approximation component and the visible light approximation component to obtain a fusion approximation component; the fusion weighting coefficient is a coefficient determined based on an intermediate variable, wherein the intermediate variable is a variable determined based on a preset weight matrix, the fluorescence approximation component and the visible light approximation component;
And carrying out inverse two-dimensional discrete wavelet transform on the fusion approximate component and the visible light detail component to obtain a normalized fusion image, and carrying out fusion by utilizing the normalized fusion image and the visible light maximum value to obtain a target fusion image.
2. The image fusion method according to claim 1, wherein the processing the normalized fluorescence image and the normalized visible light image based on the wavelet transform function to obtain a fluorescence approximation component, a visible light approximation component, and a plurality of visible light detail components, respectively, includes:
and respectively processing the normalized fluorescence image and the normalized visible light image based on approximately symmetrical tightly-supported orthogonal wavelet functions.
3. The image fusion method of claim 1, further comprising, before processing the normalized fluorescence image and the normalized visible light image based on the wavelet transform function to obtain a fluorescence approximation component, a visible light approximation component, and a plurality of visible light detail components, respectively:
The method comprises the steps that a fluorescence image acquisition module and a visible light image acquisition module which are arranged on an endoscope are respectively utilized to acquire the fluorescence image and the visible light image;
and respectively carrying out normalization processing on the fluorescence image and the visible light image to obtain the normalized fluorescence image and the normalized visible light image.
4. The image fusion method of claim 3, wherein acquiring the fluorescence image and the visible light image comprises:
and acquiring the fluorescence image and the visible light image of the same imaging light path.
5. The image fusion method according to any one of claims 1 to 4, characterized by further comprising, before the image fusion based on the fluorescent image fusion weighting coefficient and the visible light image fusion weighting coefficient, and the fluorescent approximation component and the visible light approximation component, obtaining a fusion approximation component:
Calculating the intermediate variable; wherein the intermediate variable k = conv2(w, A_F ⊙ A_V), conv2 representing a two-dimensional convolution calculation, w representing a preset known weight matrix of 3 rows and 3 columns, and A_F ⊙ A_V representing the element product of the fluorescence approximation component A_F and the visible light approximation component A_V;
Determining the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient based on the intermediate variable.
6. The method of image fusion according to claim 5, wherein the performing inverse two-dimensional discrete wavelet transform on the fusion approximation component and the visible light detail component to obtain a normalized fusion image, and performing fusion with the normalized fusion image and a visible light maximum value to obtain a target fusion image, comprises:
Acquiring the fused approximation component; wherein the fused approximation component is the element-product-weighted sum of the fluorescence approximation component and the visible light approximation component, the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient serving as the respective weights;
performing the inverse two-dimensional discrete wavelet transform on the fused approximation component and the visible light detail components to obtain the normalized fused image;
performing an element product operation using the normalized fused image and the visible light maximum value to obtain the target fusion image.
7. The image fusion method according to claim 1, wherein after performing inverse two-dimensional discrete wavelet transform on the fusion approximation component and the visible light detail component to obtain a normalized fusion image, and performing fusion by using the normalized fusion image and a visible light maximum value, obtaining a target fusion image, further comprising:
And analyzing by using the target fusion image to determine normal tissues and abnormal tissues.
8. An image fusion apparatus, comprising:
The wavelet transformation processing module is used for respectively processing the normalized fluorescent image and the normalized visible light image based on a wavelet transformation function to obtain a fluorescent approximate component, a visible light approximate component and a plurality of visible light detail components;
the fusion approximate component determining module is used for carrying out image fusion on the basis of the fluorescence image fusion weighting coefficient and the visible light image fusion weighting coefficient, and the fluorescence approximate component and the visible light approximate component to obtain a fusion approximate component; the fusion weighting coefficient is a coefficient determined based on an intermediate variable, wherein the intermediate variable is a variable determined based on a preset weight matrix, the fluorescence approximation component and the visible light approximation component;
And the image fusion module is used for carrying out inverse two-dimensional discrete wavelet transformation on the fusion approximate component and the visible light detail component to obtain a normalized fusion image, and carrying out fusion by utilizing the normalized fusion image and the visible light maximum value to obtain a target fusion image.
9. An electronic device comprising a memory and a processor, wherein:
The memory is used for storing a computer program;
The processor for executing the computer program to implement the steps of the image fusion method according to any one of claims 1 to 7.
10. A computer-readable storage medium for storing a computer program, wherein the computer program when executed by a processor implements the steps of the image fusion method according to any one of claims 1 to 7.
CN202410641876.8A 2024-05-23 Image fusion method, device, equipment and computer readable storage medium Pending CN118229555A (en)

Publications (1)

Publication Number Publication Date
CN118229555A 2024-06-21
