CN115641487A - Neutron and X-ray based multi-stage judgment fusion method and system - Google Patents


Publication number
CN115641487A
Authority
CN
China
Prior art keywords
fusion
image
neutron
ray
level
Prior art date
Legal status
Granted
Application number
CN202211033724.7A
Other languages
Chinese (zh)
Other versions
CN115641487B (en)
Inventor
Name withheld at the inventor's request
Current Assignee
Qingdao Yuandongxin Energy Technology Co ltd
Original Assignee
Qingdao Yuandongxin Energy Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Yuandongxin Energy Technology Co ltd
Priority to CN202211033724.7A
Publication of CN115641487A
Application granted
Publication of CN115641487B
Legal status: Active

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E — REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E 30/00 — Energy generation of nuclear origin
    • Y02E 30/10 — Nuclear fusion reactors

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a neutron and X-ray based multi-stage judgment fusion method and system in the field of nondestructive radiographic testing. The method comprises the following steps: obtaining multi-level advantage information based on a neutron image and an X-ray image of a target to be detected; calculating, according to the multi-level advantage information, the fusion weights of the neutron image and the X-ray image at each level; and fusing the neutron image and the X-ray image according to the fusion weights at each level to obtain a target fusion image. The fused image exceeds traditional fusion schemes in image-quality measures such as geometric sharpness, signal-to-noise ratio, and edge gradient.

Description

Neutron and X-ray based multi-stage judgment fusion method and system
Technical Field
The invention relates to the field of nondestructive ray detection, in particular to a neutron and X-ray based multi-stage judgment fusion method and system.
Background
In the nondestructive radiation testing industry, conventional means for inspecting special equipment include neutron photography, X-rays, gamma rays, and the like. However, X-rays and neutrons interact with matter through different mechanisms: X-rays act on the electrons outside the atomic nucleus, and their interaction cross-section has a definite functional relationship with the atomic number of the nuclide, whereas neutrons interact directly with the nucleus, and the size of the scattering cross-section has no direct relationship with atomic number. Traditional fusion schemes extract only a single kind of feature information from the target images (gray-level difference, boundary gradient, and the like) and use only the gray-value difference as the threshold condition in the fusion judgment. A single, simple fusion calculation cannot capture the top-level (abstract) information of multi-source images with large differences, so a general fusion algorithm cannot accurately locate the advantage information.
Disclosure of Invention
The invention aims to solve the above technical problem of the prior art by providing a neutron and X-ray based multi-stage judgment fusion method and system.
The technical scheme for solving the technical problems is as follows:
a neutron and X-ray based multi-stage decision fusion method comprises the following steps:
s1, obtaining multi-level advantage information based on a neutron image and an X-ray image of a target to be detected, wherein the multi-level advantage information represents a characteristic level, a signal level and a pixel level, and comprises: confidence, boundary gradient values, and pixel values;
s2, fusion weights of the neutron image and the X-ray image in each stage are respectively calculated according to the multi-stage advantage information;
and S3, performing fusion processing on the neutron image and the X-ray image according to the fusion weight in each stage to obtain a target fusion image.
The invention has the beneficial effects that: the scheme extracts multi-level advantage information of the neutron image and the X-ray image at the feature level, the signal level, and the pixel level; calculates the fusion weights of the neutron image and the X-ray image at each level according to the multi-level advantage information; and fuses the neutron image and the X-ray image according to the fusion weights at each level to obtain a target fusion image. The fused image exceeds traditional fusion algorithms in image-quality measures such as geometric sharpness, signal-to-noise ratio, and edge gradient.
Further, the S2 specifically includes:
extracting confidence coefficient of an X-ray image at a characteristic level, and distributing a first X-ray fusion weight according to the confidence coefficient;
extracting a boundary gradient value of the X-ray image at a signal level, and distributing a second X-ray fusion weight according to the boundary gradient value;
pixel values of the X-ray image are extracted at the pixel level, and a third X-ray fusion weight is assigned according to the pixel values.
Further, the S2 further specifically includes:
extracting the confidence coefficient of the neutron image at the characteristic level, and distributing a first neutron fusion weight according to the confidence coefficient;
extracting a boundary gradient value of the neutron image at a signal level, and distributing a second neutron fusion weight according to the boundary gradient value;
and extracting the pixel value of the neutron image at the pixel level, and distributing a third neutron fusion weight according to the pixel value.
Further, the S3 specifically includes:
preprocessing the X-ray image and the neutron image;
performing feature level fusion processing on the preprocessed X-ray image and the preprocessed neutron image according to the first X-ray fusion weight and the first neutron fusion weight to obtain a first fusion image;
performing signal level fusion processing on the preprocessed X-ray image and the preprocessed neutron image according to the second X-ray fusion weight and the second neutron fusion weight to obtain a second fusion image;
performing pixel-level fusion processing on the preprocessed X-ray image and the preprocessed neutron image according to the third X-ray fusion weight and the third neutron fusion weight to obtain a third fusion image;
and reconstructing according to the first fusion image, the second fusion image and the third fusion image to obtain a target fusion image.
Further, the pre-processing comprises: denoising, scattering correction and contrast adjustment.
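A hedged sketch of the three preprocessing steps named above. The mean filter, the background-subtraction style of scatter correction, and the linear contrast stretch are generic stand-ins; the patent does not specify which concrete denoiser, correction model, or contrast method is used.

```python
import numpy as np

def denoise(img, k=3):
    # Simple k-by-k mean filter as a stand-in for a real denoiser.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def scatter_correct(img, background):
    # Subtract an estimated scatter background, clipping at zero.
    return np.clip(img - background, 0.0, None)

def adjust_contrast(img):
    # Linear stretch of the gray range to [0, 1].
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def preprocess(img, scatter_background):
    # Denoising, scattering correction, contrast adjustment, in that order.
    return adjust_contrast(scatter_correct(denoise(img), scatter_background))
```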
Another technical solution of the present invention for solving the above technical problems is as follows:
a neutron and X-ray based multi-level decision fusion system comprising: the system comprises an information extraction module, a fusion weight calculation module and a fusion module;
the information extraction module is used for obtaining multi-stage advantage information based on a neutron image and an X-ray image of a target to be detected, the multi-stage advantage information represents a characteristic level, a signal level and a pixel level, and comprises: confidence, boundary gradient values, and pixel values;
the fusion weight calculation module is used for calculating fusion weights of the neutron image and the X-ray image in each stage according to the multi-stage advantage information;
and the fusion module is used for carrying out fusion processing on the neutron image and the X-ray image according to the fusion weight in each stage so as to obtain a target fusion image.
The invention has the beneficial effects that: the scheme extracts multi-level advantage information of the neutron image and the X-ray image at the feature level, the signal level, and the pixel level; calculates the fusion weights of the neutron image and the X-ray image at each level according to the multi-level advantage information; and fuses the neutron image and the X-ray image according to the fusion weights at each level to obtain a target fusion image. The fused image exceeds traditional fusion algorithms in image-quality measures such as geometric sharpness, signal-to-noise ratio, and edge gradient.
Further, the fusion weight calculation module is specifically configured to extract a confidence level of the X-ray image at a feature level, and assign a first X-ray fusion weight according to the confidence level;
extracting a boundary gradient value of the X-ray image at a signal level, and distributing a second X-ray fusion weight according to the boundary gradient value;
pixel values of the X-ray image are extracted at the pixel level, and a third X-ray fusion weight is assigned according to the pixel values.
Further, the fusion weight calculation module is specifically configured to extract a confidence coefficient of the neutron image at a feature level, and assign a first neutron fusion weight according to the confidence coefficient;
extracting a boundary gradient value of the neutron image at a signal level, and distributing a second neutron fusion weight according to the boundary gradient value;
and extracting the pixel value of the neutron image at the pixel level, and distributing a third neutron fusion weight according to the pixel value.
Further, the fusion module is specifically configured to pre-process the X-ray image and the neutron image;
performing feature level fusion processing on the preprocessed X-ray image and the preprocessed neutron image according to the first X-ray fusion weight and the first neutron fusion weight to obtain a first fusion image;
performing signal level fusion processing on the preprocessed X-ray image and the preprocessed neutron image according to the second X-ray fusion weight and the second neutron fusion weight to obtain a second fusion image;
performing pixel-level fusion processing on the preprocessed X-ray image and the preprocessed neutron image according to the third X-ray fusion weight and the third neutron fusion weight to obtain a third fusion image;
and reconstructing according to the first fusion image, the second fusion image and the third fusion image to obtain a target fusion image.
Further, the pre-processing comprises: denoising, scattering correction and contrast adjustment.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
Fig. 1 is a schematic flowchart of a neutron-X-ray-based multi-stage decision fusion method according to an embodiment of the present invention;
fig. 2 is a block diagram of a neutron and X-ray based multi-stage decision fusion system according to an embodiment of the present invention;
fig. 3 is a flow chart of multi-level decision provided by other embodiments of the present invention.
Detailed Description
The principles and features of the present invention will be described with reference to the following drawings, which are illustrative only and are not intended to limit the scope of the invention.
As shown in fig. 1, a multi-stage decision fusion method based on neutrons and X-rays provided by an embodiment of the present invention includes:
s1, obtaining multi-stage advantage information based on a neutron image and an X-ray image of a target to be detected, wherein the multi-stage advantage information represents a characteristic level, a signal level and a pixel level, and comprises: confidence, boundary gradient values, and pixel values;
s2, respectively calculating fusion weights of the neutron image and the X-ray image in each stage according to the multi-stage advantage information;
it should be noted that, in an embodiment, as shown in fig. 3, the multi-stage determination of the feature level, the signal level, and the pixel level may include:
the method mainly comprises the steps of calculating weight factors of different regions of different images according to features of different levels, wherein feature level information is most abstract information, utilizing a segmentation neural network to obtain a most interesting region mask1, distributing the highest basic weight, and multiplying segmentation confidence coefficients by the highest basic weight as a left threshold value and a right threshold value. And secondly, calculating a secondary region of interest of the signal level by utilizing edge detection, and setting a medium weight for the mask2, wherein the threshold value of an XRAY image (X-ray image) is higher than that of a ZRAY image. And finally, extracting the lowest layer pixel level information under large, medium and small views by using FPN, and distributing the lowest weight according to the difference of different level gray values, wherein the threshold value of the ZRAY image is higher than the XRAY. The segmented neural network may be a split network such as a jet b l endmask. The highest base weight may be greater than 0.5; the segmentation confidence coefficient can be output after being detected according to the neural network and is a number between 0 and 1; the edge detection calculating the secondary region of interest of the signal level may include: extracting edges by using an extractor for edge detection such as canny and sob l e; wherein the medium weight may be greater than 0.2 based on the weight of the edge mean after neutron and x-ray edge extraction, similar to the last basis weight. Wherein, the lowest weight distributed according to the difference of the gray values of different levels can be set according to the gray mean value of the neutron x-ray image, and the calculation formula is as follows: the neutron roi mean value/x-ray roi mean value + neutron ro mean value is less than 0.2.
And S3, performing fusion processing on the neutron image and the X-ray image according to the fusion weight in each stage to obtain a target fusion image.
It should be noted that the fusion processing procedure may include:
firstly, preprocessing an x-ray and a neutron image, (respectively processing the x-ray image and the neutron image in the following steps), then carrying out Laplace transform on the processed image in the second step to obtain images with different times of visual fields such as down-sampling 2, 4, 8, 16, 32 and the like, reducing the images with the same visual field by using linear interpolation up-sampl e, subtracting two images (images representing neutrons and images of the x-ray) with the same visual field to obtain high-frequency information parts of the images with different visual fields, and then linearly superposing the high-frequency information of the x-ray and a low-frequency image (down-sampling image) of the neutrons according to the multilevel weight factor obtained in the previous step, wherein the low-frequency image of the neutrons can be obtained by carrying out pyramid down-sampling on the images or by using a Fourier transform or a low-pass filter such as Butterworth, gauss and the like; and finally, carrying out Laplace inverse transformation on the obtained images under a plurality of views to obtain a final fusion image. Wherein, the feature level, the pixel level and the signal level are based on a deep learning feature level graph: a schematic diagram of a convolutional neural network, which realizes a feature extraction function; based on the FPN pixel level: carrying out gold tower layering on the FPN image, and designing a low-pass convolution kernel for pixel level extraction; based on the signal level: and performing feature fusion on the edges of the two images for the schematic edge detection. Wherein the pre-processing may comprise: denoising, scattering correction, contrast adjustment and the like, and is used for improving the image quality and reducing the operation of image noise. 
The linear superposition of the X-ray high-frequency information and the neutron low-frequency image by the multi-level weight factors specifically comprises: multiplying the X-ray image and the neutron image each by the weights obtained above, then summing and normalizing.
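The high/low-frequency superposition described above can be sketched for a single pyramid level. The nearest-neighbor down/upsampling stands in for the proper pyramid operators and linear interpolation, and the fixed weight pair (0.6, 0.4) is a placeholder for the multi-level weight factors.

```python
import numpy as np

def downsample(img):
    # 2x pyramid downsample (nearest-neighbor stand-in for low-pass + decimate).
    return img[::2, ::2]

def upsample(img, shape):
    # Nearest-neighbor upsample back to the original shape
    # (stand-in for linear-interpolation upsampling).
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def fuse_one_level(xray, neutron, w_high=0.6, w_low=0.4):
    """X-ray high-frequency detail superposed on the neutron low-frequency base."""
    x_low = upsample(downsample(xray), xray.shape)
    x_high = xray - x_low                                  # X-ray high-frequency part
    n_low = upsample(downsample(neutron), neutron.shape)   # neutron low-frequency part
    fused = w_high * x_high + w_low * n_low
    # Normalize to [0, 1] after the weighted superposition.
    lo, hi = fused.min(), fused.max()
    return (fused - lo) / (hi - lo) if hi > lo else fused
```

In the full scheme this is repeated at every field of view, followed by the inverse Laplacian reconstruction.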
According to the scheme, multi-level advantage information of the neutron image and the X-ray image is extracted at the feature level, the signal level, and the pixel level; the fusion weights of the neutron image and the X-ray image at each level are calculated according to the multi-level advantage information; and the neutron image and the X-ray image are fused according to the fusion weights at each level to obtain a target fusion image. The fused image exceeds traditional fusion algorithms in image-quality measures such as geometric sharpness, signal-to-noise ratio, and edge gradient.
In one embodiment, more fusion levels, or only two of the levels, may be used instead of the feature-level, signal-level, and pixel-level scheme described above. The feature-level information can be extracted by a deep-learning segmentation algorithm or by other deep-learning algorithms, and the signal level can be extracted not only by edge detection but also by schemes such as frequency-domain calculation (Fourier transform). In addition, this secondary fusion scheme can be used not only for fusing neutron and X-ray dual-source images but also for other multi-source images, such as infrared + RGB + X-ray, on the premise that the different source images have a sufficiently large difference in characteristic signals.
In another embodiment, the hierarchical processing may include:
When 0 ≤ l ≤ n, then for the l-th layer image of the Laplacian pyramid decomposition, first calculate its region energy:
[equation image not reproduced in the source]
where AER represents the region energy of the image to be fused; LA_N represents the N-th pyramid layer; ω denotes a low-pass filter:
[equation image not reproduced in the source]
where p = 1, q = 1,
[equation image not reproduced in the source]
The fusion result of each hierarchical image is:
[equation image not reproduced in the source]
where LA_i represents the pixel values of the i-th layer of the image pyramid and AER represents the energy value of the i-th pyramid layer; the product of these two terms is the pixel-level feature.
mask represents the target area detected by deep learning and confidence represents the confidence coefficient obtained by the deep-learning calculation; the product of these two terms is the feature-level feature.
canny represents the binary image obtained by Canny edge detection and thresh represents the threshold assigned at the signal level; the product of these two terms is the signal-level feature.
The fused images LF_1, LF_2, …, LF_N of the pyramid layers are thus obtained, and the final fused image is obtained through the reconstruction described above.
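The three per-layer terms described above (pixel level LA_i · AER, feature level mask · confidence, signal level canny · thresh) can be combined per pixel as sketched below. Since the source equations are only image placeholders, the additive combination and the max-normalization here are assumptions, not the patent's exact formula.

```python
import numpy as np

def layer_fusion_weight(la, aer, mask, confidence, canny, thresh):
    """Combine the three per-level terms into one per-pixel weight map.

    pixel level:   la * aer          (pyramid-layer pixels x region energy)
    feature level: mask * confidence (deep-learning target area x confidence)
    signal level:  canny * thresh    (edge binary map x signal-level threshold)
    """
    pixel_term = la * aer
    feature_term = mask * confidence
    signal_term = canny * thresh
    w = pixel_term + feature_term + signal_term
    # Max-normalization of the combined map is an assumption.
    return w / w.max() if w.max() > 0 else w
```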
Optionally, in any of the above embodiments, the S2 specifically includes:
extracting confidence coefficient of an X-ray image at a characteristic level, and distributing a first X-ray fusion weight according to the confidence coefficient;
extracting a boundary gradient value of the X-ray image at a signal level, and distributing a second X-ray fusion weight according to the boundary gradient value;
pixel values of the X-ray image are extracted at the pixel level, and a third X-ray fusion weight is assigned according to the pixel values.
Optionally, in any of the above embodiments, the S2 further specifically includes:
extracting the confidence coefficient of the neutron image at the characteristic level, and distributing a first neutron fusion weight according to the confidence coefficient;
extracting a boundary gradient value of the neutron image at a signal level, and distributing a second neutron fusion weight according to the boundary gradient value;
and extracting the pixel value of the neutron image at the pixel level, and distributing a third neutron fusion weight according to the pixel value.
Optionally, in any of the above embodiments, the S3 specifically includes:
preprocessing the X-ray image and the neutron image;
performing feature level fusion processing on the preprocessed X-ray image and the preprocessed neutron image according to the first X-ray fusion weight and the first neutron fusion weight to obtain a first fusion image;
performing signal level fusion processing on the preprocessed X-ray image and the preprocessed neutron image according to the second X-ray fusion weight and the second neutron fusion weight to obtain a second fusion image;
performing pixel-level fusion processing on the preprocessed X-ray image and the preprocessed neutron image according to the third X-ray fusion weight and the third neutron fusion weight to obtain a third fusion image;
and reconstructing according to the first fusion image, the second fusion image and the third fusion image to obtain a target fusion image.
Optionally, in any embodiment above, the pre-processing comprises: denoising, scattering correction and contrast adjustment.
In one embodiment, a neutron and X-ray based multi-level decision fusion system includes: an information extraction module 1101, a fusion weight calculation module 1102 and a fusion module 1103;
the information extraction module 1101 is configured to obtain multi-level advantage information based on a neutron image and an X-ray image of a target to be detected, where the multi-level advantage information includes: confidence, boundary gradient values, and pixel values;
the fusion weight calculation module 1102 is configured to calculate fusion weights of the neutron image and the X-ray image in each stage according to the multi-stage dominance information;
the fusion module 1103 is configured to perform fusion processing on the neutron image and the X-ray image according to the fusion weights in each stage to obtain a target fusion image.
According to the scheme, multi-level advantage information of the neutron image and the X-ray image is extracted at the feature level, the signal level, and the pixel level; the fusion weights of the neutron image and the X-ray image at each level are calculated according to the multi-level advantage information; and the neutron image and the X-ray image are fused according to the fusion weights at each level to obtain a target fusion image. The fused image exceeds traditional fusion algorithms in image-quality measures such as geometric sharpness, signal-to-noise ratio, and edge gradient.
Optionally, in any embodiment described above, the fusion weight calculation module 1102 is specifically configured to extract a confidence of the X-ray image at a feature level, and assign a first X-ray fusion weight according to the confidence;
extracting a boundary gradient value of the X-ray image at a signal level, and distributing a second X-ray fusion weight according to the boundary gradient value;
pixel values of the X-ray image are extracted at the pixel level, and a third X-ray fusion weight is assigned according to the pixel values.
Optionally, in any embodiment above, the fusion weight calculation module 1102 is specifically configured to extract a confidence level of the neutron image at a feature level, and assign a first neutron fusion weight according to the confidence level;
extracting a boundary gradient value of the neutron image at a signal level, and distributing a second neutron fusion weight according to the boundary gradient value;
and extracting the pixel value of the neutron image at the pixel level, and distributing a third neutron fusion weight according to the pixel value.
Optionally, in any embodiment above, the fusion module 1103 is specifically configured to perform preprocessing on the X-ray image and the neutron image;
performing feature level fusion processing on the preprocessed X-ray image and the preprocessed neutron image according to the first X-ray fusion weight and the first neutron fusion weight to obtain a first fusion image;
performing signal level fusion processing on the preprocessed X-ray image and the preprocessed neutron image according to the second X-ray fusion weight and the second neutron fusion weight to obtain a second fusion image;
performing pixel-level fusion processing on the preprocessed X-ray image and the preprocessed neutron image according to the third X-ray fusion weight and the third neutron fusion weight to obtain a third fusion image;
and reconstructing according to the first fusion image, the second fusion image and the third fusion image to obtain a target fusion image.
Optionally, in any embodiment above, the pre-processing comprises: denoising, scattering correction and contrast adjustment.
It is to be understood that some or all of the alternative implementations described above in various embodiments may be included in some embodiments.
It should be noted that the above embodiments are product embodiments corresponding to the previous method embodiments, and for the description of each optional implementation in the product embodiments, reference may be made to corresponding descriptions in the above method embodiments, and details are not described here again.
The reader should understand that in the description of this specification, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the method embodiments described above are merely illustrative: the division into steps is only a logical functional division, and other divisions are possible in practice; for instance, multiple steps may be combined or integrated into another step, or some features may be omitted or not performed.
The above method, if implemented in the form of software functional units and sold or used as a stand-alone product, can be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A neutron and X-ray based multi-stage judgment fusion method is characterized by comprising the following steps:
s1, obtaining multi-level advantage information based on a neutron image and an X-ray image of a target to be detected; the multi-level represents a feature level, a signal level, and a pixel level, and the multi-level superiority information includes: confidence, boundary gradient values, and pixel values;
s2, respectively calculating fusion weights of the neutron image and the X-ray image in each stage according to the multi-stage advantage information;
and S3, performing fusion processing on the neutron image and the X-ray image according to the fusion weight in each stage to obtain a target fusion image.
2. The neutron and X-ray based multi-stage decision fusion method according to claim 1, wherein the S2 specifically comprises:
extracting confidence coefficient of an X-ray image at a characteristic level, and distributing a first X-ray fusion weight according to the confidence coefficient;
extracting a boundary gradient value of the X-ray image at a signal level, and distributing a second X-ray fusion weight according to the boundary gradient value;
pixel values of the X-ray image are extracted at the pixel level, and a third X-ray fusion weight is assigned according to the pixel values.
3. The neutron and X-ray based multi-stage decision fusion method according to claim 1 or 2, wherein the S2 further specifically includes:
extracting the confidence coefficient of the neutron image at the characteristic level, and distributing a first neutron fusion weight according to the confidence coefficient;
extracting a boundary gradient value of the neutron image at a signal level, and distributing a second neutron fusion weight according to the boundary gradient value;
and extracting the pixel value of the neutron image at the pixel level, and distributing a third neutron fusion weight according to the pixel value.
4. The neutron- and X-ray-based multi-level decision fusion method according to claim 3, wherein S3 specifically comprises:
preprocessing the X-ray image and the neutron image;
performing feature-level fusion on the preprocessed X-ray image and the preprocessed neutron image according to the first X-ray fusion weight and the first neutron fusion weight to obtain a first fusion image;
performing signal-level fusion on the preprocessed X-ray image and the preprocessed neutron image according to the second X-ray fusion weight and the second neutron fusion weight to obtain a second fusion image;
performing pixel-level fusion on the preprocessed X-ray image and the preprocessed neutron image according to the third X-ray fusion weight and the third neutron fusion weight to obtain a third fusion image; and
reconstructing the target fusion image from the first fusion image, the second fusion image, and the third fusion image.
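The per-level fusion and the final reconstruction of claim 4 can be sketched as below. The claim does not specify the reconstruction rule, so the equal-weight linear combination is an assumed choice:

```python
import numpy as np

def level_fuse(x_img, n_img, wx, wn):
    """One per-level fusion: weighted combination of the preprocessed
    X-ray and neutron images (weights may be scalars or per-pixel arrays)."""
    return wx * x_img + wn * n_img

def reconstruct(fused_levels, weights=None):
    """Target fusion image from the first, second, and third fusion images.
    An equal-weight average is assumed when no weights are given."""
    stack = np.stack(fused_levels)          # shape: (levels, H, W)
    if weights is None:
        weights = np.full(len(fused_levels), 1.0 / len(fused_levels))
    return np.tensordot(weights, stack, axes=1)
```

With equal per-level weights of 0.5, two constant images of value 2 and 4 fuse to 3 at every level, and the reconstruction returns that same constant image.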
5. The neutron- and X-ray-based multi-level decision fusion method according to claim 4, wherein the preprocessing comprises denoising, scattering correction, and contrast adjustment.
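The preprocessing chain of claim 5 can be sketched as follows. The concrete operators are illustrative stand-ins: a 3×3 mean filter for denoising, subtraction of an estimated scatter background for scattering correction, and a min-max stretch for contrast adjustment:

```python
import numpy as np

def preprocess(img, scatter_bg=None):
    """Assumed preprocessing per claim 5: denoising, scattering correction,
    contrast adjustment. Operator choices are illustrative, not claimed."""
    img = img.astype(float)
    h, w = img.shape
    # denoising: 3x3 mean filter over an edge-padded neighborhood
    p = np.pad(img, 1, mode='edge')
    img = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # scattering correction: subtract an estimated scatter background
    if scatter_bg is not None:
        img = np.clip(img - scatter_bg, 0.0, None)
    # contrast adjustment: stretch intensities to [0, 1]
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9)
```

The output is guaranteed to lie in [0, 1] regardless of the input dynamic range, which keeps the subsequent per-level weighting comparable between the two modalities.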
6. A neutron- and X-ray-based multi-level decision fusion system, characterized by comprising an information extraction module, a fusion weight calculation module, and a fusion module, wherein:
the information extraction module is configured to obtain multi-level advantage information based on a neutron image and an X-ray image of a target to be detected, wherein the multiple levels are a feature level, a signal level, and a pixel level, and the multi-level advantage information includes confidence, boundary gradient values, and pixel values;
the fusion weight calculation module is configured to calculate, from the multi-level advantage information, fusion weights of the neutron image and the X-ray image at each level; and
the fusion module is configured to fuse the neutron image and the X-ray image according to the fusion weights at each level to obtain a target fusion image.
7. The neutron- and X-ray-based multi-level decision fusion system according to claim 6, wherein the fusion weight calculation module is specifically configured to: extract the confidence of the X-ray image at the feature level, and assign a first X-ray fusion weight according to the confidence;
extract the boundary gradient values of the X-ray image at the signal level, and assign a second X-ray fusion weight according to the boundary gradient values; and
extract the pixel values of the X-ray image at the pixel level, and assign a third X-ray fusion weight according to the pixel values.
8. The neutron- and X-ray-based multi-level decision fusion system according to claim 6 or 7, wherein the fusion weight calculation module is further configured to: extract the confidence of the neutron image at the feature level, and assign a first neutron fusion weight according to the confidence;
extract the boundary gradient values of the neutron image at the signal level, and assign a second neutron fusion weight according to the boundary gradient values; and
extract the pixel values of the neutron image at the pixel level, and assign a third neutron fusion weight according to the pixel values.
9. The neutron- and X-ray-based multi-level decision fusion system according to claim 8, wherein the fusion module is specifically configured to: preprocess the X-ray image and the neutron image;
perform feature-level fusion on the preprocessed X-ray image and the preprocessed neutron image according to the first X-ray fusion weight and the first neutron fusion weight to obtain a first fusion image;
perform signal-level fusion on the preprocessed X-ray image and the preprocessed neutron image according to the second X-ray fusion weight and the second neutron fusion weight to obtain a second fusion image;
perform pixel-level fusion on the preprocessed X-ray image and the preprocessed neutron image according to the third X-ray fusion weight and the third neutron fusion weight to obtain a third fusion image; and
reconstruct the target fusion image from the first fusion image, the second fusion image, and the third fusion image.
10. The neutron- and X-ray-based multi-level decision fusion system according to claim 9, wherein the preprocessing comprises denoising, scattering correction, and contrast adjustment.
CN202211033724.7A 2022-08-26 2022-08-26 Multi-stage judgment fusion method and system based on neutrons and X rays Active CN115641487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211033724.7A CN115641487B (en) 2022-08-26 2022-08-26 Multi-stage judgment fusion method and system based on neutrons and X rays

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211033724.7A CN115641487B (en) 2022-08-26 2022-08-26 Multi-stage judgment fusion method and system based on neutrons and X rays

Publications (2)

Publication Number Publication Date
CN115641487A true CN115641487A (en) 2023-01-24
CN115641487B CN115641487B (en) 2023-06-27

Family

ID=84940249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211033724.7A Active CN115641487B (en) 2022-08-26 2022-08-26 Multi-stage judgment fusion method and system based on neutrons and X rays

Country Status (1)

Country Link
CN (1) CN115641487B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020007320A1 (en) * 2018-07-03 2020-01-09 Tsinghua-Berkeley Shenzhen Institute Preparatory Office Method for fusing multi-visual angle images, apparatus, computer device, and storage medium
US20200342580A1 (en) * 2019-04-25 2020-10-29 Megvii (Beijing) Technology Co., Ltd. A method, apparatus and electric device for image fusion
CN113313661A (en) * 2021-05-26 2021-08-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image fusion method and device, electronic equipment and computer readable storage medium
CN114882332A (en) * 2022-06-06 2022-08-09 江苏富士特电气技术有限公司 Target detection system based on image fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Qiuhua, Li Jicheng, Shen Zhenkang: "An Infrared Target Detection Method Based on Multi-Sensor Multi-Level Information Fusion", Journal of Electronics & Information Technology, no. 11, pages 1701-1705 *

Also Published As

Publication number Publication date
CN115641487B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
CN102203826B (en) Denoising medical images
CN103069432A (en) Non-linear resolution reduction for medical imagery
CN103295190A (en) Method of noise reduction in digital x-rayograms
Zhang et al. Decision-based non-local means filter for removing impulse noise from digital images
CN112819739B (en) Image processing method and system for scanning electron microscope
Bal et al. An efficient method for PET image denoising by combining multi-scale transform and non-local means
Seo et al. The effects of total variation (TV) technique for noise reduction in radio-magnetic X-ray image: Quantitative study
CN105427255A (en) GRHP based unmanned plane infrared image detail enhancement method
Lu et al. An adaptive detail equalization for infrared image enhancement based on multi-scale convolution
Diwakar et al. Inter-and intra-scale dependencies-based CT image denoising in curvelet domain
CN109035228A (en) A kind of radioscopic image processing method of non-uniform thickness component
HosseinKhani et al. Real‐time removal of impulse noise from MR images for radiosurgery applications
Chen et al. A fractional-order variational residual CNN for low dose CT image denoising
Aedla et al. Satellite image contrast enhancement algorithm based on plateau histogram equalization
CN115641487B (en) Multi-stage judgment fusion method and system based on neutrons and X rays
Rama Lakshmi et al. A Review on Image Denoising Algorithms for Various Applications
Bhonsle et al. Medical Image De-Noising Using Combined Bayes Shrink and Total Variation Techniques
US20230133074A1 (en) Method and Apparatus for Noise Reduction
Shankar et al. Object oriented fuzzy filter for noise reduction of Pgm images
Zia et al. Rician noise removal from MR images using novel adapted selective non-local means filter
CN114529518A (en) Image pyramid and NLM-based image enhancement method for cryoelectron microscope
Paul et al. MR image enhancement using an extended neighborhood filter
Sai et al. Advanced Image Processing techniques based model for Brain Tumour detection
Chen et al. A pilot study on a new image enhancement method using spatial statistics
Storozhilova et al. 2.5 D extension of neighborhood filters for noise reduction in 3D medical CT images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266043 No. 1, Loushan Road, Licang District, Qingdao, Shandong

Applicant after: Neutron Times (Qingdao) Innovation Technology Co.,Ltd.

Address before: 266043 No. 1, Loushan Road, Licang District, Qingdao, Shandong

Applicant before: Qingdao Yuandongxin Energy Technology Co.,Ltd.

GR01 Patent grant