CN112807004A - Muon imaging method - Google Patents

Muon imaging method

Info

Publication number
CN112807004A
CN112807004A (application CN202110019425.7A)
Authority
CN
China
Prior art keywords
muon
reconstructed
matrix
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110019425.7A
Other languages
Chinese (zh)
Other versions
CN112807004B
Inventor
王晓冬
季选韬
魏鑫
Current Assignee
Nanhua University
Original Assignee
Nanhua University
Priority date
Filing date
Publication date
Application filed by Nanhua University
Priority to CN202110019425.7A
Publication of CN112807004A
Application granted
Publication of CN112807004B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5205 Devices using data or image processing specially adapted for radiation diagnosis involving processing of raw data to produce diagnostic data
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Veterinary Medicine (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine (AREA)

Abstract

The invention discloses a muon imaging method comprising the following steps. Step A: muons enter the object to be imaged from above. Step B: all reconstructed muon tracks are obtained, where a reconstructed muon track is a muon track segment detected above the object to be imaged that satisfies T2 − T1 ≤ ΔT; T1 is the time at which an incident muon is detected above the object, T2 the time at which gamma light is detected around the sides of the object, and ΔT a set value. Step C: a top view, a front view, and a side view of the object are obtained from the reconstructed muon tracks. The invention fills the gap left by existing muon imaging techniques, which cannot image medium- and low-Z, small-volume materials; it achieves clear imaging of such materials, can be applied to medical imaging to perform "non-destructive" imaging of human tissue and bone, and can avoid the problems caused by artificial radiation sources.

Description

Muon imaging method
Technical Field
The invention relates in particular to a muon imaging method.
Background
Imaging with cosmic-ray muons as the radiation source is a novel non-destructive imaging technique. Cosmic-ray muons offer the following two advantages:
1. Strong penetrability. Cosmic-ray muons are highly penetrating, energetic charged particles that easily pass through thick shielding, such as the radiation shielding of customs containers and nuclear reactors, pyramids, and volcanic craters.
2. "Non-destructive" character. Cosmic-ray muons are a natural radiation source, and no additional radiation source is introduced during imaging, so truly non-destructive detection is possible. Conventional detection introduces artificial sources such as gamma rays, which may cause radiation damage to the object being imaged.
Current research on cosmic-ray muon imaging focuses on two techniques, muon scattering imaging and muon transmission imaging:
1. Scattering imaging uses the scattering-angle information carried by muons after they pass through the object to be imaged. It offers good radiation safety, high detection sensitivity to high-Z materials, and environmental friendliness, and is particularly practical for nuclear counter-terrorism and anti-smuggling at key hubs such as ports, airports, and inspection stations (Ma L, Wang W, Zhou J, et al.).
2. Transmission imaging uses the attenuation of muon energy and flux after passing through the object to be imaged, and its strength is deep probing of large-volume objects. It has successfully and non-destructively detected a hidden chamber inside the Great Pyramid (Morishima K, Kuno M, Nishio A, et al. Discovery of a big void in Khufu's Pyramid by observation of cosmic-ray muons. Nature, 2017, 552(7685): 386) and the three-dimensional internal structure of a volcanic crater (Ambrosi G, Ambrosino F, Battiston R, et al., the MU-RAY project. JINST, 2014, 9(2): C02029).
Both muon imaging techniques share a common limitation: medium- and low-Z (atomic number) materials image poorly, and muon transmission imaging is additionally unsuited to small-volume objects.
In summary, existing muon imaging techniques focus on high-Z, large-volume materials and image medium- and low-Z, small-volume materials poorly. In medical imaging, for example, the human body cannot be imaged non-destructively with existing muon techniques, so artificial radiation sources are used instead, which adds absorbed dose and increases the risk of radiation damage; the field lacks an imaging technique that is "non-destructive" to the human body.
Summary of the Invention
The invention aims to provide a muon imaging method that addresses the defect that existing muon imaging techniques suit only high-Z, large-volume materials. It fills the gap that existing muon imaging cannot image medium- and low-Z, small-volume materials, achieves clear imaging of such materials, can be applied to medical imaging for "non-destructive" imaging of human tissue and bone, and avoids the problems caused by artificial radiation sources.
In order to solve the above technical problems, the invention adopts the following technical scheme:
A muon imaging method, characterized by comprising the following steps:
step A, muons enter the object to be imaged from above the object;
step B, all reconstructed muon tracks are obtained, where each reconstructed muon track is a muon track segment detected above the object to be imaged that satisfies T2 − T1 ≤ ΔT, T1 being the time an incident muon is detected above the object, T2 the time gamma light is detected around the sides of the object, and ΔT a set value;
step C, a top view, a front view, and a side view of the object are obtained from the reconstructed muon tracks.
In a preferred mode, in step C, a top view of the object to be imaged is obtained as follows:
Step C101: on a top-view plane XOY perpendicular to the top-view direction of the object, classify the reconstructed muon tracks into M·N classes according to methods F(1)1 and F(1)2, where:
classifying according to method F(1)1 means dividing the reconstructed muon tracks into M classes by the angle θ1 between each track and the X axis in the plane XOY;
classifying according to method F(1)2 means dividing the reconstructed muon tracks into N classes by d1, where
d1 = C1 / sqrt(A1² + B1²)
and A1, B1, C1 are the coefficients of the track's line equation A1·x + B1·y + C1 = 0 in the plane XOY;
Step C102: obtain K pixel units of the top-view plane XOY and construct the driving matrix A(1)(MN)×K, whose element A(1)ij in row i, column j is the relative contribution X(1)ij / X(1)i of the i-th reconstructed muon track in the j-th pixel unit of the XOY plane; X(1)ij denotes the length of the i-th reconstructed muon track inside the j-th pixel unit, and X(1)i its total length in the XOY plane;
Step C103: construct the expectation matrix P(1)(MN)×1, whose element P(1)i1 in row i, column 1 is the total number of reconstructed muon tracks of the i-th class in the XOY plane;
Step C104: construct the matrix f(1)K×1, whose element f(1)j1 in row j, column 1 is initialized to 1;
step C105: iteratively solve for f(1)K×1 as follows.
Compute the ratio matrix R(1)(MN)×1, whose element in row i, column 1 is
R(1)i1 = P(1)i1 / Σj (A(1)ij · f(1)j1).
If R(1)(MN)×1 ≠ 1, update f(1)K×1 by
f(1)j1(new) = f(1)j1(old) · Σi (A(1)ij · R(1)i1) / Σi A(1)ij,
where f(1)j1(new) is the updated value of the element of f(1)K×1 in row j, column 1 and f(1)j1(old) the value before the update.
If R(1)(MN)×1 = 1, stop the iteration and output the matrix f(1)K×1 from the last iteration as the result matrix f('1)K×1;
step C106: convert the matrix f('1)K×1 output in step C105 into a pixel matrix of size √K × √K;
step C107: output the picture converted from the pixel matrix as the top view.
Preferably, in step C106 the elements of f('1)K×1 and of the pixel matrix correspond as follows: entries ((n−1)·√K + 1) through (n·√K) of column 1 of f('1)K×1 become, in order, the n-th row of the √K × √K pixel matrix.
In a preferred mode, in step C, a front view of the object to be imaged is obtained as follows:
Step C201: on a front-view plane XOZ perpendicular to the front-view direction of the object, classify the reconstructed muon tracks into M·N classes according to methods F(2)1 and F(2)2, where:
classifying according to method F(2)1 means dividing the reconstructed muon tracks into M classes by the angle θ2 between each track and the Z axis in the plane XOZ;
classifying according to method F(2)2 means dividing the reconstructed muon tracks into N classes by d2, where
d2 = C2 / sqrt(A2² + B2²)
and A2, B2, C2 are the coefficients of the track's line equation A2·x + B2·z + C2 = 0 in the plane XOZ;
Step C202: obtain K pixel units of the front-view plane XOZ and construct the driving matrix A(2)(MN)×K, whose element A(2)ij in row i, column j is the relative contribution X(2)ij / X(2)i of the i-th reconstructed muon track in the j-th pixel unit of the XOZ plane; X(2)ij denotes the length of the i-th reconstructed muon track inside the j-th pixel unit, and X(2)i its total length in the XOZ plane;
Step C203: construct the expectation matrix P(2)(MN)×1, whose element P(2)i1 in row i, column 1 is the total number of reconstructed muon tracks of the i-th class in the XOZ plane;
Step C204: construct the matrix f(2)K×1, whose element f(2)j1 in row j, column 1 is initialized to 1;
Step C205: based on the expectation matrix P(2)(MN)×1, iterate on f(2)K×1 with the ASD-POCS algorithm and output the resulting matrix f('2)K×1;
Step C206: convert the matrix f('2)K×1 output in step C205 into a pixel matrix of size √K × √K;
Step C207: output the picture converted from the pixel matrix as the front view.
Preferably, in step C206 the elements of f('2)K×1 and of the pixel matrix correspond as follows: entries ((n−1)·√K + 1) through (n·√K) of column 1 of f('2)K×1 become, in order, the n-th row of the √K × √K pixel matrix.
In a preferred mode, in step C, a side view of the object to be imaged is obtained as follows:
Step C301: on a side-view plane YOZ perpendicular to the side-view direction of the object, classify the reconstructed muon tracks into M·N classes according to methods F(3)1 and F(3)2, where:
classifying according to method F(3)1 means dividing the reconstructed muon tracks into M classes by the angle θ3 between each track and the Z axis in the plane YOZ;
classifying according to method F(3)2 means dividing the reconstructed muon tracks into N classes by d3, where
d3 = C3 / sqrt(A3² + B3²)
and A3, B3, C3 are the coefficients of the track's line equation A3·y + B3·z + C3 = 0 in the plane YOZ;
Step C302: obtain K pixel units of the side-view plane YOZ and construct the driving matrix A(3)(MN)×K, whose element A(3)ij in row i, column j is the relative contribution X(3)ij / X(3)i of the i-th reconstructed muon track in the j-th pixel unit of the YOZ plane; X(3)ij denotes the length of the i-th reconstructed muon track inside the j-th pixel unit, and X(3)i its total length in the YOZ plane;
Step C303: construct the expectation matrix P(3)(MN)×1, whose element P(3)i1 in row i, column 1 is the total number of reconstructed muon tracks of the i-th class in the YOZ plane;
Step C304: construct the matrix f(3)K×1, whose element f(3)j1 in row j, column 1 is initialized to 1;
Step C305: based on the expectation matrix P(3)(MN)×1, iterate on f(3)K×1 with the ASD-POCS algorithm and output the resulting matrix f('3)K×1;
Step C306: convert the matrix f('3)K×1 output in step C305 into a pixel matrix of size √K × √K;
Step C307: output the picture converted from the pixel matrix as the side view.
Preferably, in step C306 the elements of f('3)K×1 and of the pixel matrix correspond as follows: entries ((n−1)·√K + 1) through (n·√K) of column 1 of f('3)K×1 become, in order, the n-th row of the √K × √K pixel matrix.
Compared with the prior art, the invention fills the gap that existing muon imaging techniques cannot image medium- and low-Z, small-volume materials; it achieves clear imaging of such materials, can be applied to medical imaging to perform "non-destructive" imaging of human tissue and bone, and avoids the problems caused by artificial radiation sources.
Drawings
Fig. 1 is a schematic diagram of the invention.
Fig. 2 is a schematic diagram of the top-view imaging algorithm.
Fig. 3 is a diagram of limited-angle reconstructed muon tracks in the front view.
Fig. 4 is a graph of reconstructed muon track density as a function of the thickness of the object to be imaged, from a Geant4 simulation.
Fig. 5 is the geometric model defined in Geant4.
Fig. 6 shows three-view imaging of CaO using the invention: Fig. 6a is the front-view imaging effect; Fig. 6b the side-view imaging effect; Fig. 6c the top-view imaging effect; Fig. 6d the geometric model of the object to be imaged.
Fig. 7 compares three-view imaging of water, CaO, and Pb using the invention: Fig. 7a compares the front-view images; Fig. 7b the side-view images; Fig. 7c the top-view images; Fig. 7d shows the geometric models of the 3 objects to be imaged.
Detailed Description
The imaging principle of the invention is as follows:
the radioactive source cosmic ray mu is a highly penetrating, highly energetic charged particle that deflects the mu's trajectory almost without electromagnetic Interaction with the medium and low Z materials when passing through the small volume of medium and low Z materials, losing energy, and producing secondary gamma radiation (Bogdannov A G, Burkhardt H, Ivanchenko V N, et al. Geant4 Simulation of Production and Interaction of Muons [ J ]. IEEE Transactions on Nuclear Science, vol.53, No.2(2006): 513-.
In fig. 1, the middle rectangle is the middle and low Z material to be imaged, the lines from top to bottom are the incident mu-sub tracks, the incident tracks of mu-sub are determined by two layers of mu-sub track detectors, the lines from left to right are the secondary radiation gamma light, and the secondary radiation gamma light is detected by four scintillator detectors surrounding the side circumference of the object to be imaged.
When two layers of mu sub-track detectors detect incidence of mu sub, if a scintillator detector detects generation of gamma light within a very short time, the mu sub can be considered to pass through an object to be imaged, the track of the mu sub meeting the time requirement of T2-T1 ≤ delta T (T1 is a time point of incidence of the mu sub detected above the object to be imaged, T2 is a time point of detection of the gamma light at the periphery of the object to be imaged, and delta T is a set minimum value) is defined as a reconstructed mu sub-track, and the number of the reconstructed mu sub-tracks along a certain straight line is defined as the reconstructed mu sub-track concentration along the straight line. Since the change in the trajectory of the mu-particles as they pass through the medium and low Z objects is small, the present application assumes that the incident mu-particle trajectory does not change during the interaction, i.e. the trajectory is a straight line.
Through simulation research, it is found that for medium and low Z materials, the reconstructed mu sub-trace density along a certain straight line linearly increases with the thickness of the detected object to be imaged, for example, assuming that the detected object to be imaged is a cube, the reconstructed mu sub-trace density along the diagonal direction of the cube is theoretically the maximum. By utilizing the density of the reconstructed mu sub-tracks, the imaging of the object to be imaged can be realized.
Specifically, the muon imaging method of the invention comprises the following steps:
Step A: muons enter the object to be imaged from above.
Step B: all reconstructed muon tracks are obtained, where each reconstructed muon track is a muon track segment detected above the object to be imaged that satisfies T2 − T1 ≤ ΔT; T1 is the time an incident muon is detected above the object, T2 the time gamma light is detected around the sides of the object, and ΔT a set value.
Step C: a top view, a front view, and a side view of the object are obtained from the reconstructed muon tracks.
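The coincidence selection of step B can be sketched in Python as below; the record types, field names, and the nearest-gamma matching strategy are illustrative assumptions, not taken from the patent.

```python
import bisect
from dataclasses import dataclass

# Hypothetical event records (field names are assumptions for illustration).
@dataclass
class MuonHit:
    t: float       # T1: time the incident muon is detected above the object
    track: tuple   # reconstructed track parameters from the two tracker layers

@dataclass
class GammaHit:
    t: float       # T2: time a gamma is detected by a side scintillator

def select_reconstructed_tracks(muon_hits, gamma_hits, delta_t):
    """Keep a muon track if some gamma follows it within delta_t
    (the T2 - T1 <= delta_t coincidence condition)."""
    gamma_times = sorted(g.t for g in gamma_hits)
    selected = []
    for m in muon_hits:
        # first gamma detected at or after the muon's arrival time
        i = bisect.bisect_left(gamma_times, m.t)
        if i < len(gamma_times) and gamma_times[i] - m.t <= delta_t:
            selected.append(m)
    return selected
```

In a real detector one would also handle accidental coincidences; the sketch only expresses the timing cut itself.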
Reconstructing a top view:
The principle of top-view reconstruction is shown in Fig. 2, a top view in which the thick line segment is a reconstructed muon track of total length X in the XOY plane and the small squares are the K pixel units (10 × 10 = 100 in this embodiment). A reconstructed muon track crosses several pixel units; in Fig. 2 it crosses 5. X1 is the length of the track inside the first crossed pixel unit, and X1/X is defined as the track's contribution degree to that unit; likewise, the contribution degrees to the five crossed pixel units are X1/X, X2/X, X3/X, X4/X, and X5/X.
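A minimal sketch of the contribution-degree computation, assuming axis-aligned square pixel units and a track given by its two endpoints on the XOY plane; the clipping routine and grid layout are illustrative choices, not taken from the patent.

```python
import numpy as np

def seg_len_in_cell(p0, p1, xlo, xhi, ylo, yhi):
    """Length of the segment p0->p1 inside an axis-aligned cell (Liang-Barsky clip)."""
    d = np.subtract(p1, p0).astype(float)
    t0, t1 = 0.0, 1.0
    for axis, (lo, hi) in enumerate(((xlo, xhi), (ylo, yhi))):
        if abs(d[axis]) < 1e-12:
            if not (lo <= p0[axis] <= hi):
                return 0.0          # parallel to this axis pair and outside the slab
        else:
            ta = (lo - p0[axis]) / d[axis]
            tb = (hi - p0[axis]) / d[axis]
            ta, tb = min(ta, tb), max(ta, tb)
            t0, t1 = max(t0, ta), min(t1, tb)
    return max(t1 - t0, 0.0) * np.hypot(*d)

def contribution_row(p0, p1, grid=10, size=100.0):
    """Relative contributions X_ij / X_i of one track in each of grid*grid pixel
    units of a size x size imaging plane with origin at (0, 0)."""
    step = size / grid
    row = np.zeros(grid * grid)
    for r in range(grid):
        for c in range(grid):
            row[r * grid + c] = seg_len_in_cell(
                p0, p1, c * step, (c + 1) * step, r * step, (r + 1) * step)
    total = row.sum()
    return row / total if total > 0 else row
```

For a horizontal track crossing one row of a 10 × 10 grid, each crossed unit receives a contribution of 0.1, matching the X1/X definition.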
Because the conventional MLEM algorithm works with projection values, not with line-segment reconstructed muon tracks, it must be modified.
The steps for obtaining the top view of the object with the modified MLEM algorithm are as follows:
step C101: on a top-view plane XOY perpendicular to the top-view direction of the object, classify the reconstructed muon tracks into M·N classes according to methods F(1)1 and F(1)2, where:
classifying according to method F(1)1 means dividing the reconstructed muon tracks into M classes by the angle θ1 between each track and the X axis in the plane XOY;
classifying according to method F(1)2 means dividing the reconstructed muon tracks into N classes by d1, where
d1 = C1 / sqrt(A1² + B1²)
and A1, B1, C1 are the coefficients of the track's line equation A1·x + B1·y + C1 = 0 in the plane XOY.
In this embodiment M = 180 (the angle interval 0° to 180° is divided at 1° intervals into 180 subintervals), N = 100 (the length interval −50 to 50 is divided at intervals of 1 into 100 subintervals), and K = 100 (10 × 10 pixel units).
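Under the embodiment's parameters (M = 180, N = 100, length interval −50 to 50), the (θ1, d1) classification might be sketched as follows; reading d1 as the signed distance from the track's line to the origin, and the binning conventions, are our assumptions.

```python
import numpy as np

def classify_track(p0, p1, M=180, N=100, d_range=50.0):
    """Map a reconstructed track (two points on the viewing plane) to one of
    M*N classes: M bins over the angle theta1 in [0, 180) degrees and N bins
    over the signed distance d1 in [-d_range, d_range)."""
    (x0, y0), (x1, y1) = p0, p1
    # Line through the two points written as A*x + B*y + C = 0
    A, B = y1 - y0, x0 - x1
    C = -(A * x0 + B * y0)
    theta = np.degrees(np.arctan2(y1 - y0, x1 - x0)) % 180.0  # angle with the X axis
    d = C / np.hypot(A, B)                                    # signed distance to origin
    m = min(int(theta / (180.0 / M)), M - 1)
    n = min(int((d + d_range) / (2 * d_range / N)), N - 1)
    return m * N + n
```

A 45° track through the origin falls in angle bin 45 and the middle distance bin.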
Step C102: obtain K pixel units of the top-view plane XOY and construct the driving matrix A(1)(MN)×K: the contribution degrees of same-class reconstructed muon tracks in each pixel unit are summed and normalized to form the driving matrix.
Its element A(1)ij in row i, column j represents the relative contribution X(1)ij / X(1)i of the i-th class of reconstructed muon tracks in the j-th pixel unit of the XOY plane, where X(1)ij denotes the length of the i-th class of reconstructed muon tracks inside the j-th pixel unit and X(1)i their total length in the XOY plane.
Step C103: construct the expectation matrix P(1)(MN)×1 by counting the number of same-class reconstructed muon tracks as the expected value; its element P(1)i1 in row i, column 1 is the total number of reconstructed muon tracks of the i-th class in the XOY plane.
Step C104: construct the initial matrix f(1)K×1, whose element f(1)j1 in row j, column 1 represents the contribution degree of the j-th pixel unit in the XOY plane and is initialized to 1.
Step C105, matrix f is mapped according to the following method(1)K×1And (3) carrying out iterative solution:
let drive matrix A(1)(MN)×KAnd the initial matrix f(1)K×1Multiplication of the result with the desired matrix P(1)(MN)×1The corresponding elements are divided to obtain a ratio matrix R(1)(MN)×1
Wherein R is(1)(MN)×1Element R of the ith row and 1 st column(1)i1Has a value of
Figure BDA0002887944470000091
i denotes the ith reconstructed mu sub-track and j denotes the jth pixel element.
If R is(1)(MN)×1Not equal to 1, then according to the formula
Figure BDA0002887944470000092
For matrix f(1)K×1The updating is carried out, and the updating is carried out,
Figure BDA0002887944470000093
to updated f(1)K×1The value of the element in row j and column 1,
Figure BDA0002887944470000094
is f before update(1)K×1Row j and column 1 element values; iterate until R(1)(MN)×11 or so.
If R is(1)(MN)×1If 1, the iteration is stopped and the matrix f obtained by the last iteration is calculated(1)K×1As a result matrix f of iterative calculations('1)K×1Outputting;
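The iteration of step C105 can be sketched with the standard multiplicative MLEM update; the tolerance-based stopping rule and the handling of zero denominators are our additions for a runnable sketch.

```python
import numpy as np

def mlem(A, P, n_iter=200, tol=1e-6):
    """Modified-MLEM sketch for steps C102-C105: A is the (M*N) x K driving
    matrix of relative contributions, P the (M*N,) expectation vector of
    track counts; f starts at all ones."""
    f = np.ones(A.shape[1])
    col_sum = A.sum(axis=0)
    col_sum[col_sum == 0] = 1.0          # guard pixels no track ever crosses
    for _ in range(n_iter):
        Af = A @ f
        # Ratio matrix R: expectation over forward projection (1 where Af is 0)
        R = np.divide(P, Af, out=np.ones_like(P, dtype=float), where=Af > 0)
        f = f * (A.T @ R) / col_sum      # multiplicative MLEM update
        if np.all(np.abs(R - 1.0) < tol):
            break                        # R ~= 1: data are matched
    return f
```

With an identity driving matrix the fixed point is simply the expectation vector itself, which is a convenient sanity check.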
step C106: convert the matrix f('1)K×1 output in step C105 into a √K × √K pixel matrix. The elements correspond as follows: entries ((n−1)·√K + 1) through (n·√K) of column 1 of f('1)K×1 become, in order, the n-th row of the pixel matrix.
Step C107: output the picture converted from the pixel matrix as the top view.
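The row-by-row correspondence of step C106 is an ordinary row-major reshape; a sketch with a stand-in result vector (K = 100 as in the embodiment, values are not real reconstruction data).

```python
import numpy as np

# Stand-in for the iteration result f('1)K x 1 with K = 100.
f_result = np.arange(1.0, 101.0)
# Entries (n-1)*sqrt(K)+1 .. n*sqrt(K) of the column become row n of the
# pixel matrix, which is exactly a row-major reshape to sqrt(K) x sqrt(K).
pixels = f_result.reshape(10, 10)
```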
Reconstruction of the front and side views:
Reconstructing the front or side view differs from reconstructing the top view in one respect: reconstructed muon track data cannot be obtained over a full 180°.
As shown in Fig. 3, a front view, two limitations leave only limited-angle data available. First, cosmic-ray muons have an angular distribution: the incident muon flux decreases as the zenith angle θ2 grows. Second, the muon track detectors are of finite size, so the reconstructed tracks passing through them necessarily make no more than some maximum angle with the object to be imaged. The side view is subject to the same limitations.
In this embodiment the maximum detectable zenith angle θ2 is taken to be 45°, i.e., the opening angle of the reconstructed muon tracks on the two sides of the Z axis can reach 90°.
Because conventional imaging algorithms need 180° of data and give poor results with only 90°, this application images the object with the limited-angle algorithm ASD-POCS (adaptive steepest descent and projection onto convex sets; Han-Ming Z, Lin-Yuan W, Bin Y, et al. Image reconstruction based on total-variation minimization and alternating direction method in linear-scan computed tomography. Chinese Physics B, 2013, 22). ASD-POCS combines the ART and TV algorithms: it alternately minimizes the image total variation by steepest descent (SD) and reduces the data distance by projection onto convex sets (POCS), and can reconstruct under sparse-projection or limited-angle conditions.
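A much-simplified ASD-POCS sketch, alternating an ART/POCS data-consistency pass with steepest-descent steps on a smoothed image total variation; the relaxation factor, step sizes, iteration counts, and TV discretization are illustrative assumptions, not the cited implementation.

```python
import numpy as np

def art_step(f, A, p, relax=0.2):
    """One POCS pass: ART projections toward each measurement, then non-negativity."""
    for i in range(A.shape[0]):
        a = A[i]
        na = a @ a
        if na > 0:
            f = f + relax * (p[i] - a @ f) / na * a
    return np.clip(f, 0.0, None)

def tv_gradient(img, eps=1e-8):
    """Analytic gradient of the smoothed isotropic total variation of a 2-D image."""
    f = np.pad(img, 1, mode='edge').astype(float)
    c = f[1:-1, 1:-1]
    up, left = f[:-2, 1:-1], f[1:-1, :-2]
    down, right = f[2:, 1:-1], f[1:-1, 2:]
    down_left, up_right = f[2:, :-2], f[:-2, 2:]
    d_here = np.sqrt((c - up) ** 2 + (c - left) ** 2 + eps)
    d_down = np.sqrt((down - c) ** 2 + (down - down_left) ** 2 + eps)
    d_right = np.sqrt((right - up_right) ** 2 + (right - c) ** 2 + eps)
    return ((c - up) + (c - left)) / d_here - (down - c) / d_down - (right - c) / d_right

def asd_pocs(A, p, shape, n_iter=50, tv_iters=20, tv_step=0.02):
    """Alternate a data step (POCS) with TV steepest descent (SD)."""
    f = np.ones(shape[0] * shape[1])
    for _ in range(n_iter):
        f = art_step(f, A, p)
        img = f.reshape(shape)
        for _ in range(tv_iters):
            g = tv_gradient(img)
            ng = np.linalg.norm(g)
            if ng > 0:
                img = img - tv_step * g / ng   # normalized descent step on TV
        f = img.ravel()
    return f.reshape(shape)
```

When the starting image already satisfies the data (constant image, identity system matrix, uniform measurements), both steps are no-ops and the image is returned unchanged.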
The steps for obtaining the front view of the object with the ASD-POCS algorithm are as follows:
Step C201: on a front-view plane XOZ perpendicular to the front-view direction of the object, classify the reconstructed muon tracks into M·N classes according to methods F(2)1 and F(2)2, where:
classifying according to method F(2)1 means dividing the reconstructed muon tracks into M classes by the angle θ2 between each track and the Z axis in the plane XOZ;
classifying according to method F(2)2 means dividing the reconstructed muon tracks into N classes by d2, where
d2 = C2 / sqrt(A2² + B2²)
and A2, B2, C2 are the coefficients of the track's line equation A2·x + B2·z + C2 = 0 in the plane XOZ.
In this embodiment M = 180 (the angle interval 0° to 180° is divided at 1° intervals into 180 subintervals), N = 100 (the length interval −50 to 50 is divided at intervals of 1 into 100 subintervals), and K = 100 (10 × 10 pixel units).
Step C202, obtaining K pixel units of the front view plane XOZ, and constructing a driving matrix A at the same time(2)(MN)×KAdding the contribution degrees of each pixel unit of the reconstructed mu sub-tracks of the same kind, and normalizing to form a new driving matrix A(2)(MN)×K。A(2)(MN)×KElement A of the ith row and the jth column(2)ijRepresenting the relative contribution in this category of the i-th reconstructed mu-sub-track in the XOZ plane within the j-th pixel cell.
Wherein A is(2)(MN)×KElement A of the ith row and the jth column(2)ijIs the relative contribution X of the ith reconstructed mu sub-track in the jth pixel unit in the XOZ plane(2)ij/X(2)i,X(2)ijDenotes the length of the ith reconstructed mu sub-track in the jth pixel unit in the XOZ plane, X(2)iThe total length of the i-th reconstructed mu-sub track in the XOZ plane is indicated.
Step C203, constructing an expectation matrix P(2)(MN)×1 by counting the number of reconstructed muon tracks in each class as the expected value, where the element P(2)i1 in row i, column 1 of P(2)(MN)×1 is the total number of reconstructed muon tracks of the i-th class in the XOZ plane.

Step C204, constructing an initial matrix f(2)K×1, where the element f(2)j1 in row j, column 1 of f(2)K×1 represents the contribution of the j-th pixel unit in the XOZ plane; each f(2)j1 is initialized to 1.
Step C205, based on the expectation matrix P(2)(MN)×1, performing iterative computation on the matrix f(2)K×1 with the ASD-POCS algorithm, and outputting the iteration result matrix f′(2)K×1.
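Step C205 applies the ASD-POCS algorithm (adaptive steepest descent combined with projection onto convex sets). The text does not reproduce the algorithm's internals, so the following is only a much-simplified illustration of its two alternating ingredients, not the authors' implementation: a data-consistency (ART/POCS) pass toward A·f = p with non-negativity enforced, followed by a few steepest-descent steps on the image's total variation; the adaptive step-size control of the full algorithm is omitted:

```python
import numpy as np

def asd_pocs_sketch(A, p, K, n_iter=50, n_tv=10, tv_step=0.2):
    """Simplified ASD-POCS loop: an ART update toward A f = p (with f >= 0),
    then a few steepest-descent steps on the total variation of the image.
    Illustrative only; the adaptive step-size logic is omitted."""
    f = np.ones(K)
    side = int(np.sqrt(K))
    row_norm = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in np.nonzero(row_norm)[0]:              # ART / POCS pass
            r = (p[i] - A[i] @ f) / row_norm[i]
            f = np.clip(f + r * A[i], 0.0, None)       # enforce non-negativity
        img = f.reshape(side, side)
        for _ in range(n_tv):                          # TV steepest descent
            gx = np.diff(img, axis=1, prepend=img[:, :1])
            gy = np.diff(img, axis=0, prepend=img[:1, :])
            mag = np.sqrt(gx**2 + gy**2) + 1e-8
            div = (np.diff(gx / mag, axis=1, append=(gx / mag)[:, -1:])
                   + np.diff(gy / mag, axis=0, append=(gy / mag)[-1:, :]))
            img = img + tv_step * div                  # descend on the TV gradient
        f = np.clip(img.ravel(), 0.0, None)
    return f
```

With the TV step disabled, the loop reduces to plain ART and converges to the data-consistent solution; the TV steps then trade a little data fidelity for a smoother, less noisy image.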
Step C206, converting the matrix f′(2)K×1 output by step C205 into a √K × √K pixel matrix. The correspondence between their elements is: the ((n−1)·√K+1)-th through (n·√K)-th entries of column 1 of f′(2)K×1 correspond in order to the data of the n-th row of the pixel matrix.
Step C207, converting the pixel matrix into a picture, which is output as the front view.
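The column-to-image conversion of steps C206/C207 places each run of √K consecutive entries of the K × 1 result column into one row of the √K × √K pixel matrix, which in row-major storage is a plain reshape. A sketch (grid parameters as in this embodiment, K = 100):

```python
import numpy as np

def column_to_image(f, K=100):
    """Map the K x 1 iteration result to a sqrt(K) x sqrt(K) pixel matrix:
    entries (n-1)*sqrt(K)+1 .. n*sqrt(K) become row n (row-major reshape)."""
    side = int(np.sqrt(K))
    return np.asarray(f, float).reshape(side, side)
```

The resulting array could then be written out as a picture with any image library (e.g. matplotlib's `imsave`); the choice of library is not specified in the text.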
Similarly, the operation steps of obtaining the side view of the object to be imaged by using the ASD-POCS algorithm are as follows:
Step C301, on a side view plane YOZ perpendicular to the side view direction of the object to be imaged, classifying the reconstructed muon tracks according to method F(3)1 and method F(3)2, thereby dividing them into MN classes, wherein:

classifying the reconstructed muon tracks according to method F(3)1 means: dividing the reconstructed muon tracks into M classes on the basis of the angle θ3 between each reconstructed muon track and the Z axis on the side view plane YOZ;

classifying the reconstructed muon tracks according to method F(3)2 means: dividing the reconstructed muon tracks into N classes on the basis of d3, where

d3 = C3/√(A3² + B3²)

is the signed distance from the origin to the track line, and A3, B3, C3 satisfy the linear equation A3y + B3z + C3 = 0 of the reconstructed muon track on the side view plane YOZ.
In this embodiment, M is 180 (i.e., the angular interval of 0° to 180° is divided at 1° intervals into 180 subintervals), N is 100 (i.e., the length interval of −50 to 50 is divided into 100 equal subintervals), and K is 100 (i.e., a 10 × 10 grid of pixel units).
Step C302, obtaining the K pixel units of the side view plane YOZ while constructing a driving matrix A(3)(MN)×K: the contributions of the reconstructed muon tracks of the same class are added pixel unit by pixel unit and normalized to form the driving matrix A(3)(MN)×K. The element A(3)ij in row i, column j of A(3)(MN)×K represents the relative contribution, within its class, of the i-th class of reconstructed muon tracks inside the j-th pixel unit of the YOZ plane.

Specifically, A(3)ij is the relative contribution X(3)ij/X(3)i of the i-th reconstructed muon track in the j-th pixel unit of the YOZ plane, where X(3)ij denotes the length of the i-th reconstructed muon track inside the j-th pixel unit in the YOZ plane, and X(3)i denotes the total length of the i-th reconstructed muon track in the YOZ plane.
Step C303, constructing an expectation matrix P(3)(MN)×1 by counting the number of reconstructed muon tracks in each class as the expected value, where the element P(3)i1 in row i, column 1 of P(3)(MN)×1 is the total number of reconstructed muon tracks of the i-th class in the YOZ plane.

Step C304, constructing an initial matrix f(3)K×1, where the element f(3)j1 in row j, column 1 of f(3)K×1 represents the contribution of the j-th pixel unit in the YOZ plane; each f(3)j1 is initialized to 1.
Step C305, based on the expectation matrix P(3)(MN)×1, performing iterative computation on the matrix f(3)K×1 with the ASD-POCS algorithm, and outputting the iteration result matrix f′(3)K×1.
Step C306, converting the matrix f′(3)K×1 output by step C305 into a √K × √K pixel matrix. The correspondence between their elements is: the ((n−1)·√K+1)-th through (n·√K)-th entries of column 1 of f′(3)K×1 correspond in order to the data of the n-th row of the pixel matrix.
Step C307, converting the pixel matrix into a picture, which is output as the side view.
As noted under the imaging principle, for medium- and low-Z materials the reconstructed muon track density along a given line grows linearly with the thickness of the object to be imaged; the underlying physical process is examined below.
The variation of the reconstructed muon track density with the thickness of the object to be imaged was simulated for different materials with the Geant4 software; the object to be imaged is a 30 cm × 10 cm × 20 cm cuboid, and the results are shown in FIG. 4. In the simulation the object is thickened step by step along the Z axis, so the ordinate is the reconstructed muon track density along the Z axis.
The image reconstruction quality and efficiency for the different materials can be read from FIG. 4: the more linearly the reconstructed muon track density varies with the thickness of the object to be imaged, the more easily the thickness can be discriminated and the better the imaging quality; the larger the reconstructed muon track density, the larger the data volume and hence the higher the imaging efficiency. Low-Z materials such as water and bone show better linearity but smaller density: the ionization loss of a muon traversing a low-Z material is relatively small, so the cross-section for producing secondary gamma radiation is small and the reconstructed muon track density is low. Their imaging quality is therefore good, but the imaging efficiency is low. A high-Z material such as lead shows a smaller slope and lower density because of its strong self-absorption: the secondary gamma radiation produced by muon ionization loss increases with thickness, but so does the fraction of gammas reabsorbed, so the reconstructed muon track density changes little and rises only slowly; in addition, muons scatter noticeably when passing through high-Z materials, so high-Z materials are not suitable for imaging with this method. Medium-Z materials such as Al, Cu and CaO show both good linearity and large density for thicknesses up to 30 cm. The method is therefore suitable for imaging medium- and low-Z materials, images medium-Z materials particularly well, and has high imaging efficiency for them.
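The material-selection criterion in the paragraph above — linearity of track density versus thickness for image quality, slope and density for efficiency — can be quantified with an ordinary least-squares fit. A sketch; the input arrays here are placeholders, not the Geant4 results of FIG. 4:

```python
import numpy as np

def linearity_metrics(thickness, density):
    """Fit density ~ a*thickness + b by least squares; return (slope, R^2).
    Per the text, R^2 close to 1 (good linearity) and a larger slope/density
    indicate a material well suited to this imaging method."""
    t = np.asarray(thickness, float)
    d = np.asarray(density, float)
    a, b = np.polyfit(t, d, 1)            # slope and intercept
    pred = a * t + b
    ss_res = ((d - pred) ** 2).sum()      # residual sum of squares
    ss_tot = ((d - d.mean()) ** 2).sum()  # total sum of squares
    return a, 1.0 - ss_res / ss_tot       # R^2 = coefficient of determination
```

Applied to the per-material curves of FIG. 4, such a fit would make "better linearity" and "smaller slope" directly comparable across materials.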
Two specific application examples are given below.
Application example 1
The geometric model constructed with the Geant4 software is shown in FIG. 5 and consists of two layers of muon track detectors above, four scintillator detectors surrounding the sides, and the object to be imaged at the center. The object to be imaged is filled with CaO; the radioactive source is a muon source with a cos²θ angular distribution and an energy of 2 GeV. The volume of the object to be imaged is set to a 30 cm × 10 cm × 20 cm cuboid. In FIG. 5 the nearly vertical rays are the incident muon tracks, and the nearly horizontal rays are the secondary gamma radiation produced. During the simulation, an incident muon track is recorded as a reconstructed muon track only when both the two layers of muon track detectors and the scintillator detectors respond.
FIG. 6 shows the three-view imaging result of the present invention for CaO. FIG. 6a is the front view reconstructed with the ASD-POCS algorithm; FIG. 6b is the side view reconstructed with the ASD-POCS algorithm; FIG. 6c is the top view reconstructed with the improved MLEM method; FIG. 6d is the geometric model of the object to be imaged in Geant4. As can be seen from FIG. 6, the size and contour of the images reconstructed by the two algorithms match those of the geometric model. The top view reconstructed by the improved MLEM algorithm achieves sharp corners at the edges, whereas the images reconstructed by the ASD-POCS algorithm show slight distortion of the edge contour and contain local high-density pixels, because the Geant4 data do not let the ASD-POCS solution converge sufficiently, leaving some noise points in the reconstructed images.
Application example two
The geometric model constructed with the Geant4 software is partially modified relative to FIG. 5: the object to be imaged is replaced by three 30 cm × 10 cm × 20 cm cuboids, filled with water, CaO and Pb, respectively. The three-view imaging results of the present invention for water, CaO and Pb are compared in FIG. 7. The top view of FIG. 7c shows that CaO images well; the water image is relatively dim, which is caused by low reconstruction efficiency and an insufficient data volume and can be compensated by extending the measurement time; the Pb image has incomplete pixels and a certain distortion, because the slope of Pb in FIG. 4 is small, making it difficult to judge the thickness accurately from the reconstructed muon track density, so its imaging quality is poor. In the front view of FIG. 7a and the side view of FIG. 7b, the CaO in the middle images well with little distortion; both water and Pb show some image distortion, that of Pb being large.
In summary, the actual imaging results for water, CaO and Pb meet the expectations summarized from FIG. 4: the method of the present invention is suitable for imaging medium- and low-Z materials, images medium-Z materials better, and cannot image high-Z materials reliably, their images showing noticeable distortion.
In conclusion, the method achieves imaging of small-volume, medium- and low-Z objects. When a single medium- or low-Z object is imaged, the size of the reconstructed image matches that of the object to be imaged and its edges and corners are sharp. When several medium- and low-Z objects are imaged, the noise points increase, but the reconstructed image can still distinguish medium-Z from low-Z materials by brightness, and their sizes remain matched.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (7)

1. A muon imaging method, comprising the steps of:

step A, causing muons to be incident on an object to be imaged from above the object to be imaged;

step B, obtaining all reconstructed muon tracks, wherein each reconstructed muon track is a muon track segment that is detected above the object to be imaged and satisfies T2 − T1 ≤ ΔT, T1 being the time point at which the incident muon is detected above the object to be imaged, T2 being the time point at which gamma light is detected around the sides of the object to be imaged, and ΔT being a set value;

and step C, obtaining a top view, a front view and a side view of the object to be imaged, respectively, based on the reconstructed muon tracks.
2. The muon imaging method according to claim 1, wherein in step C, the top view of the object to be imaged is obtained by the following steps:
step C101, on a top view plane XOY perpendicular to the top view direction of the object to be imaged, classifying the reconstructed muon tracks according to method F(1)1 and method F(1)2, thereby dividing them into MN classes, wherein:

classifying the reconstructed muon tracks according to method F(1)1 means: dividing the reconstructed muon tracks into M classes on the basis of the angle θ1 between each reconstructed muon track and the X axis on the top view plane XOY;

classifying the reconstructed muon tracks according to method F(1)2 means: dividing the reconstructed muon tracks into N classes on the basis of d1, where

d1 = C1/√(A1² + B1²)

is the signed distance from the origin to the track line, and A1, B1, C1 satisfy the linear equation A1x + B1y + C1 = 0 of the reconstructed muon track on the top view plane XOY;
Step C102, obtaining K pixel units of the top view plane XOY, and constructing a driving matrix A at the same time(1)(MN)×KWherein A is(1)(MN)×KElement A of the ith row and the jth column(1)ijIs the relative contribution X of the ith reconstructed mu sub-track in the jth pixel unit in the XOY plane(1)ij/X(1)i,X(1)ijDenotes the length of the i-th reconstructed mu sub-track in the j-th pixel unit in the XOY plane, X(1)iRepresents the total length of the ith reconstructed mu sub-track in the XOY plane;
step C103, constructing an expected matrix P(1)(MN)×1Wherein P is(1)(MN)×1Element P of ith row and 1 st column(1)i1The value of (d) is the total number of i reconstructed mu sub-tracks in the XOY plane;
step C104, constructing a matrix f(1)K×1Wherein f is(1)K×1Element f of j (th) row and 1 (th) column(1)j×1Is 1;
step C105, iteratively solving the matrix f(1)K×1 as follows:

computing a ratio matrix R(1)(MN)×1, wherein the element R(1)i1 in row i, column 1 of R(1)(MN)×1 has the value

R(1)i1 = P(1)i1 / Σk A(1)ik·f(1)k1 (summation over k = 1 to K);

if R(1)(MN)×1 ≠ 1, updating the matrix f(1)K×1 according to the formula

f(1)j1 ← f(1)j1 · (Σi A(1)ij·R(1)i1) / (Σi A(1)ij) (summations over i = 1 to MN),

wherein the left-hand side is the updated value of the element in row j, column 1 of f(1)K×1 and the right-hand side uses the element values before the update;

if R(1)(MN)×1 = 1, stopping the iteration and outputting the matrix f(1)K×1 obtained in the last iteration as the iteration result matrix f′(1)K×1;
step C106, converting the matrix f′(1)K×1 output by step C105 into a √K × √K pixel matrix;

step C107, converting the pixel matrix into a picture, which is output as the top view.
3. The muon imaging method according to claim 2, wherein in step C106 the correspondence between the elements of f′(1)K×1 and the pixel matrix is: the ((n−1)·√K+1)-th through (n·√K)-th entries of column 1 of f′(1)K×1 correspond in order to the data of the n-th row of the pixel matrix.
4. The muon imaging method according to claim 1, wherein in step C, the front view of the object to be imaged is obtained by the following steps:
step C201, on a front view plane XOZ perpendicular to the front view direction of the object to be imaged, classifying the reconstructed muon tracks according to method F(2)1 and method F(2)2, thereby dividing them into MN classes, wherein:

classifying the reconstructed muon tracks according to method F(2)1 means: dividing the reconstructed muon tracks into M classes on the basis of the angle θ2 between each reconstructed muon track and the Z axis on the front view plane XOZ;

classifying the reconstructed muon tracks according to method F(2)2 means: dividing the reconstructed muon tracks into N classes on the basis of d2, where

d2 = C2/√(A2² + B2²)

is the signed distance from the origin to the track line, and A2, B2, C2 satisfy the linear equation A2x + B2z + C2 = 0 of the reconstructed muon track on the front view plane XOZ;
step C202, obtaining K pixel units of the front view plane XOZ while constructing a driving matrix A(2)(MN)×K, wherein the element A(2)ij in row i, column j of A(2)(MN)×K is the relative contribution X(2)ij/X(2)i of the i-th reconstructed muon track in the j-th pixel unit of the XOZ plane, X(2)ij denotes the length of the i-th reconstructed muon track inside the j-th pixel unit in the XOZ plane, and X(2)i denotes the total length of the i-th reconstructed muon track in the XOZ plane;

step C203, constructing an expectation matrix P(2)(MN)×1, wherein the element P(2)i1 in row i, column 1 of P(2)(MN)×1 is the total number of reconstructed muon tracks of the i-th class in the XOZ plane;

step C204, constructing a matrix f(2)K×1, wherein the element f(2)j1 in row j, column 1 of f(2)K×1 is 1;
step C205, based on the expectation matrix P(2)(MN)×1, performing iterative computation on the matrix f(2)K×1 with the ASD-POCS algorithm, and outputting the iteration result matrix f′(2)K×1;
step C206, converting the matrix f′(2)K×1 output by step C205 into a √K × √K pixel matrix;

step C207, converting the pixel matrix into a picture, which is output as the front view.
5. The muon imaging method according to claim 4, wherein in step C206 the correspondence between the elements of f′(2)K×1 and the pixel matrix is: the ((n−1)·√K+1)-th through (n·√K)-th entries of column 1 of f′(2)K×1 correspond in order to the data of the n-th row of the pixel matrix.
6. The muon imaging method according to claim 1, wherein in step C, the side view of the object to be imaged is obtained by the following steps:
step C301, on a side view plane YOZ perpendicular to the side view direction of the object to be imaged, classifying the reconstructed muon tracks according to method F(3)1 and method F(3)2, thereby dividing them into MN classes, wherein:

classifying the reconstructed muon tracks according to method F(3)1 means: dividing the reconstructed muon tracks into M classes on the basis of the angle θ3 between each reconstructed muon track and the Z axis on the side view plane YOZ;

classifying the reconstructed muon tracks according to method F(3)2 means: dividing the reconstructed muon tracks into N classes on the basis of d3, where

d3 = C3/√(A3² + B3²)

is the signed distance from the origin to the track line, and A3, B3, C3 satisfy the linear equation A3y + B3z + C3 = 0 of the reconstructed muon track on the side view plane YOZ;
step C302, obtaining K pixel units of the side view plane YOZ while constructing a driving matrix A(3)(MN)×K, wherein the element A(3)ij in row i, column j of A(3)(MN)×K is the relative contribution X(3)ij/X(3)i of the i-th reconstructed muon track in the j-th pixel unit of the YOZ plane, X(3)ij denotes the length of the i-th reconstructed muon track inside the j-th pixel unit in the YOZ plane, and X(3)i denotes the total length of the i-th reconstructed muon track in the YOZ plane;

step C303, constructing an expectation matrix P(3)(MN)×1, wherein the element P(3)i1 in row i, column 1 of P(3)(MN)×1 is the total number of reconstructed muon tracks of the i-th class in the YOZ plane;

step C304, constructing a matrix f(3)K×1, wherein the element f(3)j1 in row j, column 1 of f(3)K×1 is 1;
step C305, based on the expectation matrix P(3)(MN)×1, performing iterative computation on the matrix f(3)K×1 with the ASD-POCS algorithm, and outputting the iteration result matrix f′(3)K×1;
step C306, converting the matrix f′(3)K×1 output by step C305 into a √K × √K pixel matrix;

step C307, converting the pixel matrix into a picture, which is output as the side view.
7. The muon imaging method according to claim 6, wherein in step C306 the correspondence between the elements of f′(3)K×1 and the pixel matrix is: the ((n−1)·√K+1)-th through (n·√K)-th entries of column 1 of f′(3)K×1 correspond in order to the data of the n-th row of the pixel matrix.
CN202110019425.7A 2021-01-07 2021-01-07 Mu sub imaging method Active CN112807004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110019425.7A CN112807004B (en) 2021-01-07 2021-01-07 Mu sub imaging method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110019425.7A CN112807004B (en) 2021-01-07 2021-01-07 Mu sub imaging method

Publications (2)

Publication Number Publication Date
CN112807004A true CN112807004A (en) 2021-05-18
CN112807004B CN112807004B (en) 2022-07-19

Family

ID=75868625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110019425.7A Active CN112807004B (en) 2021-01-07 2021-01-07 Mu sub imaging method

Country Status (1)

Country Link
CN (1) CN112807004B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060180753A1 (en) * 2005-02-17 2006-08-17 Douglas Bryman Geological tomography using cosmic rays
US20080315091A1 (en) * 2007-04-23 2008-12-25 Decision Sciences Corporation Los Alamos National Security, LLC Imaging and sensing based on muon tomography
US7470905B1 (en) * 2007-01-04 2008-12-30 Celight, Inc. High Z material detection system and method
CN102203637A (en) * 2008-08-27 2011-09-28 洛斯阿拉莫斯国家安全股份有限公司 Imaging based on cosmic-ray produced charged particles
CN103308938A (en) * 2013-05-29 2013-09-18 清华大学 Muon energy and track measuring and imaging system and method
CN104181575A (en) * 2013-05-21 2014-12-03 环境保护部核与辐射安全中心 Muon imaging method and vehicle-mounted radioactive material monitoring system
US20150287237A1 (en) * 2014-04-04 2015-10-08 Decision Sciences International Corporation Muon tomography imaging improvement using optimized limited angle data
CN105549103A (en) * 2016-01-22 2016-05-04 清华大学 Method, device and system for inspecting moving object based on cosmic rays
CN111290039A (en) * 2020-01-20 2020-06-16 中国工程物理研究院材料研究所 Method for detecting heavy nuclear materials in cylindrical container


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KONSTANTIN N. BOROZDIN, ET AL: "Passive imaging of SNM with cosmic-ray generated neutrons and gamma-rays", 《IEEE NUCLEAR SCIENCE SYMPOSIUM & MEDICAL IMAGING CONFERENCE》 *
P. M. JENNESON: "Large vessel imaging using cosmic-ray muons", 《NUCLEAR INSTRUMENTS AND METHODS IN PHYSICS RESEARCH》 *
YU, BAIHUI: "Research on image reconstruction and imaging quality analysis methods for cosmic-ray muon imaging", 《CHINA DOCTORAL DISSERTATIONS FULL-TEXT DATABASE (BASIC SCIENCES)》 *
ZHI, YU, ET AL: "Simulation and algorithm study of cosmic-ray muon scattering imaging", 《ATOMIC ENERGY SCIENCE AND TECHNOLOGY》 *
WANG, XIAODONG, ET AL: "Study on image reconstruction algorithms for high-Z materials based on multiple Coulomb scattering of cosmic-ray muons", 《18TH NATIONAL ANNUAL CONFERENCE ON NUCLEAR ELECTRONICS AND NUCLEAR DETECTION TECHNOLOGY》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113576504A (en) * 2021-08-02 2021-11-02 南华大学 Mu-sub imaging method for medium-low atomic number substances
CN113576504B (en) * 2021-08-02 2023-06-27 南华大学 Mu sub-imaging method for medium-low atomic number substance

Also Published As

Publication number Publication date
CN112807004B (en) 2022-07-19

Similar Documents

Publication Publication Date Title
CN101606082B (en) Statistical tomographic reconstruction based on measurements of charged particles
US8304737B2 (en) Apparatus and method to achieve high-resolution microscopy with non-diffracting or refracting radiation
US7888651B2 (en) Method and system for using tissue-scattered coincidence photons for imaging
US9639973B2 (en) Muon tomography imaging improvement using optimized limited angle data
Morishima et al. Development of nuclear emulsions for muography
CN112807004B (en) Mu sub imaging method
Huth A high rate testbeam data acquisition system and characterization of high voltage monolithic active pixel sensors
Luo et al. Hybrid model for muon tomography and quantitative analysis of image quality
Ji et al. A novel 4D resolution imaging method for low and medium atomic number objects at the centimeter scale by coincidence detection technique of cosmic-ray muon and its secondary particles
Aehle et al. Progress in End-to-End Optimization of Detectors for Fundamental Physics with Differentiable Programming
CN113576504B (en) Mu sub-imaging method for medium-low atomic number substance
Liu et al. Muon-computed tomography using POCA trajectory for imaging spent nuclear fuel in dry storage casks
Nguyen et al. Apparent image formation by Compton-scattered photons in gamma-ray imaging
Aliberti Particle Identification with Calorimeters for the Measurement of the Rare Decay $ K^{+}\to\pi^{+}\nu\bar\nu $ at NA62
Oksuz Exploring fast neutron computed tomography for non-destructive evaluation of additive manufactured parts
Ahn et al. A computer code for the simulation of X-ray imaging systems
CN117893687A (en) Miao Zishu-flow-oriented imaging software framework
Powers-Luhn Neural-net-based imager offset estimation in fieldable associated particle imaging
Connors et al. How to win with non-Gaussian data: Poisson goodness-of-fit
Georgadze Fast verification of spent nuclear fuel dry casks using cosmic ray muons: Monte Carlo simulation study
Wright A detector for muon tomography: Data acquisition and preliminary results
Valencia-Rodriguez Neutrino-Electron Scattering in MINER $\nu $ A for Constraint NuMI Flux at Medium
Leea et al. Interpolation-Based Reconstructions for Raster-Scanned Backscatter X-ray Radiography
Hamar et al. Imaging via Cosmic Muon Induced Secondaries
Hughes Measurement of Jet Constituent Yields in Pb-Pb Collisions at√ sNN= 5.02 TeV Using the ALICE Detector

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant