WO2006062132A1 - Stereoscopic Image Reconstruction Device, Stereoscopic Image Reconstruction Method, and Stereoscopic Image Reconstruction Program - Google Patents
- Publication number
- WO2006062132A1 (PCT/JP2005/022471)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01J—ELECTRIC DISCHARGE TUBES OR DISCHARGE LAMPS
- H01J37/00—Discharge tubes with provision for introducing objects or material to be exposed to the discharge, e.g. for the purpose of examination or processing thereof
- H01J37/02—Details
- H01J37/22—Optical, image processing or photographic arrangements associated with the tube
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N23/00—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
- G01N23/02—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
- G01N23/04—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material
- G01N23/046—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material using tomography, e.g. computed tomography [CT]
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2223/00—Investigating materials by wave or particle radiation
- G01N2223/40—Imaging
- G01N2223/419—Imaging computed tomograph
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01J—ELECTRIC DISCHARGE TUBES OR DISCHARGE LAMPS
- H01J2237/00—Discharge tubes exposing object to beam, e.g. for analysis treatment, etching, imaging
- H01J2237/22—Treatment of data
- H01J2237/221—Image processing
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01J—ELECTRIC DISCHARGE TUBES OR DISCHARGE LAMPS
- H01J2237/00—Discharge tubes exposing object to beam, e.g. for analysis treatment, etching, imaging
- H01J2237/26—Electron or ion microscopes
- H01J2237/2611—Stereoscopic measurements and/or imaging
Definitions
- Stereoscopic image reconstruction device, stereoscopic image reconstruction method, and stereoscopic image reconstruction program
- The present invention relates to a stereoscopic image reconstruction device, a stereoscopic image reconstruction method, and a stereoscopic image reconstruction program.
- In particular, the present invention relates to a stereoscopic image reconstruction device, a stereoscopic image reconstruction method, and a stereoscopic image reconstruction program for reconstructing a stereoscopic image showing the three-dimensional structure of a target based on images obtained by imaging the target.
- This application is related to the following Japanese application. For designated countries where incorporation by reference is permitted, the contents described in the following application are incorporated into this application by reference and made a part of this application.
- Conventionally, a stereoscopic image reconstruction device has been proposed that reconstructs a stereoscopic image showing the three-dimensional structure of a target based on the overlap of shades obtained when a plurality of transmission images, captured from a plurality of angles using a transmission electron microscope, are stretched in their imaging directions and superimposed.
- A stereoscopic image reconstruction device has also been proposed that reconstructs the stereoscopic image of the target more accurately based not only on the overlap of shades when the plurality of transmission images are stretched and superimposed, but also on shape information indicating the outline of the target calculated from the plurality of transmission images (see, for example, Patent Document 1).
- Patent Document 1: International Publication No. WO 2002/048961
- However, the region indicated by the overlap of shades obtained by stretching and superimposing a plurality of transmission images also includes regions, called ghosts, where the target does not actually exist, so the accuracy of a stereoscopic image reconstructed based only on the overlap of shades is low.
- Furthermore, since a stereoscopic image reconstructed additionally from shape information indicating the outline of the target does not include information about the internal structure of the target, the stereoscopic image reconstruction device described in Patent Document 1 cannot be used for the purpose of learning the internal structure of the target.
- Accordingly, an object of the present invention is to provide a stereoscopic image reconstruction device, a stereoscopic image reconstruction method, and a stereoscopic image reconstruction program that can solve the above-described problems.
- This object is achieved by a combination of features described in the independent claims.
- the dependent claims define further advantageous specific examples of the present invention.
- To achieve this object, there is provided a stereoscopic image reconstruction device for reconstructing a stereoscopic image showing the stereoscopic structure of a target based on images obtained by imaging the target.
- The device includes an imaging unit that captures a plurality of transmission images including the target, shown in shades, by transmitting an electron beam through the target from a plurality of angles; a feature region selection unit that selects a plurality of feature regions included in the target in each of the plurality of transmission images; a feature region distribution calculation unit that calculates the spatial distribution of the plurality of feature regions in the entire target; and a stereoscopic image reconstruction unit that reconstructs a stereoscopic image in which shades are assigned to the entire target.
- The stereoscopic image reconstruction device may further include a stereoscopic structure information input unit that inputs information indicating an outline of the stereoscopic structure of the target, and an enhancement processing unit that performs, on each of the plurality of transmission images, image processing for enhancing the target based on the input outline of the stereoscopic structure; the feature region selection unit may then select, in each of the plurality of transmission images, the plurality of feature regions included in the target as enhanced by the enhancement processing unit.
- The feature region distribution calculation unit may calculate the spatial position of each of the plurality of feature regions in the entire target based on the position of each of the plurality of feature regions in each of the plurality of transmission images.
- The imaging unit may capture a plurality of transmission images including a target having a three-dimensional structure based on a density distribution, without staining its interior.
- The stereoscopic image reconstruction device may further include a convergence processing calculation unit that, for a feature region whose stereoscopic structure is known among the plurality of feature regions, calculates an image processing operation that causes the shading of the stereoscopic image of that feature region reconstructed by the stereoscopic image reconstruction unit to converge to the known stereoscopic structure, and a convergence processing unit that, for a feature region whose stereoscopic structure is unknown among the plurality of feature regions, executes the image processing operation calculated by the convergence processing calculation unit on the shading of the stereoscopic image of that feature region reconstructed by the stereoscopic image reconstruction unit.
- The stereoscopic image reconstruction unit may include a thickness calculation unit that calculates the thickness of the target in the transmission direction of the electron beam based on the spatial distribution of the plurality of feature regions calculated by the feature region distribution calculation unit, and a minute region allocation unit that divides the region within the target thickness calculated by the thickness calculation unit into a plurality of minute regions and reconstructs the stereoscopic image of the target by allocating, to each of the plurality of minute regions, shades that match the shades of the plurality of transmission images captured by the imaging unit.
- The minute region allocation unit may divide each of the plurality of transmission images into a plurality of minute areas corresponding to the minute regions within the target thickness, and allocate shades to the minute regions based on the shade of each of the minute areas of the plurality of transmission images.
- The stereoscopic image reconstruction device may further include a stereoscopic structure information input unit that inputs known information on the stereoscopic structure of the target, and the minute region allocation unit may allocate shades to the plurality of minute regions so as to match the known information input by the stereoscopic structure information input unit.
- According to another aspect of the present invention, there is provided a stereoscopic image reconstruction device that reconstructs a stereoscopic image showing the stereoscopic structure of a target based on images obtained by imaging the target.
- The device includes an imaging unit that captures a plurality of transmission images including the target, shown in shades, by transmitting an electron beam through the target from a plurality of angles, and a minute region allocation unit that divides the region within a predetermined thickness of the target into a plurality of minute regions and reconstructs the stereoscopic image of the target by allocating, to each of the plurality of minute regions, shades that match the shades of the plurality of transmission images captured by the imaging unit.
- The minute region allocation unit may divide each of the plurality of transmission images into a plurality of minute areas corresponding to the minute regions within the target thickness, assign to each minute area of each transmission image an integer value proportional to its shade, and assign to each minute region within the thickness one of two binary values, light or dark, such that the total of the binary values of the minute regions within the thickness, as viewed from the angle at which each transmission image was captured, matches the integer value of the corresponding minute area of that transmission image.
- The stereoscopic image reconstruction device may further include a stereoscopic structure information input unit that inputs known information about the stereoscopic structure of the target, and the minute region allocation unit may allocate shades to the plurality of minute regions so as to match the known information input by the stereoscopic structure information input unit.
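The binary allocation described above is an instance of discrete (binary) tomography: each minute region receives a 0/1 value so that the sums along each viewing direction reproduce the integer projection values. The patent does not specify an algorithm; the following is a minimal sketch for the special case of two orthogonal viewing directions, using a greedy Gale-Ryser-style construction (the function name and grid setup are illustrative, not taken from the patent).

```python
def reconstruct_binary(row_sums, col_sums):
    """Build a 0/1 grid whose row sums and column sums match the given
    integer projections (two orthogonal viewing directions)."""
    n_rows, n_cols = len(row_sums), len(col_sums)
    remaining = list(col_sums)
    grid = [[0] * n_cols for _ in range(n_rows)]
    # Process rows in decreasing row-sum order; always place this row's
    # 1s in the columns with the largest remaining column sums.
    for r in sorted(range(n_rows), key=lambda k: -row_sums[k]):
        cols = sorted(range(n_cols), key=lambda c: -remaining[c])[:row_sums[r]]
        for c in cols:
            grid[r][c] = 1
            remaining[c] -= 1
    if any(remaining):
        raise ValueError("projections are inconsistent")
    return grid
```

With more than two viewing angles, as the claim envisages, the same constraint-matching idea applies but generally requires an iterative or combinatorial solver rather than this closed-form greedy pass.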
- According to another aspect of the present invention, there is provided a stereoscopic image reconstruction method for reconstructing a stereoscopic image showing the stereoscopic structure of a target based on images obtained by imaging the target.
- The method includes an imaging stage of capturing a plurality of transmission images including the target, shown in shades, by transmitting an electron beam through the target from a plurality of angles; a feature region selection stage of selecting a plurality of feature regions included in the target in each of the plurality of transmission images; a feature region distribution calculation stage of calculating the spatial distribution of the plurality of feature regions in the entire target; and a stereoscopic image reconstruction stage of reconstructing a stereoscopic image in which shades are assigned to the entire target by assigning the shades indicated by the plurality of transmission images to the positions of the plurality of feature regions and reconstructing a stereoscopic image showing the stereoscopic structure in each of the plurality of feature regions.
- According to another aspect of the present invention, there is provided a stereoscopic image reconstruction method for reconstructing a stereoscopic image showing the stereoscopic structure of a target based on images obtained by imaging the target.
- The method includes an imaging stage of capturing a plurality of transmission images including the target, shown in shades, by transmitting an electron beam through the target from a plurality of angles, and a minute region allocation stage of dividing the region within a predetermined thickness of the target into a plurality of minute regions and reconstructing the stereoscopic image of the target by allocating, to each of the plurality of minute regions, shades that match the shades of the plurality of transmission images captured in the imaging stage.
- According to another aspect of the present invention, there is provided a stereoscopic image reconstruction program that causes a computer to function as a stereoscopic image reconstruction device that reconstructs a stereoscopic image showing the stereoscopic structure of a target based on images obtained by imaging the target.
- The program causes the computer to function as: an imaging unit that captures a plurality of transmission images including the target, shown in shades, by transmitting an electron beam through the target from a plurality of angles; a feature region selection unit that selects a plurality of feature regions included in the target in each of the plurality of transmission images; a feature region distribution calculation unit that calculates the spatial distribution of the plurality of feature regions in the entire target by calculating the spatial position of each of the plurality of feature regions based on the positions of the plurality of feature regions selected in each of the plurality of transmission images; and a stereoscopic image reconstruction unit that reconstructs a stereoscopic image in which shades are assigned to the entire target by assigning the shades indicated by the plurality of transmission images to the positions of the plurality of feature regions and reconstructing a stereoscopic image showing the stereoscopic structure in each of the plurality of feature regions.
- According to another aspect of the present invention, there is provided a stereoscopic image reconstruction program that causes a computer to function as a stereoscopic image reconstruction device that reconstructs a stereoscopic image showing the stereoscopic structure of a target based on images obtained by imaging the target.
- The program causes the computer to function as an imaging unit that captures a plurality of transmission images including the target, shown in shades, by transmitting an electron beam through the target from a plurality of angles, and a minute region allocation unit that divides the region within a predetermined thickness of the target into a plurality of minute regions and reconstructs the stereoscopic image of the target by allocating shades to each of the plurality of minute regions.
- FIG. 1 is a block diagram showing an example of a functional configuration of a stereoscopic image reconstruction device 10 according to an embodiment of the present invention.
- FIG. 2 is a diagram showing a transmissive image 300a that is a first example of a transmissive image captured by the imaging unit 100 according to the embodiment of the present invention.
- FIG. 3 is a diagram showing a transmissive image 300b that is a second example of a transmissive image captured by the imaging unit 100 according to the embodiment of the present invention.
- FIG. 4 is a diagram showing a transmissive image 300c, which is a third example of a transmissive image captured by the imaging unit 100 according to the embodiment of the present invention.
- FIG. 5 is a diagram showing an example of a superimposing process of a plurality of transmission images by the stereoscopic image reconstruction unit 150 according to the embodiment of the present invention.
- FIG. 6 is a diagram showing an example of a stereoscopic image reconstruction process by the stereoscopic image reconstruction unit 150 according to the embodiment of the present invention.
- FIG. 7 is a diagram showing an example of processing in the enhancement processing unit 120 according to the embodiment of the present invention.
- FIG. 8 is a diagram showing an example of processing in a convergence processing calculation unit 160 according to the embodiment of the present invention.
- FIG. 9 is a flowchart showing an example of a processing flow in a stereoscopic image reconstruction method using the stereoscopic image reconstruction device 10 according to the embodiment of the present invention.
- FIG. 10 is a block diagram showing an example of a hardware configuration of a computer 1500 according to the embodiment of the present invention.
- FIG. 11 is a block diagram showing an example of a functional configuration of a stereoscopic image reconstruction device 20 according to another embodiment of the present invention.
- FIG. 12 is a flowchart showing an example of a processing flow in a stereoscopic image reconstruction method using the stereoscopic image reconstruction device 20 according to the embodiment of the present invention.
- FIG. 13 is a diagram schematically showing a target 202 imaged by the imaging unit 100 according to the embodiment of the present invention.
- FIG. 14 is a diagram showing transmissive images 302a and 302b captured by the image capturing unit 100 according to the embodiment of the present invention.
- FIG. 15 is a plan view schematically showing the thickness of the target 202 calculated by the thickness calculation unit 152 according to the embodiment of the present invention.
- FIG. 16 is a diagram showing histograms of transmissive images 302a and 302b captured by the image capturing unit 100 according to the embodiment of the present invention.
- FIG. 17 is a diagram showing a minute region within the thickness of the target 202 divided by the minute region allocating unit 154 according to the embodiment of the present invention.
- FIG. 18 is a block diagram showing an example of a functional configuration of a stereoscopic image reconstruction device 30 according to still another embodiment of the present invention.
- FIG. 19 is a flowchart showing an example of a processing flow in a stereoscopic image reconstruction method using the stereoscopic image reconstruction device 30 according to the embodiment of the present invention.
- 10 stereoscopic image reconstruction device, 20 stereoscopic image reconstruction device, 30 stereoscopic image reconstruction device, 100 imaging unit, 110 stereoscopic structure information input unit, 120 enhancement processing unit, 130 feature region selection unit, 140 feature region distribution calculation unit, 150 stereoscopic image reconstruction unit, 152 thickness calculation unit, 154 minute region allocation unit, 160 convergence processing calculation unit, 170 convergence processing unit, 180 output unit, 200 target, 202 target, 210-220 axis, 300a-c transmission image, 302a-b transmission image, 400a-c shade distribution information
- FIG. 1 is a block diagram showing an example of a functional configuration of the stereoscopic image reconstruction device 10 according to the embodiment of the present invention.
- The stereoscopic image reconstruction device 10 reconstructs a stereoscopic image showing the three-dimensional structure of a target, such as a cell or a protein, based on images obtained by imaging the target.
- An object of the stereoscopic image reconstruction device 10 according to the embodiment of the present invention is to reconstruct a stereoscopic image including not only the shape information indicating the outline of the object but also information regarding the internal structure of the object.
- The stereoscopic image reconstruction device 10 according to the embodiment of the present invention further aims to reconstruct a highly accurate stereoscopic image from a smaller number of images than conventional stereoscopic image reconstruction devices.
- The stereoscopic image reconstruction device 10 includes an imaging unit 100, a stereoscopic structure information input unit 110, an enhancement processing unit 120, a feature region selection unit 130, a feature region distribution calculation unit 140, a stereoscopic image reconstruction unit 150, a convergence processing calculation unit 160, a convergence processing unit 170, and an output unit 180.
- The imaging unit 100, for example a transmission electron microscope, captures a plurality of transmission images including the target, shown in shades, by transmitting an electron beam through the target from a plurality of angles.
- the imaging unit 100 may capture a plurality of transmission-type images including an object whose interior is stained by a negative staining method or the like, or an object having a three-dimensional structure based on a density distribution without being stained. Then, the imaging unit 100 outputs the captured plurality of transmission images to the enhancement processing unit 120 and the stereoscopic image reconstruction unit 150.
- the three-dimensional structure information input unit 110 inputs an outline of a target three-dimensional structure based on, for example, a user operation. Then, the 3D structure information input unit 110 outputs information indicating the outline of the input 3D structure to the enhancement processing unit 120 and the convergence processing calculation unit 160.
- the enhancement processing unit 120 performs image processing for emphasizing a target for each of a plurality of transmission-type images captured by the imaging unit 100 based on an outline of the target three-dimensional structure input by the three-dimensional structure information input unit 110. Execute. Then, the enhancement processing unit 120 outputs a plurality of transmission-type images that have been subjected to image processing for enhancing the target to the feature region selection unit 130.
- the feature region selection unit 130 selects a plurality of feature regions included in the object in each of the plurality of transmission images received from the enhancement processing unit 120.
- The feature region selection unit 130 outputs, to the feature region distribution calculation unit 140, each of the plurality of transmission images together with information indicating the plurality of feature regions selected in each of them, for example the position, shape, and size of each feature region.
- The feature region distribution calculation unit 140 receives, from the feature region selection unit 130, the plurality of transmission images and the information indicating each of the plurality of feature regions selected in each of the plurality of transmission images.
- The feature region distribution calculation unit 140 calculates the spatial distribution of the plurality of feature regions in the entire target by calculating the spatial position of each feature region based on the positions of the feature regions selected in each of the plurality of transmission images.
- the feature region distribution calculation unit 140 outputs information indicating the calculated distributions of the plurality of feature regions to the stereoscopic image reconstruction unit 150.
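The patent does not state how the spatial position of each feature region is computed from its positions in the tilted transmission images. One standard possibility, sketched below under the assumptions of parallel projection and tilting about a single known axis, is a least-squares triangulation; the function name and geometry are illustrative, not taken from the patent.

```python
import numpy as np

def locate_feature(tilts_deg, u_obs):
    """Estimate the (x, z) position of a feature from its projected
    x-coordinates in images tilted about the y-axis.

    Under parallel projection, a point at (x, z) projects to
    u = x*cos(theta) + z*sin(theta) at tilt angle theta, so three or
    more (theta, u) observations give an overdetermined linear system.
    """
    t = np.radians(np.asarray(tilts_deg, dtype=float))
    A = np.column_stack([np.cos(t), np.sin(t)])
    (x, z), *_ = np.linalg.lstsq(A, np.asarray(u_obs, dtype=float), rcond=None)
    return x, z
```

With noisy feature positions the least-squares residual also gives a rough quality measure for the tracked feature, which could inform the selection step.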
- the stereoscopic image reconstruction unit 150 superimposes a plurality of transmission images received from the imaging unit 100 to reconstruct the target stereoscopic structure.
- Based on the spatial distribution of the plurality of feature regions in the entire target calculated by the feature region distribution calculation unit 140, the stereoscopic image reconstruction unit 150 assigns the shades indicated by the plurality of transmission images to the positions of the plurality of feature regions and reconstructs a stereoscopic image showing the stereoscopic structure in each of the plurality of feature regions, thereby reconstructing a stereoscopic image in which shades are assigned to the entire target. The stereoscopic image reconstruction unit 150 then outputs the reconstructed stereoscopic image to the convergence processing calculation unit 160.
- For a feature region whose stereoscopic structure is known among the plurality of feature regions selected by the feature region selection unit 130, the convergence processing calculation unit 160 calculates an image processing operation by which the shading of the stereoscopic image of that feature region reconstructed by the stereoscopic image reconstruction unit 150 converges to the known stereoscopic structure. The convergence processing calculation unit 160 then outputs the reconstructed stereoscopic image and information indicating the calculated image processing operation to the convergence processing unit 170.
- For a feature region whose stereoscopic structure is unknown among the plurality of feature regions selected by the feature region selection unit 130, the convergence processing unit 170 executes the image processing operation calculated by the convergence processing calculation unit 160 on the shading of the stereoscopic image of that feature region reconstructed by the stereoscopic image reconstruction unit 150. The convergence processing unit 170 then outputs the processed stereoscopic image to the output unit 180.
- The output unit 180, which is, for example, a display device such as an LCD panel or a storage device such as a hard disk drive, outputs the stereoscopic image received from the convergence processing unit 170 to the user.
- According to the stereoscopic image reconstruction device 10 of the embodiment of the present invention, a stereoscopic image in which shades are assigned to the entire target can be reconstructed based on the spatial distribution of the plurality of feature regions in each of the plurality of transmission images obtained by imaging the target from a plurality of angles. As a result, unlike conventional stereoscopic image reconstruction devices, it is possible to provide the user with a stereoscopic image that includes not only shape information indicating the outer shell of a target having a stereoscopic structure but also information about its internal structure.
- Moreover, an object having a three-dimensional structure based on a density distribution, with or without internal staining, is imaged using a transmission electron microscope, and the spatial position of a specific internal structure in the object is determined. Compared with the conventional approach of reconstructing a stereoscopic image only by combining grayscale information, the number of tilted images that must be obtained by changing the angle between the sample and the imaging direction is very small, so a stereoscopic image can be reconstructed while keeping the degree to which the target structure is destroyed by imaging low.
- The convergence processing calculation unit 160 may calculate the image processing operation for the feature region having a known stereoscopic structure based on the information indicating the outline of the stereoscopic structure input by the stereoscopic structure information input unit 110. Alternatively, the convergence processing calculation unit 160 may calculate the image processing operation based on information different from the outline of the stereoscopic structure used by the enhancement processing unit 120, for example information input separately by the user.
- FIG. 2 shows a transmissive image 300a, which is a first example of a transmissive image captured by the imaging unit 100 according to the embodiment of the present invention.
- FIG. 3 shows a transmissive image 300b, which is a second example of a transmissive image captured by the imaging unit 100 according to the embodiment of the present invention.
- FIG. 4 shows a transmissive image 300c, which is a third example of the transmissive image captured by the imaging unit 100 according to the embodiment of the present invention.
- The imaging unit 100 images the target 200 from at least three mutually different angles.
- The imaging unit 100 outputs the transmission image 300a by imaging the target 200 from the A direction, and outputs the transmission image 300b by imaging the target 200 from the B direction.
- the imaging unit 100 outputs a transmissive image 300c by imaging the object 200 from the C direction.
- The angle formed between the A direction and the B direction and the angle formed between the A direction and the C direction are preferably equal to each other.
- The B direction and the C direction may be directions obtained by rotating the A direction about the same axis 210, or may be directions obtained by rotating the A direction about mutually different axes, for example the axis 210 and the axis 220.
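As a rough illustration of this geometry, the B and C directions can be generated by rotating the A direction about a chosen axis. The sketch below uses Rodrigues' rotation formula, with the y-axis standing in for the axis 210 and an illustrative tilt of ±30 degrees; none of these specific values come from the patent.

```python
import numpy as np

def rotate(v, axis, angle_deg):
    """Rodrigues' rotation of vector v about a unit axis by angle_deg."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    v = np.asarray(v, dtype=float)
    t = np.radians(angle_deg)
    return (v * np.cos(t)
            + np.cross(a, v) * np.sin(t)
            + a * np.dot(a, v) * (1.0 - np.cos(t)))

# B and C as rotations of the A direction about a common axis (here the y-axis),
# by equal and opposite angles, so the A-B and A-C angles are equal.
A = np.array([0.0, 0.0, 1.0])
B = rotate(A, [0.0, 1.0, 0.0], +30.0)
C = rotate(A, [0.0, 1.0, 0.0], -30.0)
```

Rotating about two different axes, as the passage also allows, simply means calling `rotate` with a second axis vector for the C direction.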
- FIG. 5 shows an example of the overlay processing of a plurality of transmission-type images by the stereoscopic image reconstruction unit 150 according to the embodiment of the present invention.
- the stereoscopic image reconstruction unit 150 extends each of the plurality of transmission images captured by the imaging unit 100 in the imaging direction of the transmission image, and generates a plurality of shade distribution information. Then, the stereoscopic image reconstruction unit 150 superimposes the plurality of generated grayscale distribution information while maintaining the angular relationship between the imaging directions in the transmissive image corresponding to each density distribution information.
- the stereoscopic image reconstruction unit 150 superimposes the generated density distribution information 400a, density distribution information 400b, and density distribution information 400c while maintaining the angular relationship among the A direction, the B direction, and the C direction. The figure shows a longitudinal section of the density distribution information in a plane including the A direction, the B direction, and the C direction.
- the stereoscopic image reconstruction unit 150 detects, among the superimposed density distribution information, a region 410 in which all the pieces of density distribution information overlap as a region where the target 200 exists.
- however, since the region 410 includes regions called ghosts in which the target 200 does not actually exist, the stereoscopic image reconstruction unit 150 cannot reconstruct a stereoscopic image showing the three-dimensional structure of the target 200 with high accuracy by detecting the region 410 alone.
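- The stretch-and-superimpose step and the ghost problem described above can be illustrated with a minimal sketch (not part of the patent; a hypothetical two-view, two-point example on a small grid):

```python
import numpy as np

# Hypothetical sketch of the stretch-and-superimpose step (Fig. 5):
# each 1-D projection is stretched back along its imaging direction,
# and the region where all back-projections overlap is detected.
grid = np.zeros((8, 8), dtype=bool)
grid[2, 3] = True
grid[5, 6] = True  # two point targets

proj_rows = grid.any(axis=1)  # projection seen along the row direction
proj_cols = grid.any(axis=0)  # projection seen along the column direction

# Stretch each projection back across the volume and intersect.
back_rows = np.repeat(proj_rows[:, None], 8, axis=1)
back_cols = np.repeat(proj_cols[None, :], 8, axis=0)
overlap = back_rows & back_cols

# The overlap contains the true points plus "ghost" cells where no
# target actually exists, e.g. (2, 6) and (5, 3).
assert overlap[2, 3] and overlap[5, 6]   # true targets
assert overlap[2, 6] and overlap[5, 3]   # ghosts
```

With only two viewing directions, the intersection contains four candidate cells for two actual points; this is exactly why the region 410 alone is insufficient for accurate reconstruction.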
- FIG. 6 shows an example of a stereoscopic image reconstruction process by the stereoscopic image reconstruction unit 150 according to the embodiment of the present invention.
- the feature region selection unit 130 selects a plurality of feature regions in each of the plurality of transmission images. Specifically, the imaging unit 100 captures the plurality of transmission images in an order in which the angle with respect to the target monotonically increases or monotonically decreases. The feature region selection unit 130 then detects, for a feature region selected in one transmission image, which region corresponds to it in the transmission images captured before and after that one transmission image.
- the feature region selection unit 130 may detect a corresponding feature region in each transmission image by using, for example, a known image processing technique called cross-correlation processing.
- the feature region selection unit 130 may select, as a feature region in the one transmission image, for example, a region in which the change in shading is larger than in other regions. Then, by sequentially executing, for each pair of transmission images captured before and after each other among the plurality of transmission images, the processing of detecting the corresponding feature region, the feature region selection unit 130 tracks each of the plurality of feature regions through the plurality of transmission images.
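- The cross-correlation tracking described above might be sketched as follows (a hypothetical minimal implementation of normalized cross-correlation with a brute-force search; the function name and search strategy are illustrative assumptions, not the patent's own procedure):

```python
import numpy as np

# Track a feature patch between two consecutive transmission images
# using normalized cross-correlation.
def track_patch(img_a, img_b, top, left, size):
    """Find where the size x size patch of img_a at (top, left)
    best matches inside img_b; returns (top, left) in img_b."""
    patch = img_a[top:top+size, left:left+size].astype(float)
    patch = patch - patch.mean()
    best, best_pos = -np.inf, (0, 0)
    h, w = img_b.shape
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            cand = img_b[i:i+size, j:j+size].astype(float)
            cand = cand - cand.mean()
            denom = np.sqrt((patch**2).sum() * (cand**2).sum()) or 1.0
            score = (patch * cand).sum() / denom
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

img_a = np.zeros((12, 12))
img_a[3:6, 4:7] = [[1, 2, 1], [2, 5, 2], [1, 2, 1]]  # bright feature
img_b = np.roll(img_a, (1, 2), axis=(0, 1))          # feature shifted by (1, 2)
assert track_patch(img_a, img_b, 3, 4, 3) == (4, 6)
```

Repeating this match for each successive image pair yields the track of one feature region across the whole tilt series.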
- the feature region distribution calculation unit 140 calculates the spatial position of each of the plurality of feature regions in the entire object based on the positions of the plurality of feature regions selected by the feature region selection unit 130 in each of the plurality of transmission images.
- specifically, the feature region distribution calculation unit 140 uses a known method called stereo measurement, which calculates height information of a point based on the parallax observed when the same point is viewed from different angles, to calculate the spatial position of each of the plurality of feature regions in the entire object from the changes in position of each feature region across the plurality of transmission images captured from different angles.
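- The stereo measurement step can be illustrated with a small worked example (assuming a simple tilt geometry in which a point at lateral position x and height z projects to x·cosθ + z·sinθ at tilt angle θ; this geometry is an illustrative assumption):

```python
import math

# For tilts of +theta and -theta about the tilt axis, the parallax
# between the two projected positions of the same point gives its
# height directly: x_plus - x_minus = 2*z*sin(theta).
def project(x, z, theta_deg):
    theta = math.radians(theta_deg)
    return x * math.cos(theta) + z * math.sin(theta)

def height_from_parallax(x_plus, x_minus, theta_deg):
    theta = math.radians(theta_deg)
    return (x_plus - x_minus) / (2.0 * math.sin(theta))

x, z, theta = 10.0, 4.0, 15.0
x_plus = project(x, z, +theta)
x_minus = project(x, z, -theta)
assert abs(height_from_parallax(x_plus, x_minus, theta) - z) < 1e-9
```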
- the feature region distribution calculation unit 140 calculates the spatial positions of the plurality of feature regions as points 500a, 500b, and 500c.
- the point 500a, the point 500b, and the point 500c are points that appear to overlap when viewed from the A direction shown in FIG. 2.
- note that the number of feature regions for which the feature region distribution calculation unit 140 calculates a spatial position is not limited to three.
- the feature region distribution calculation unit 140 may calculate the spatial positions of the plurality of feature regions in the entire target based on the centroid positions of the plurality of feature regions. Instead, the feature region distribution calculation unit 140 may calculate the spatial position of each of the plurality of feature regions in the entire target based on the position of the contour of the target included in each of the plurality of feature regions in each of the plurality of transmission images. In many cases, the partial image showing the contour of the target in a transmission image has clearer features than other partial images, so the change in the position of a feature region can be calculated with high accuracy based on the position of the target's contour. As a result, the spatial positions of the plurality of feature regions can be calculated with high accuracy, and the target stereoscopic image can be reconstructed with higher accuracy.
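- The centroid-based position mentioned above can be sketched as a minimal intensity-weighted centroid (illustrative only; the patent does not prescribe this exact formula):

```python
import numpy as np

# Intensity-weighted centroid of a feature region: a simple way to
# reduce a small image region to one representative position.
def centroid(region):
    ys, xs = np.mgrid[0:region.shape[0], 0:region.shape[1]]
    total = region.sum()
    return (float((ys * region).sum() / total),
            float((xs * region).sum() / total))

region = np.zeros((5, 5))
region[1:4, 2] = 1.0  # a vertical linear feature in column 2
assert centroid(region) == (2.0, 2.0)
```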
- the stereoscopic image reconstruction unit 150 reconstructs the stereoscopic image of the target 200 based on the region 410 where the target 200 exists, detected from the density distribution information obtained by extending the plurality of captured transmission images in their imaging directions, and on the spatial positions of the plurality of feature regions calculated by the feature region distribution calculation unit 140.
- specifically, the stereoscopic image reconstruction unit 150 determines that at least a part of the target 200 exists in the vicinity of each of the points 500a, 500b, and 500c shown in the drawing within the region 410, and reconstructs a stereoscopic image showing the three-dimensional structure in each of the plurality of feature regions by equally assigning the shading of the entire region 410 to the vicinity of each point.
- here, the spatial position of each feature region is calculated based on a plurality of transmission images obtained by imaging the object not from the A direction alone but from a plurality of different angles, such as the B direction and the C direction.
- the stereoscopic image reconstruction device 10 can reconstruct a stereoscopic image in which shading is assigned to the entire target 200 based on the information indicating the internal structure of the target 200 acquired in this way.
- that is, the stereoscopic image reconstruction device 10 can reconstruct the target stereoscopic image based not only on the region detected by stretching and superimposing the plurality of transmission images but also on the spatial positions of the plurality of feature regions calculated by stereo measurement or the like. Therefore, compared with the case of reconstructing the target stereoscopic image based only on the region detected by stretching and superimposing the plurality of transmission images, or based only on shape information of the target's outer shell, more information can be acquired from each transmission image for reconstructing the target stereoscopic image.
- further, according to the stereoscopic image reconstruction device 10, a stereoscopic image can be reconstructed from a smaller number of transmission images than in conventional stereoscopic image reconstruction devices. As a result, a stereoscopic image can be reconstructed with high accuracy while suppressing the degree to which the structure of the target is destroyed by the electron beam transmitted through it during imaging. This also makes it possible to reconstruct a highly accurate stereoscopic image even for a target with an unstable structure whose structure changes while a large number of images are being captured.
- FIG. 7 shows an example of processing in the enhancement processing unit 120 according to the embodiment of the present invention.
- the enhancement processing unit 120 emphasizes the target in the transmission image by performing, on the transmission image captured by the imaging unit 100, image processing using, for example, a known technique called morphological filter processing, based on the outline of the target's three-dimensional structure input by the three-dimensional structure information input unit 110.
- the outline of the three-dimensional structure of the target may be, for example, information indicating the thickness of the target when the target is linear, input by a user.
- an example of image processing in the enhancement processing unit 120 will be described with reference to FIG.
- FIG. 7A shows the relationship between the position and luminance of a transmission type image captured by the imaging unit 100 on a certain tomographic line.
- the transmissive image is captured as a 256-level gray scale image.
- FIG. 7B shows an outline of the target 3D structure input by the 3D structure information input unit 110.
- in this example, the object has a linear structure, and FIG. 7(b) shows a typical thickness T of the object.
- the enhancement processing unit 120 detects portions other than the target, that is, background noise, in the transmission image based on the captured transmission image and the input outline of the target's three-dimensional structure.
- specifically, among the structures included in the captured transmission image, the enhancement processing unit 120 detects regions recognized as structures having a thickness equal to or less than a reference value predetermined by a user, for example, the region 610 and the region 620 shown in FIG. 7(a), as regions indicating a part of the target, while treating a region such as the region 600 as background.
- the enhancement processing unit 120 generates a background image that is an image obtained by removing a portion detected as a region indicating a part of the target from the captured transmission-type image.
- FIG. 7 (c) shows the relationship between the position on the same tomographic line as in the case of FIG. 7 (a) and the luminance in the background image generated by the enhancement processing unit 120 in this way.
- next, the enhancement processing unit 120 generates a target image, an image showing the target, by subtracting the acquired background image from the captured transmission image. Specifically, the enhancement processing unit 120 generates the target image by subtracting the luminance of the corresponding pixel in the background image from the luminance of each pixel in the captured transmission image.
- FIG. 7D shows the relationship between the position and luminance on the same tomographic line as in FIG. 7A in the target image generated by the enhancement processing unit 120 in this way.
- next, the enhancement processing unit 120 performs binarization processing on the generated target image. Specifically, for each of the regions indicating the target in the generated target image, the enhancement processing unit 120 replaces the luminance of the region with the maximum luminance value, for example 255, thereby enhancing the contrast and generating a target-enhanced image in which the target is emphasized.
- FIG. 7E shows the relationship between the position and luminance on the same tomographic line as in FIG. 7A in the target enhanced image generated by the enhancement processing unit 120 in this way.
- the enhancement processing unit 120 outputs, to the feature region selection unit 130 and the stereoscopic image reconstruction unit 150, the image obtained by performing the image processing for emphasizing the target on the captured transmission image as described above.
- in this way, background noise different from the grayscale information indicating the target can be removed from the grayscale information of the captured transmission image, and the target can be emphasized. As a result, the target stereoscopic image can be reconstructed with higher accuracy.
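- The background removal and binarization steps of FIGS. 7(a) to 7(e) can be sketched in one dimension (a minimal sketch assuming a grey-opening background estimate wider than the target thickness T; the filter width and luminance values are illustrative assumptions, not the patent's parameters):

```python
import numpy as np

# 1-D sketch of the enhancement pipeline: estimate the background with
# a grey opening wider than the target thickness T, subtract it, then
# binarize the remaining target regions to the maximum luminance (255).
def grey_opening_1d(signal, width):
    """Erosion (running min) followed by dilation (running max)."""
    pad = width // 2
    eroded = np.array([signal[max(0, i - pad):i + pad + 1].min()
                       for i in range(len(signal))])
    return np.array([eroded[max(0, i - pad):i + pad + 1].max()
                     for i in range(len(eroded))])

T = 3                      # user-input outline: linear target of thickness T
line = np.full(32, 40.0)   # background luminance (Fig. 7(a))
line[8:8 + T] += 90.0      # target structure of thickness T
line[20:20 + T] += 70.0    # second target structure

background = grey_opening_1d(line, 2 * T + 1)  # opening removes thin structures
target = line - background                     # Fig. 7(d): target image
enhanced = np.where(target > 0, 255, 0)        # Fig. 7(e): binarized

assert enhanced[8:8 + T].min() == 255 and enhanced[20:20 + T].min() == 255
assert enhanced[0] == 0 and enhanced[-1] == 0
```

Structures no wider than T survive the subtraction, while the slowly varying background is removed, matching the qualitative behavior described for the enhancement processing unit 120.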
- FIG. 8 shows an example of processing in the convergence processing calculation unit 160 according to the embodiment of the present invention.
- for example, assume that the shading of a stereoscopic image of a feature region reconstructed by the stereoscopic image reconstruction unit 150 includes three points, the point 700a, the point 700b, and the point 700c, as shown in FIG. 8(a), whereas the known three-dimensional structure of the feature region input by the three-dimensional structure information input unit 110 includes only one point. In this case, blurring has occurred in the shading of the reconstructed stereoscopic image in the feature region.
- the convergence processing calculation unit 160 calculates an image processing operation by which the shading of the stereoscopic image reconstructed in the feature region converges to the known three-dimensional structure. Specifically, the convergence processing calculation unit 160 determines that the shading of the stereoscopic image should converge with the point 700a as a reference, and calculates an image processing operation by which the point 700b and the point 700c each converge to the position of the point 700a, as shown in FIG. 8(b).
- the image processing operation may include various image processing such as enlargement, reduction, rotation, and deformation of at least a part of the feature region.
- then, the convergence processing unit 170 executes the image processing operation calculated by the convergence processing calculation unit 160 on the shading of the stereoscopic image reconstructed in another feature region, different from the illustrated feature region, whose three-dimensional structure is unknown.
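- The convergence operation of FIG. 8 can be sketched as follows (a hypothetical pure contraction toward a reference point; the patent allows more general operations such as enlargement, rotation, and deformation, so this is only one illustrative case):

```python
import numpy as np

# Fit a contraction scale on the region with known structure, then
# reuse the same operation on a region whose structure is unknown.
def fit_scale(recon_pts, known_pts, center):
    """Scale s mapping center + s*(p - center) onto the known spread
    (s < 1 means the reconstructed shading must contract)."""
    r_recon = np.linalg.norm(recon_pts - center, axis=1).mean()
    r_known = np.linalg.norm(known_pts - center, axis=1).mean()
    return r_known / r_recon if r_recon else 1.0

def apply_scale(pts, center, s):
    return center + s * (pts - center)

# Known region: the structure is a single point 700a, but the
# reconstruction shows 700a, 700b, 700c spread around it.
p700a = np.array([0.0, 0.0])
recon = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # 700a, 700b, 700c
s = fit_scale(recon, np.array([p700a]), p700a)          # full convergence: s = 0

# The same operation is then applied to another feature region.
other = np.array([[5.0, 1.0], [7.0, 1.0]])
center_other = other.mean(axis=0)
converged = apply_scale(other, center_other, s)
assert np.allclose(converged, center_other)
```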
- in imaging with an electron microscope or the like, the imaging angle may be limited due to restrictions of the apparatus, characteristics of the target sample, and the like. Specifically, the angle θ shown in FIGS. 2 to 4 may be limited to a range of about ±60 degrees. In that case, because projection images of the target at angles outside this range are missing, the resolution of the reconstructed image, which should be uniform regardless of direction on the tomographic plane of the reconstructed stereoscopic image, deteriorates only in certain directions. As a result, all tomographic images are stretched in one direction, or stereoscopic images that differ from the actual target and represent incorrect structures and forms are reconstructed, so that sufficiently high accuracy cannot be obtained.
- according to the stereoscopic image reconstruction device 10 of the embodiment of the present invention, the image processing operation that converges the shading of the stereoscopic image in a feature region whose three-dimensional structure is known to that known structure is calculated, and the same image processing operation can then be executed on the shading of another feature region whose three-dimensional structure is unknown. As a result, the shading of the reconstructed stereoscopic image can be corrected over the entire object, so that a high-quality stereoscopic image displaying the three-dimensional structure of the object at a higher resolution can be output.
- FIG. 9 is a flowchart showing an example of a processing flow in the stereoscopic image reconstruction method using the stereoscopic image reconstruction device 10 according to the embodiment of the present invention.
- the imaging unit 100 captures a plurality of transmission images in which the target is indicated by shading, by transmitting an electron beam through the target from a plurality of angles (S1000).
- the three-dimensional structure information input unit 110 inputs information indicating an outline of the target three-dimensional structure (S1010).
- the enhancement processing unit 120 executes image processing for emphasizing a target for each of a plurality of captured transmission images based on the input outline of the three-dimensional structure (S1020).
- the feature region selection unit 130 selects a plurality of feature regions included in the object emphasized by the enhancement processing unit for each of the plurality of transmission images (S1030).
- the feature region distribution calculation unit 140 calculates the spatial position of each of the plurality of feature regions in the entire target based on the positions of the plurality of feature regions selected in each of the plurality of transmission images, thereby calculating the spatial distribution of the plurality of feature regions in the entire object (S1040).
- the stereoscopic image reconstruction unit 150 reconstructs the target stereoscopic structure by superimposing a plurality of transmission-type images (S1050).
- the stereoscopic image reconstruction unit 150 assigns the shades indicated by the plurality of transmission images to the respective positions of the plurality of feature regions based on the spatial distribution of the plurality of feature regions in the entire object. Thus, by reconstructing a three-dimensional image showing the three-dimensional structure in each of the plurality of feature regions, a three-dimensional image in which shades are assigned to the entire object is reconstructed.
- the convergence processing calculation unit 160 calculates, for a feature region having a known three-dimensional structure among the plurality of feature regions, an image processing operation that converges the reconstructed three-dimensional structure of that feature region to the known three-dimensional structure (S1060).
- the convergence processing unit 170 executes, on the shading of the reconstructed stereoscopic image of a feature region having an unknown three-dimensional structure among the plurality of feature regions, the image processing operation calculated by the convergence processing calculation unit 160 (S1070).
- the output unit 180 outputs the stereoscopic image subjected to the image processing by the convergence processing unit 170 to the outside and provides it to the user (S1080).
- a computer 1500 includes a CPU peripheral unit having a CPU 1505, a RAM 1520, a graphic controller 1575, and a display device 1580 connected to one another by a host controller 1582; an input/output unit having a communication interface 1530, a hard disk drive 1540, and a CD-ROM drive 1560 connected to the host controller 1582 by an input/output controller 1584; and a legacy input/output unit having a ROM 1510, a flexible disk drive 1550, and an input/output chip 1570 connected to the input/output controller 1584.
- the host controller 1582 connects the RAM 1520 to the CPU 1505 and the graphic controller 1575 that access the RAM 1520 at a high transfer rate.
- the CPU 1505 operates based on programs stored in the ROM 1510 and the RAM 1520, and controls each part.
- the graphic controller 1575 acquires image data generated by the CPU 1505 and the like on a frame buffer provided in the RAM 1520 and displays the image data on the display device 1580.
- the graphic controller 1575 may include a frame buffer for storing image data generated by the CPU 1505 or the like.
- the input / output controller 1584 connects the host controller 1582 to the communication interface 1530, the hard disk drive 1540, and the CD-ROM drive 1560, which are relatively high-speed input / output devices.
- the communication interface 1530 communicates with other devices via a network.
- the hard disk drive 1540 stores programs and data used by the CPU 1505 in the computer 1500.
- the CD-ROM drive 1560 reads a program or data from the CD-ROM 1595 and provides it to the hard disk drive 1540 via the RAM 1520.
- the input/output controller 1584 is also connected to the ROM 1510 and to relatively low-speed input/output devices, namely the flexible disk drive 1550 and the input/output chip 1570.
- the ROM 1510 stores a boot program executed when the computer 1500 starts up, programs dependent on the hardware of the computer 1500, and the like.
- the flexible disk drive 1550 reads a program or data from the flexible disk 1590 and provides it to the hard disk drive 1540 via the RAM 1520.
- the input/output chip 1570 connects the flexible disk drive 1550, and connects various input/output devices through, for example, a parallel port, a serial port, a keyboard port, a mouse port, and the like.
- the stereoscopic image reconstruction program provided to the hard disk drive 1540 via the RAM 1520 is stored in a recording medium such as the flexible disk 1590, the CD-ROM 1595, or an IC card and provided by the user.
- the stereoscopic image reconstruction program is read from the recording medium, installed on the hard disk drive 1540 in the computer 1500 via the RAM 1520, and executed by the CPU 1505.
- the stereoscopic image reconstruction program installed and executed on the computer 1500 works on the CPU 1505 and the like to cause the computer 1500 to function as the stereoscopic image reconstruction device 10 described with reference to FIGS.
- the program described above may be stored in an external storage medium.
- as the storage medium, an optical recording medium such as the flexible disk 1590, the CD-ROM 1595, a DVD, or a PD, a magneto-optical recording medium such as an MD, a tape medium, a semiconductor memory such as an IC card, or the like can be used.
- a storage device such as a hard disk or a RAM provided in a server system connected to a dedicated communication network or the Internet may also be used as the recording medium, and the program may be provided to the computer 1500 via the network.
- FIG. 11 is a block diagram showing an example of a functional configuration of a stereoscopic image reconstruction device 20 according to another embodiment of the present invention.
- the stereoscopic image reconstruction device 20 includes an imaging unit 100, a stereoscopic structure information input unit 110, a feature region selection unit 130, a feature region distribution calculation unit 140, a stereoscopic image reconstruction unit 150, and an output unit 180.
- the stereoscopic image reconstruction unit 150 includes a thickness calculation unit 152, a micro area allocation unit 154, and a back projection unit 156.
- the imaging unit 100 has the same configuration and function as the imaging unit 100 of the stereoscopic image reconstruction device 10.
- the imaging unit 100 captures a target from a plurality of angles, acquires a plurality of transmission images, and outputs the images to the feature region selection unit 130 and the thickness calculation unit 152.
- the three-dimensional structure information input unit 110 inputs known information about the target three-dimensional structure based on, for example, a user operation. Then, the three-dimensional structure information input unit 110 outputs information indicating an outline of the input three-dimensional structure to the micro area allocating unit 154.
- the feature region selection unit 130 selects a plurality of feature regions included in the target in each of the plurality of transmission images received from the imaging unit 100. Then, the feature region selection unit 130 outputs, to the feature region distribution calculation unit 140, each of the plurality of transmission images and information indicating the plurality of feature regions selected in each of them, for example, the position, shape, and size of the feature regions.
- the feature region distribution calculation unit 140 receives, from the feature region selection unit 130, each of the plurality of transmission images and information indicating the plurality of feature regions selected for each of them. Then, the feature region distribution calculation unit 140 calculates the spatial position of each of the plurality of feature regions in the entire target based on the positions of the feature regions selected for each of the plurality of transmission images, thereby calculating the spatial distribution of the plurality of feature regions in the entire object. The feature region distribution calculation unit 140 then outputs information indicating the calculated distribution of the plurality of feature regions to the thickness calculation unit 152.
- the thickness calculation unit 152 calculates the thickness of the object in the transmission direction of the electron beam based on the spatial distribution of the plurality of feature regions calculated by the feature region distribution calculation unit 140. The thickness calculation unit 152 outputs the spatial distribution of the feature regions and the calculated thickness to the micro area allocation unit 154.
- the micro area allocation unit 154 divides the region within the target thickness calculated by the thickness calculation unit 152 into a plurality of micro areas. Further, the micro area allocation unit 154 reconstructs a stereoscopic image in which shading is assigned to each of the plurality of micro areas so as to match the shading of the plurality of transmission images captured by the imaging unit 100. In this case, the micro area allocation unit 154 first divides the plurality of transmission images into a plurality of micro areas corresponding to the micro areas within the target thickness. Furthermore, the micro area allocation unit 154 assigns an integer value proportional to the shading to each of the plurality of micro areas of the divided transmission images.
- the micro area allocation unit 154 allocates one of the two values of light and shade to each micro area within the thickness of the target, so that the sum of the light and shade values of the micro areas within the thickness, viewed from the angle at which a transmission image was captured, matches the integer shading value of the corresponding micro area in that transmission image.
- in this way, the micro area allocation unit 154 acquires the density distribution of the entire object by assigning one of the binary values to each micro area within the thickness.
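- The binary allocation under a single-angle constraint can be sketched as follows (a minimal random assignment per column; illustrative, matching the later example in which a pixel value of 4 yields four white micro areas along the beam):

```python
import numpy as np
import random

# Assign a binary value (white = 1 / black = 0) to each micro area
# within the thickness so that the column sum seen from the imaging
# angle equals the integer pixel value of the transmission image.
random.seed(0)

def allocate_column(depth, pixel_value):
    """Randomly choose pixel_value of the depth micro areas as white."""
    column = [0] * depth
    for idx in random.sample(range(depth), pixel_value):
        column[idx] = 1
    return column

thickness = 8                  # number of micro areas along the beam
pixel_values = [4, 2, 7, 0]    # integer shading of four transmission pixels
image = np.array([allocate_column(thickness, n) for n in pixel_values]).T

# Constraint from one angle: each column sum matches its pixel value.
assert list(image.sum(axis=0)) == pixel_values
```

A single angle leaves the placement within each column ambiguous; the additional angles discussed below progressively constrain where the white micro areas can sit.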
- the micro area allocation unit 154 supplies the acquired density distribution to the back projection unit 156.
- the backprojection unit 156 reconstructs a stereoscopic image of the target 202 by assigning the density of the transmission type image to the acquired density distribution of the target 202.
- the back projection unit 156 outputs the reconstructed stereoscopic image to the output unit 180.
- FIG. 12 is a flowchart showing an example of a processing flow in the stereoscopic image reconstruction method using the stereoscopic image reconstruction device 20 according to the embodiment of the present invention.
- FIG. 13 is a diagram schematically showing the target 202 imaged by the imaging unit 100 according to the embodiment of the present invention.
- FIG. 14 is a diagram showing transmission images 302a and 302b captured by the imaging unit 100 according to the embodiment of the present invention. Note that step S1052 and subsequent steps in this flowchart will be described with reference to FIGS. 15 to 17.
- first, the imaging unit 100 captures a transmission image 302a including the target 202 shown in FIG. 14(a) by transmitting an electron beam through the target 202 from one angle (0°) (S1000).
- the imaging unit 100 also captures a transmission image 302b including the target 202 shown in FIG. 14(b) by transmitting an electron beam through the target 202 from another angle (10°) in the same plane (same step).
- the imaging unit 100 captures a plurality of transmission images by changing the angle in 10-degree steps with respect to a specific plane of the target 202.
- a part of the explanation of the transmissive images (not shown) other than the transmissive images 302a and 302b is omitted.
- the three-dimensional structure information input unit 110 inputs known information about the target 202 (S1010). For example, if it is known that the target 202 has a fiber-like internal structure, the length, thickness, and the like of the fiber-like structure are input by the user to the three-dimensional structure information input unit 110.
- the feature region selection unit 130 selects a plurality of feature regions included in the target 202 in each of the transmission images 302a and 302b (S1030).
- the feature region distribution calculation unit 140 calculates the spatial position of each of the plurality of feature regions in the entire target 202 based on the positions, in the transmission images 302a and 302b, of the plurality of feature regions selected in each of them, thereby calculating the spatial distribution of the plurality of feature regions in the entire target 202 (S1040). Steps S1010, S1030, and S1040 are the same as those of the stereoscopic image reconstruction device 10 shown in FIG. 9.
- FIG. 15 is a plan view schematically showing the thickness of the target 202 calculated by the thickness calculation unit 152 according to the embodiment of the present invention.
- FIG. 16 is a diagram showing histograms of the transmission images 302a and 302b captured by the imaging unit 100 according to the embodiment of the present invention.
- FIG. 17 shows micro area allocation images in which shading is assigned by the micro area allocation unit 154 according to the embodiment of the present invention.
- the thickness calculation unit 152 calculates the thickness d of the target 202 in the transmission direction of the electron beam based on the spatial distribution of the plurality of feature regions in the entire target 202 calculated by the feature region distribution calculation unit 140 (S1052).
- FIG. 15 shows an example of the calculated thickness: the thickness d of the target 202 in a plane that is parallel to the angle (0°) at which the target 202 is imaged along the auxiliary line A shown in FIG. 13 and that includes the cross section L.
- the thickness calculation unit 152 also divides the thickness d in FIG. 15 into micro areas Δp×Δq corresponding to the micro areas Δp×Δq in the micro area allocation image in FIG. 17.
- the micro area allocation unit 154 divides the region on the line segment L of the transmission image 302a shown in FIG. 16(a) into a plurality of micro regions Δp (for example, pixels of size Δp) corresponding to the micro areas Δp×Δq within the thickness shown in FIG. 15 (S1054).
- similarly, the micro area allocation unit 154 divides the region on the line segment L of the transmission image 302b shown in FIG. 16(b) into a plurality of micro regions Δp corresponding to the micro areas Δp×Δq within the thickness shown in FIG. 15 (same step).
- the micro area allocation unit 154 assigns a pixel value N, an integer value proportional to the shading, to the S-th pixel (pixel S) in the micro region Δp on the line segment L of the transmission image 302a (S1056).
- for example, the micro area allocation unit 154 lets N be the pixel value when the pixel S on the line segment L of the transmission image 302a is represented by 256 black-and-white gradations.
- similarly, the micro area allocation unit 154 assigns pixel values, as integer values proportional to the shading, to the other pixels such as the pixel S-1 on the line segment L.
- likewise, for the pixels corresponding to each of the micro regions Δp on the line segment L of the transmission image 302b, the micro area allocation unit 154 assigns the pixel value as an integer value proportional to the shading, as shown in FIG. 16(b).
- the micro area allocation unit 154 allocates the two values of light and shade to the micro areas of the micro area allocation image (S1058). In this case, the micro area allocation unit 154 first prepares a micro area allocation image obtained by dividing a two-dimensional screen into micro areas Δp×Δq. Next, for a specific angle (0°), the micro area allocation unit 154 assigns, to the region within the dashed line in FIG. 17(a) corresponding to the micro areas included in the thickness d in FIG. 15, binary values corresponding to the pixel value N assigned to the pixel S in FIG. 16(a).
- the area allocation unit 154 is the total K force transmission type image of the binary values of the micro area ⁇ p ⁇ q included in the micro area group P in the thickness direction as viewed from the angle (0 °) at which the transmission image 302a was captured.
- the micro area allocating unit 154 assigns “4” to the pixel value N of the pixel S at the angle (0 °).
- For instance, the micro area allocating unit 154 may randomly select four micro areas out of the micro areas included in the area corresponding to the thickness d and assign white to them.
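A minimal sketch of this random white assignment for one column of micro areas (Python for illustration; the helper name is hypothetical):

```python
import random

def assign_column(depth, k):
    """Return a column of `depth` micro areas in which exactly `k`
    randomly chosen micro areas are white (1) and the rest dark (0)."""
    column = [0] * depth
    for i in random.sample(range(depth), k):
        column[i] = 1
    return column
```

For the example above, with a thickness of, say, ten micro areas and a pixel value of 4, `assign_column(10, 4)` yields a column whose binary sum satisfies the constraint for the angle (0°).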
- While satisfying the constraint condition related to the angle (0°), the micro area allocating unit 154 also assigns the light/dark binary values to the micro areas of the micro area allocation image for the other angle (10°). In this case, the micro area allocating unit 154 changes the positions of the white micro areas allocated for the angle (0°) while continuing to satisfy the constraint condition.
- That is, the micro area allocating unit 154 arranges the white micro areas so that the sum of the binary values of the micro area group P in FIG. 17(b) matches the pixel value of the corresponding pixel of the transmission image 302b captured at the angle (10°).
- In this way, the micro area allocating unit 154 assigns white to the micro areas of the micro area allocation image so as to satisfy the constraint condition of the pixel S on the line segment L for both of the two angles (0°, 10°). Further, for other angles, the micro area allocating unit 154 assigns white to the micro areas of the micro area allocation image so as to satisfy the constraint condition of the pixel S on the line segment L for those angles.
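The two-angle constraint can be pictured as requiring the binary grid to reproduce the measured sums along both viewing directions. The check below uses orthogonal projections (row and column sums) as a simplification; the patent's angles (0° and 10°) are in fact close together, so this is only an illustrative stand-in:

```python
def satisfies_constraints(grid, sums_a, sums_b):
    """True iff every row sum of the binary grid matches sums_a and
    every column sum matches sums_b (orthogonal directions used as a
    simplified stand-in for the two imaging angles)."""
    rows_ok = all(sum(row) == t for row, t in zip(grid, sums_a))
    cols_ok = all(sum(col) == t for col, t in zip(zip(*grid), sums_b))
    return rows_ok and cols_ok
```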
- Likewise, the micro area allocating unit 154 assigns one of the light/dark binary values to each of the micro areas Δp×Δq within the thickness corresponding to the other pixels on the line segment L. Further, the micro area allocating unit 154 assigns one of the binary values to all of the micro areas Δp×Δq within the thickness on cross sections other than the cross section containing the line segment L.
- From the binary distribution assigned to the cross section containing the line segment L and to the micro areas within the thickness elsewhere, the micro area allocating unit 154 obtains the binary distribution of all of the micro areas Δp×Δq in the direction perpendicular to the thickness direction. That is, the micro area allocating unit 154 acquires a density distribution that is the binary distribution of the micro areas Δp×Δq of the entire target 202.
- the micro-area allocating unit 154 supplies the acquired density distribution to the back projection unit 156.
- The back projection unit 156 reconstructs the stereoscopic image of the target 202 by assigning the shades of the transmission images onto the acquired density distribution of the target 202 (S1060).
- For example, the back projection unit 156 generates a weighting function by developing a normal distribution around the coordinates of each white micro area.
- The back projection unit 156 then reconstructs the stereoscopic image of the target 202 by back-projecting and assigning the density of the transmission images according to the weight values calculated by the generated weighting function.
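A sketch of such a weighting function; the two-dimensional form and the choice of σ are illustrative assumptions:

```python
import math

def gaussian_weight(x, y, cx, cy, sigma=1.0):
    """Weight obtained by developing a normal distribution around the
    coordinates (cx, cy) of a white micro area; the weight decays with
    the squared distance from that micro area."""
    d2 = (x - cx) ** 2 + (y - cy) ** 2
    return math.exp(-d2 / (2.0 * sigma ** 2))
```

The back-projected density at a point can then be accumulated as the transmission-image shade multiplied by this weight, so that shades concentrate around the white micro areas.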
- Thereby, the spatial position of each of the plurality of feature regions and the spatial position of the shade of each of the plurality of minute regions are calculated with high accuracy, and the stereoscopic image of the target 202 can be reconstructed with higher accuracy.
- In addition, the stereoscopic image of the target 202 can be reconstructed at high speed. Subsequently, the back projection unit 156 outputs the reconstructed stereoscopic image to the output unit 180. The output unit 180 then outputs the stereoscopic image processed by the back projection unit 156 to the outside and provides it to the user (S1080).
- In step S1058, the micro area allocating unit 154 assigns the binary values to each micro area Δp×Δq based on the distribution of the feature regions calculated in step S1040 and the thickness calculated in step S1052. In other words, the stereoscopic image is reconstructed according to the constraint condition related to the feature regions and the constraint condition related to the thickness, in addition to the constraint condition of the point distribution based on the shading.
- Further, the micro area allocating unit 154 may assign a binary value to each micro area Δp×Δq in accordance with the known information input in step S1010.
- For example, when the known information indicates that the target 202 contains a fiber-like structure, the micro area allocating unit 154, instead of assigning white to the micro areas at random, assigns the binary values so that the set of white micro areas approximates the length and thickness of the fiber-like structure.
- Thereby, the spatial positions of the plurality of feature regions and the spatial positions of the shades of the plurality of minute regions are calculated with higher accuracy, so that the stereoscopic image of the target 202 can be reconstructed with higher accuracy.
- Alternatively, the initial white micro areas may be distributed within a certain error range obtained by multi-view stereo measurement or the like.
- The probabilistic method for reducing, under the constraint conditions of step S1054, the difference between the actual transmission images and the projected images of the micro area allocation image is not limited to one using the Monte Carlo method; other techniques, such as a genetic algorithm, may also be used.
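One way to realize the Monte Carlo variant is a random single-flip search that keeps any flip which does not increase the mismatch between the measured projections and the projections of the micro area allocation image. The greedy acceptance rule and the orthogonal projections below are illustrative simplifications, not the patent's specification:

```python
import random

def projection_error(grid, row_sums, col_sums):
    """Total absolute mismatch between the grid's projections and the targets."""
    err = sum(abs(sum(r) - t) for r, t in zip(grid, row_sums))
    err += sum(abs(sum(c) - t) for c, t in zip(zip(*grid), col_sums))
    return err

def monte_carlo_fit(grid, row_sums, col_sums, steps=20000, seed=0):
    """Randomly flip single micro areas, reverting any flip that
    increases the projection error, and return the best error found."""
    rng = random.Random(seed)
    h, w = len(grid), len(grid[0])
    best = projection_error(grid, row_sums, col_sums)
    for _ in range(steps):
        i, j = rng.randrange(h), rng.randrange(w)
        grid[i][j] ^= 1  # tentative flip between dark (0) and white (1)
        err = projection_error(grid, row_sums, col_sums)
        if err <= best:
            best = err
        else:
            grid[i][j] ^= 1  # revert the flip
    return best
```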
- FIG. 18 is a block diagram showing an example of a functional configuration of a stereoscopic image reconstruction device 30 according to still another embodiment of the present invention.
- The stereoscopic image reconstruction device 30 can reconstruct a stereoscopic image of the target 202 even when the target 202 has few feature regions and it is therefore difficult to specify its thickness.
- the stereoscopic image reconstruction device 30 includes an imaging unit 100, a stereoscopic structure information input unit 110, a minute area allocation unit 154, a back projection unit 156, and an output unit 180.
- the imaging unit 100 and the three-dimensional structure information input unit 110 of the stereoscopic image reconstruction device 30 have the same configuration and function as those of the stereoscopic image reconstruction device 20 of FIGS.
- The micro area allocating unit 154 of the stereoscopic image reconstruction device 30 divides the area within a predetermined thickness for the target into a plurality of micro areas, and assigns a shade to each of the plurality of micro areas so as to match the shades of the plurality of transmission images captured by the imaging unit 100. For example, the micro area allocating unit 154 first divides each of the plurality of transmission images into a plurality of minute regions corresponding to the micro areas within the predetermined thickness, and assigns an integer value proportional to the shading to each of the plurality of minute regions of the divided transmission images.
- Next, the micro area allocating unit 154 assigns one of the light/dark binary values to each of the micro areas within the predetermined thickness in the micro area allocation image, so that the total of the binary values of the micro areas within the thickness, as viewed from the angle at which each transmission image was captured, matches the integer shade value of the corresponding minute region of that transmission image. In this way, the micro area allocating unit 154 acquires a density distribution that is the binary distribution of the micro areas Δp×Δq of the entire target 202.
- Further, the micro area allocating unit 154 obtains known information on the three-dimensional structure of the target from the three-dimensional structure information input unit 110 and assigns the shades to the plurality of micro areas in accordance with the known information. The micro area allocating unit 154 then supplies the acquired density distribution to the back projection unit 156.
- The back projection unit 156 reconstructs the stereoscopic image of the target 202 by assigning the density of the transmission images onto the acquired density distribution of the target 202.
- the back projection unit 156 outputs the reconstructed stereoscopic image to the output unit 180.
- An example of the predetermined thickness is the entire rectangular micro area allocation image having a predetermined size. Another example is the thickness or shape of the target in the case where the outline of the thickness or shape of the target is known.
- FIG. 19 is a flowchart showing an example of a process flow in the stereoscopic image reconstruction method using the stereoscopic image reconstruction device 30 according to the embodiment of the present invention. Note that the stereoscopic image reconstruction method using the stereoscopic image reconstruction device 30 has a process common to the stereoscopic image reconstruction method using the stereoscopic image reconstruction device 20. Therefore, in this flowchart, a part of the description of this common process is omitted.
- The imaging unit 100 captures the transmission images 302a and 302b including the target 202 shown in FIGS. 14(a) and 14(b), from one angle (0°) and another angle (10°) on the plane including one cross section L of the target 202 (S1000).
- the three-dimensional structure information input unit 110 inputs known information of the target 202 (S1010).
- The micro area allocating unit 154 divides the region on the line segment L of the transmission images 302a and 302b into a plurality of minute regions Δp corresponding to the micro areas Δp×Δq within the thickness shown in FIG. 15 (S1054).
- The micro area allocating unit 154 assigns a pixel value, as an integer value proportional to the shading, to each minute region Δp on the line segment L of the transmission images 302a and 302b (S1056).
- The micro area allocating unit 154 assigns one of the two light/dark values to each micro area Δp×Δq included in the entire micro area allocation image within the thickness (S1058).
- Since a binary value is assigned to each of the micro areas within the thickness so as to match the integer values proportional to the respective shades of the plurality of minute regions of the plurality of transmission images, the stereoscopic image of the target 202 can be reconstructed at high speed.
- Further, the micro area allocating unit 154 may assign a binary value to each micro area Δp×Δq in accordance with the known information input in step S1010.
- Thereby, the spatial positions of the shades of the plurality of minute regions are calculated with higher accuracy based on the known information on the target 202, so that the stereoscopic image of the target 202 can be reconstructed with higher accuracy.
- The micro area allocating unit 154 acquires the density distribution from the binary distribution assigned to all of the micro areas Δp×Δq within the thickness, both on and off the line segment L.
- the micro region allocation unit 154 supplies the acquired density distribution to the back projection unit 156.
- The back projection unit 156 reconstructs the stereoscopic image of the target 202 by assigning the density of the transmission images onto the acquired density distribution of the target 202.
- the back projection unit 156 outputs the reconstructed stereoscopic image to the output unit 180.
- The output unit 180 outputs the stereoscopic image subjected to the image processing by the micro area allocating unit 154 to the outside and provides it to the user (S1080).
- Since the stereoscopic image reconstruction devices 20 and 30 described above reconstruct a stereoscopic image from the constraint conditions of the point distribution related to the transmission images, they can reconstruct a stereoscopic image with higher accuracy than other methods, such as CT scanning, from the same number of transmission images, and can reconstruct a stereoscopic image with the same degree of accuracy as those methods from about one tenth of the number of transmission images they require.
- The stereoscopic image reconstruction devices 20 and 30 can also reconstruct a stereoscopic image from an incomplete set of transmission images, for example, when there is a blind spot and transmission images cannot be obtained from all directions.
- This is possible because the stereoscopic image reconstruction devices 20 and 30 reconstruct the stereoscopic image from the above point distribution constraints.
- In the stereoscopic image reconstruction method using the stereoscopic image reconstruction devices 20 and 30, the number of transmission images to be captured and the granularity of the micro areas Δp×Δq within the thickness of the target 202 are preferably set according to whether the general structure of the target is known. For example, in the present embodiment, the imaging unit 100 may correspondingly capture the transmission images at angular steps larger than 10 degrees or at angular steps smaller than 10 degrees.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006546736A JP4974680B2 (ja) | 2004-12-07 | 2005-12-07 | Stereoscopic image reconstruction device, stereoscopic image reconstruction method, and stereoscopic image reconstruction program |
US11/810,632 US7853069B2 (en) | 2004-12-07 | 2007-06-06 | Stereoscopic image regenerating apparatus, stereoscopic image regenerating method, and stereoscopic image regenerating program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004354563 | 2004-12-07 | ||
JP2004-354563 | 2004-12-07 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/810,632 Continuation US7853069B2 (en) | 2004-12-07 | 2007-06-06 | Stereoscopic image regenerating apparatus, stereoscopic image regenerating method, and stereoscopic image regenerating program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006062132A1 true WO2006062132A1 (ja) | 2006-06-15 |
Family
ID=36577959
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/022471 WO2006062132A1 (ja) | 2004-12-07 | 2005-12-07 | Stereoscopic image reconstruction device, stereoscopic image reconstruction method, and stereoscopic image reconstruction program |
Country Status (3)
Country | Link |
---|---|
US (1) | US7853069B2 (ja) |
JP (2) | JP4974680B2 (ja) |
WO (1) | WO2006062132A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7806144B2 (en) | 2006-10-17 | 2010-10-05 | Baxter International Inc. | Flow restrictor for infusion system |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006062132A1 (ja) * | 2004-12-07 | 2006-06-15 | The University Of Tokyo | Stereoscopic image reconstruction device, stereoscopic image reconstruction method, and stereoscopic image reconstruction program |
JP5567908B2 (ja) * | 2009-06-24 | 2014-08-06 | Canon Inc. | Three-dimensional measurement device, measurement method thereof, and program |
EP2708874A1 (en) * | 2012-09-12 | 2014-03-19 | Fei Company | Method of performing tomographic imaging of a sample in a charged-particle microscope |
JP7074344B2 (ja) * | 2016-10-11 | 2022-05-24 | Jun Yamazaki | Three-dimensional object formation instruction device, method for manufacturing a three-dimensional object, and program |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08292164A (ja) * | 1995-04-25 | 1996-11-05 | Hitachi Ltd | Electron microscope and method for observing three-dimensional atomic arrangements |
JP2002109516A (ja) * | 2000-09-27 | 2002-04-12 | Inst Of Physical & Chemical Res | Three-dimensional shape restoration method |
JP2003000583A (ja) * | 2001-05-31 | 2003-01-07 | Siemens Ag | Method for operating a computed tomograph |
JP2004132944A (ja) * | 2002-08-16 | 2004-04-30 | Foundation For The Promotion Of Industrial Science | Manufacturing error evaluation system, method, and program |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4149247A (en) * | 1975-12-23 | 1979-04-10 | Varian Associates, Inc. | Tomographic apparatus and method for reconstructing planar slices from non-absorbed and non-scattered radiation |
US4149248A (en) * | 1975-12-23 | 1979-04-10 | Varian Associates, Inc. | Apparatus and method for reconstructing data |
US4365339A (en) * | 1975-12-23 | 1982-12-21 | General Electric Company | Tomographic apparatus and method for reconstructing planar slices from non-absorbed and non-scattered radiation |
JPH0618570B2 (ja) * | 1985-03-28 | 1994-03-16 | Hitachi Ltd | Two-dimensional tomographic imaging apparatus |
US5552602A (en) * | 1991-05-15 | 1996-09-03 | Hitachi, Ltd. | Electron microscope |
US5866905A (en) * | 1991-05-15 | 1999-02-02 | Hitachi, Ltd. | Electron microscope |
US6051834A (en) * | 1991-05-15 | 2000-04-18 | Hitachi, Ltd. | Electron microscope |
JP3287858B2 (ja) * | 1991-05-15 | 2002-06-04 | Hitachi Ltd | Electron microscope apparatus and electron microscopy method |
US5414261A (en) * | 1993-07-01 | 1995-05-09 | The Regents Of The University Of California | Enhanced imaging mode for transmission electron microscopy |
SE9601229D0 (sv) * | 1996-03-07 | 1996-03-29 | B Ulf Skoglund | Apparatus and method for providing reconstruction |
JPH11250850A (ja) * | 1998-03-02 | 1999-09-17 | Hitachi Ltd | Scanning electron microscope, microscopy method, and interactive input device |
AU2002221101A1 (en) * | 2000-12-15 | 2002-06-24 | Norio Baba | Image processor, image processing method, recording medium and program |
EP1363245A1 (en) * | 2001-01-05 | 2003-11-19 | Center for Advanced Science and Technology Incubation, Ltd. | Three-dimensional verification supporting apparatus, three-dimensional structure verification method, record medium, and program |
JP3603204B2 (ja) * | 2001-03-12 | 2004-12-22 | Todai TLO, Ltd. | Three-dimensional structure verification support device, three-dimensional structure verification support method, and program |
WO2006062132A1 (ja) * | 2004-12-07 | 2006-06-15 | The University Of Tokyo | 立体画像再構成装置、立体画像再構成方法、及び立体画像再構成プログラム |
US7538329B2 (en) * | 2005-02-02 | 2009-05-26 | Nomadics, Inc. | Energy-transfer nanocomposite materials and methods of making and using same |
- 2005-12-07 WO PCT/JP2005/022471 patent/WO2006062132A1/ja active Application Filing
- 2005-12-07 JP JP2006546736A patent/JP4974680B2/ja not_active Expired - Fee Related
- 2007-06-06 US US11/810,632 patent/US7853069B2/en not_active Expired - Fee Related
- 2012-02-22 JP JP2012036457A patent/JP2012133796A/ja active Pending
Non-Patent Citations (2)
Title |
---|
ARGYROS A ET AL: "Electron tomography and computer visualisation of a three-dimensional photonic crystal in a butterfly wing-scale.", MICRON., vol. 33, no. 5, 2002, pages 483 - 487, XP002995726 *
ZIESE U ET AL: "Electron Tomography: a tool for 3D structural probing of heterogeneous catalysts at the nanometer scale.", APPLIED CATALYSIS A: GENERAL., vol. 260, no. 1, 25 March 2004 (2004-03-25), pages 71 - 74, XP004494829 * |
Also Published As
Publication number | Publication date |
---|---|
JP4974680B2 (ja) | 2012-07-11 |
JPWO2006062132A1 (ja) | 2008-06-12 |
US7853069B2 (en) | 2010-12-14 |
JP2012133796A (ja) | 2012-07-12 |
US20070253612A1 (en) | 2007-11-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 11810632 Country of ref document: US Ref document number: 2006546736 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 11810632 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 05814538 Country of ref document: EP Kind code of ref document: A1 |