WO2019156139A1 - Image processing device, image processing method, and image processing program - Google Patents

Image processing device, image processing method, and image processing program

Info

Publication number
WO2019156139A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
image
pixel
image processing
target pixel
Prior art date
Application number
PCT/JP2019/004336
Other languages
French (fr)
Japanese (ja)
Inventor
克己 薮崎
Original Assignee
興和株式会社
Priority date
Filing date
Publication date
Application filed by 興和株式会社
Priority to JP2019570789A
Publication of WO2019156139A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing

Definitions

  • the present invention relates to an image processing apparatus, an image processing method, and an image processing program for generating a single front image by overlaying a plurality of tomographic images generated from three-dimensional image data.
  • OCT: Optical Coherence Tomography
  • OCT angiography (OCTA): Optical Coherence Tomography Angiography
  • an angiographic image can be created non-invasively without using a fluorescent agent.
  • Several images of the same location on the fundus of the eye to be examined are captured in succession to obtain B-scan images separated by a time difference of several ms, and by assuming that short-time changes between these time-separated B-scan images are caused by changes in blood flow, a blood vessel tomographic image (motion contrast image) depicting the microvessels of the fundus is generated.
  • One blood vessel tomographic image is generated from a group of B-scan images having a time difference, and this is repeated over the observation target region while changing the scan position in the C-scan direction (y direction).
  • In this way, a plurality of spatially continuous blood vessel tomographic images is obtained. From these continuous blood vessel images, a single three-dimensional data set that can capture the blood vessel morphology in the retina three-dimensionally is constructed. That is, the three-dimensional data set is data in which the observation target region of the fundus of the eye to be examined is modeled three-dimensionally.
  • a blood vessel tomographic image is generated as a tomographic image of a plurality of xy planes that are continuous in the depth direction (z direction).
  • a plurality of continuous xy plane vascular tomographic images in a desired range are layered to generate a single front image (Enface image) on which the retinal blood vessels in the range are superimposed and used for various diagnoses.
  • En-face image: a single front image
  • OMAG: Optical Microangiography; SSADA: Split-spectrum Amplitude-decorrelation Angiography. Several such methods have been proposed for extracting blood flow changes; a simplified, purely illustrative sketch of motion-contrast extraction is given below.
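The patent names OMAG and SSADA as examples but does not commit to a specific motion-contrast formula in this passage. The following is a minimal sketch, assuming a plain inter-frame variance as the contrast measure and hypothetical function names; it only illustrates how repeated B-scans at one position become one blood vessel tomographic image and how those images stack into a three-dimensional data set.

```python
import numpy as np

def motion_contrast(b_scans: np.ndarray) -> np.ndarray:
    """Illustrative motion contrast from repeated B-scans.

    b_scans: array of shape (I, n, m), i.e. I B-scans of n x m pixels taken
    at the same y position with a time difference of several ms.
    Returns one "blood vessel tomographic image" of shape (n, m).
    Real systems use methods such as OMAG or SSADA; plain temporal variance
    is used here only to illustrate the idea.
    """
    return np.var(b_scans.astype(np.float64), axis=0)

def build_volume(b_scan_groups) -> np.ndarray:
    """Stack the t motion-contrast slices F_1..F_t into 3D data G of shape (t, n, m)."""
    return np.stack([motion_contrast(g) for g in b_scan_groups], axis=0)
```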
  • the present invention has been made in view of such points, and an object thereof is to provide an image processing apparatus, an image processing method, and an image processing program capable of generating a front image suitable for diagnosis.
  • an object is to provide an image processing apparatus, an image processing method, and an image processing program capable of generating a front image with improved contrast and improved blood vessel visibility.
  • In order to achieve the above object, the present invention firstly provides an image processing apparatus comprising: smoothing means for performing a smoothing process on three-dimensional image data; and front image generating means for generating a plurality of tomographic images continuous in a predetermined direction from the three-dimensional image data after the smoothing process and overlaying the plurality of tomographic images to generate a single front image, wherein the smoothing means includes first filter means for performing a Gaussian blur process on a pixel of interest and a plurality of peripheral pixels located around the pixel of interest on a plane orthogonal to the predetermined direction to replace the luminance value of the pixel of interest, and second filter means for performing a moving average process on a plurality of target pixels in a desired range centered on the pixel of interest and continuous in the predetermined direction to replace the luminance value of the pixel of interest (Invention 1).
  • According to Invention 1, the three-dimensional image data is processed by executing two different filtering processes: a Gaussian blur process in the plane direction of the tomographic images used to generate the front image, and a moving average process in the depth direction. This makes it possible to improve smoothness and lower the black level of the background without impairing blood vessel connectivity, and to generate a three-dimensional image and a front image suitable for diagnosis with a realistic amount of computation and computation time.
  • In a tomographic image, the resolution in the depth direction is higher than in the plane direction, so even if a moving average process, which is easily influenced by the values of neighboring pixels, is applied in the depth direction, there is little risk of losing the data of minute structures; as a result, smoothness can be improved and the black level of the background lowered without impairing blood vessel connectivity.
  • On the other hand, if a moving average process were applied in the plane direction, each pixel value would be strongly influenced by the values of its neighbors, and there is concern that contrast would decrease, structure data would be lost, or blood vessel connectivity would deteriorate. Therefore, by adopting in the plane direction a process such as Gaussian blurring, which weights a pixel's own value more heavily than the surrounding pixel values, it is possible to prevent the data of minute structures from being buried.
  • the Gaussian blurring process is much more computationally intensive than the moving average process and takes a long time to process. Therefore, it is not preferable to perform the Gaussian blurring process in all directions from the viewpoint of the calculation amount and the calculation time.
  • For these reasons, the present invention executes two different filtering processes, a Gaussian blur process in the plane direction and a moving average process in the depth direction, so that smoothness can be improved and the background black level reduced without impairing blood vessel connectivity, and so that a three-dimensional image and a front image suitable for diagnosis can be generated with a realistic amount of computation and computation time.
  • In Invention 1, it is preferable that the front image generating means selects a predetermined number of pixels in descending order of luminance value from the plurality of pixels at the same position in each of the plurality of tomographic images, and generates the front image by using the average of the luminance values of the selected pixels as the luminance value at that position of the front image (Invention 2).
  • In the above inventions (Inventions 1 and 2), it is preferable to further provide noise removal means for executing noise removal processing on the three-dimensional image data before the smoothing processing (Invention 3); a rough end-to-end sketch of this pipeline follows.
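Taking Inventions 1 to 3 together, the overall flow can be sketched as below. Library filters from scipy.ndimage stand in for the patent's own kernels, the noise removal step is omitted here for brevity, and all parameter values are assumptions; the individual steps are detailed later in this document.

```python
import numpy as np
from scipy import ndimage

def generate_enface_sketch(volume: np.ndarray, top_n: int = 3) -> np.ndarray:
    """Rough end-to-end sketch for a volume shaped (z, y, x).

    Stand-in steps:
      1) Gaussian blur only within each xy plane (sigma 0 along z),
      2) moving average only along z (the depth direction),
      3) average the top_n brightest values along z at each (y, x).
    """
    smoothed = ndimage.gaussian_filter(volume.astype(np.float64), sigma=(0, 1.0, 1.0))
    smoothed = ndimage.uniform_filter1d(smoothed, size=5, axis=0)
    top = np.sort(smoothed, axis=0)[-top_n:]      # largest values along depth
    return top.mean(axis=0)
```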
  • Secondly, the present invention provides an image processing method comprising: a smoothing step of performing a smoothing process on three-dimensional image data; and a front image generating step of generating a plurality of tomographic images continuous in a predetermined direction from the three-dimensional image data after the smoothing process and overlaying the plurality of tomographic images to generate a single front image, wherein, in the smoothing step, a Gaussian blur process is performed on a pixel of interest and a plurality of peripheral pixels located around the pixel of interest on a plane orthogonal to the predetermined direction to replace the luminance value of the pixel of interest, and a moving average process is performed on a plurality of target pixels in a desired range centered on the pixel of interest and continuous in the predetermined direction to replace the luminance value of the pixel of interest (Invention 4).
  • According to Invention 4, as with Invention 1, the three-dimensional image data is processed by executing two different filtering processes: a Gaussian blur process in the plane direction of the tomographic images used to generate the front image, and a moving average process in the depth direction. This makes it possible to improve smoothness and lower the black level of the background without impairing blood vessel connectivity, and to generate a three-dimensional image and a front image suitable for diagnosis with a realistic amount of computation and computation time.
  • In a tomographic image, the resolution in the depth direction is higher than in the plane direction, so even if a moving average process, which is easily influenced by the values of neighboring pixels, is applied in the depth direction, there is little risk of losing the data of minute structures; as a result, smoothness can be improved and the black level of the background lowered without impairing blood vessel connectivity.
  • On the other hand, if a moving average process were applied in the plane direction, each pixel value would be strongly influenced by the values of its neighbors, and there is concern that contrast would decrease, structure data would be lost, or blood vessel connectivity would deteriorate. Therefore, by adopting in the plane direction a process such as Gaussian blurring, which weights a pixel's own value more heavily than the surrounding pixel values, it is possible to prevent the data of minute structures from being buried.
  • the Gaussian blurring process is much more computationally intensive than the moving average process and takes a long time to process. Therefore, it is not preferable to perform the Gaussian blurring process in all directions from the viewpoint of the calculation amount and the calculation time.
  • For these reasons, the present invention executes two different filtering processes, a Gaussian blur process in the plane direction and a moving average process in the depth direction, so that smoothness can be improved and the background black level reduced without impairing blood vessel connectivity, and so that a three-dimensional image and a front image suitable for diagnosis can be generated with a realistic amount of computation and computation time.
  • In Invention 4, it is preferable that, in the front image generating step, a predetermined number of pixels are selected in descending order of luminance value from the plurality of pixels at the same position in each of the plurality of tomographic images, and the front image is generated by using the average of the luminance values of the selected pixels as the luminance value at that position of the front image (Invention 5).
  • In Inventions 4 and 5, it is preferable to further include a noise removal step of performing noise removal processing on the three-dimensional image data before the smoothing processing (Invention 6).
  • Thirdly, the present invention provides an image processing program for causing a computer to function as the image processing apparatus according to any one of Inventions 1 to 3, or for causing a computer to execute the image processing method according to any one of Inventions 4 to 6 (Invention 7).
  • According to the image processing apparatus, image processing method, and image processing program of the present invention, a front image suitable for diagnosis can be generated.
  • FIG. 1 is a block diagram illustrating the overall configuration of an image processing apparatus according to an embodiment of the present invention; the remaining drawings illustrate the fundus scanning, the acquisition of time-separated tomographic images, the processing flows, and the noise removal, smoothing, and front image generation described below.
  • Here, a tomographic imaging apparatus (OCT) is used to acquire tomographic images of the fundus E of the eye to be examined, and a single set of three-dimensional image data that captures the blood vessel morphology of the fundus three-dimensionally is constructed from the obtained tomographic images by OCT angiography. Image processing is then performed on this three-dimensional image data to generate a single front image in which the blood vessels are superimposed along the depth direction.
  • a front image suitable for diagnosis is generated with improved contrast and improved blood vessel visibility.
  • The image to be processed in the present invention is not limited to three-dimensional image data capturing the blood vessel morphology of the fundus three-dimensionally; the present invention can also be applied to three-dimensional image data constructed by photographing other objects with other types of apparatus.
  • FIG. 1 is a block diagram showing the entire system for acquiring and processing a tomographic image of the fundus of the eye to be examined.
  • the tomographic imaging apparatus 10 is an apparatus (OCT: Optical Coherence Tomography) that captures a tomographic image of the fundus of the subject's eye and operates, for example, in the Fourier domain method. Since the tomographic imaging apparatus 10 is known, detailed description thereof is omitted, but the tomographic imaging apparatus 10 is provided with a low coherence light source, and light from the low coherence light source is divided into reference light and signal light.
  • The signal light is raster scanned over the fundus E, for example in the x and y directions. The signal light scanned over and reflected by the fundus E is superimposed on the reference light reflected by a reference mirror to generate interference light, and an OCT signal carrying information in the depth direction (z direction) of the fundus is generated based on the interference light.
  • the image processing apparatus 20 includes a control unit 21 realized by a computer including a CPU, a RAM, a ROM, and the like, and the control unit 21 controls the entire image processing by executing an image processing program.
  • the image processing apparatus 20 includes a tomographic image forming unit 22, a storage unit 23, a vascular tomographic image forming unit 24, a three-dimensional data construction unit 25, a display unit 26, and an operation unit 27.
  • The tomographic image forming unit 22 is realized by a dedicated electronic circuit that executes a known analysis method such as the Fourier domain method, or by the image processing program executed by the CPU, and forms a tomographic image of the fundus of the eye to be examined based on the OCT signal generated by the tomographic imaging apparatus 10.
  • The tomographic image B_N is an image of m × n pixels (m pixels wide in the x direction and n pixels long in the z direction) and is also called a B-scan image.
  • The B-scan images B_Ni formed by the tomographic image forming unit 22 are stored in a storage unit 23 configured by, for example, a semiconductor memory or a hard disk device.
  • the storage unit 23 further stores the above-described image processing program and the like.
  • For example, at the scan position y_(N-1), I B-scan images B_(N-1)i (i = 1, 2, ..., I) with a time difference of several ms are acquired, and one blood vessel tomographic image is generated from them by the blood vessel tomographic image forming unit 24.
  • As shown in FIG. 3, the three-dimensional data construction unit 25 constructs, from the t spatially continuous blood vessel tomographic images F_N formed by the blood vessel tomographic image forming unit 24, a single set of three-dimensional image data G that can capture the blood vessel morphology in the retina three-dimensionally, and stores it in the storage unit 23.
  • the three-dimensional image data G is data in which the observation target region of the fundus of the eye to be examined is three-dimensionally modeled so that the blood vessel morphology in the retina can be captured three-dimensionally.
  • the image processing device 20 is provided with an image processing unit 30.
  • The image processing unit 30 includes a calculation unit 31, a counting unit 32, and a replacement unit 33 serving as noise removal means, a first filter unit 34 and a second filter unit 35 serving as smoothing means, and a front image generation unit 36, and executes various image processing on the three-dimensional image data G.
  • The calculation unit 31, the counting unit 32, and the replacement unit 33 execute noise removal processing on the three-dimensional image data G constructed by the three-dimensional data construction unit 25 and stored in the storage unit 23.
  • The calculation unit 31 calculates difference values between the luminance value of the target pixel in the image to be processed and the luminance values of a plurality of adjacent pixels located around the target pixel; the counting unit 32 counts, as the number of non-applicable adjacent pixels, the adjacent pixels whose calculated difference value is equal to or less than a predetermined threshold (luminance difference threshold); and the replacement unit 33 replaces the luminance value of the target pixel with the average of the luminance values of the adjacent pixels when the number of non-applicable adjacent pixels is equal to or less than a predetermined allowable number.
  • The first filter unit 34 and the second filter unit 35 perform a smoothing process on the three-dimensional image data G_1 after the noise removal processing. The first filter unit 34 performs a Gaussian blur process on a target pixel in the three-dimensional image data G_1 to be processed and a plurality of peripheral pixels located around the target pixel on a plane orthogonal to a predetermined direction (the depth direction), and replaces the luminance value of the target pixel. The second filter unit 35 performs a moving average process on a plurality of target pixels in a desired range centered on the target pixel and continuous in the predetermined direction (the depth direction), and replaces the luminance value of the target pixel.
  • The front image generation unit 36 generates a plurality of tomographic images continuous in the predetermined direction (the depth direction) from the three-dimensional image data G_2 after the smoothing process, and overlays the plurality of tomographic images to generate a single front image. Specifically, a predetermined number of pixels are selected in descending order of luminance value from the plurality of pixels at the same position in each of the tomographic images, and the front image is generated by using the average of the luminance values of the selected pixels as the luminance value at that position of the front image.
  • Each means or each image processing in the image processing unit 30 is realized by using a dedicated electronic circuit or by executing an image processing program.
  • The display unit 26 is configured by a display device such as an LCD, for example, and displays various images stored in the storage unit 23, various images generated or processed by the image processing apparatus 20, and accompanying information such as information on the subject.
  • The operation unit 27 includes, for example, a mouse, a keyboard, an operation pen, a pointer, an operation panel, and the like, and is used by the operator to select an image displayed on the display unit 26 and to give instructions to the image processing apparatus 20.
  • The processing target image F_T is an image of m × n pixels, and the blood vessel tomographic images F_(T-1) and F_(T+1) that precede and follow it are images of m × n pixels similar to the processing target image F_T. The blood vessel tomographic image F_(T-1), the processing target image F_T, and the blood vessel tomographic image F_(T+1) are all stored in the storage unit 23.
  • The adjacent pixels q_i total 26 pixels: the eight pixels in the processing target image F_T located above, below, to the left of, to the right of, and at the four corners of the target pixel Q, plus the 18 pixels in the blood vessel tomographic images F_(T-1) and F_(T+1) that are spatially positioned around the target pixel Q when those images are aligned with the processing target image F_T. This setting of 26 adjacent pixels applies when the target pixel Q is not set at an edge of the processing target image F_T; when the target pixel Q is set at an edge of the processing target image F_T, the adjacent pixels may be set appropriately within 26 according to the position of the target pixel.
  • For example, when the target pixel Q is set in the first column from the left and the third row from the top of the processing target image F_T, 17 pixels may be set as adjacent pixels: the five pixels located above, below, to the right of, to the upper right of, and to the lower right of the target pixel Q, and the 12 pixels in the preceding and following blood vessel tomographic images located at the positions corresponding to the target pixel Q and those five pixels. In this way, the adjacent pixels may be set appropriately within 26 according to the position of the target pixel.
  • the method for setting adjacent pixels may be appropriately changed according to the type of 3D image data to be processed, the purpose of noise removal, and the like.
  • For example, the blood vessel tomographic images F_(T-2) and F_(T+2) that precede and follow F_(T-1) and F_(T+1) may also be used when setting the adjacent pixels, extending the setting range from 3 × 3 to 5 × 5. Alternatively, only six pixels may be set as adjacent pixels: the four pixels adjacent to the target pixel above, below, to the left, and to the right (for example, the four pixels q_2, q_4, q_5, and q_7 in FIG. 5) and the two pixels located at the position of the target pixel in the preceding and following blood vessel tomographic images (for example, the two pixels q_13 and q_22 in FIG. 5). The method of setting the adjacent pixels may thus be changed as appropriate.
  • After obtaining the luminance value D of the target pixel Q and the luminance values d_i of the adjacent pixels q_i, the calculation unit 31 first calculates, for each of the 26 adjacent pixels q_i, a first difference value by subtracting the luminance value d_i of the adjacent pixel q_i from the luminance value D of the target pixel Q (step 103a). As many first difference values are calculated as there are adjacent pixels, and they are used to estimate whether the target pixel Q is white point noise.
  • Next, the counting unit 32 compares each calculated first difference value with a first luminance difference threshold PKT stored in advance in the storage unit 23, and counts the number of adjacent pixels for which the first difference value is equal to or less than the first luminance difference threshold PKT as the first number of non-applicable adjacent pixels (step 104a). That is, the number of adjacent pixels satisfying the conditional expression (luminance value D of the target pixel Q) - (luminance value d_i of the adjacent pixel q_i) ≤ PKT is the first number of non-applicable adjacent pixels.
  • Then, the replacement unit 33 determines whether the first number of non-applicable adjacent pixels is equal to or less than a first non-applicable allowable number PFT stored in advance in the storage unit 23 (step 105a). If the first number of non-applicable adjacent pixels is equal to or less than the first non-applicable allowable number PFT, the target pixel Q is determined to be white point noise.
  • Similarly, the calculation unit 31 calculates, for each of the 26 adjacent pixels q_i, a second difference value by subtracting the luminance value D of the target pixel Q from the luminance value d_i of the adjacent pixel q_i (step 103b). Like the first difference values, as many second difference values are calculated as there are adjacent pixels, and they are used to estimate whether the target pixel Q is black point noise.
  • Next, the counting unit 32 compares each calculated second difference value with a second luminance difference threshold HLT stored in advance in the storage unit 23, and counts the number of adjacent pixels for which the second difference value is equal to or less than the second luminance difference threshold HLT as the second number of non-applicable adjacent pixels (step 104b). That is, the number of adjacent pixels satisfying the conditional expression (luminance value d_i of the adjacent pixel q_i) - (luminance value D of the target pixel Q) ≤ HLT is the second number of non-applicable adjacent pixels.
  • Then, the replacement unit 33 determines whether the second number of non-applicable adjacent pixels is equal to or less than a second non-applicable allowable number HFT stored in advance in the storage unit 23 (step 105b). If the second number of non-applicable adjacent pixels is equal to or less than the second non-applicable allowable number HFT, the target pixel Q is determined to be black point noise.
  • If the target pixel Q is determined to be white point noise or black point noise, the replacement unit 33 calculates the average of the luminance values d_i of the adjacent pixels q_i (step 106), generates an image in which the luminance value of the target pixel Q is replaced with the calculated average value (step 107), and stores it in the storage unit 23.
  • On the other hand, if, as a result of the determination in step 105a, the first number of non-applicable adjacent pixels is not equal to or less than the first non-applicable allowable number PFT, and, as a result of the determination in step 105b, the second number of non-applicable adjacent pixels is not equal to or less than the second non-applicable allowable number HFT, the luminance value of the target pixel Q is left as it is without being replaced with the average value.
  • The three-dimensional image data G_1 after the noise removal processing is stored in the storage unit 23. Note that not all blood vessel tomographic images need to be processed; any number of them can be selected according to the type of three-dimensional image data to be processed, the purpose of the noise removal, and the like. A sketch of this noise removal step is given below.
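As one concrete reading of steps 103a through 107, the sketch below tests each interior voxel against its 26 neighbours in the 3 × 3 × 3 cube spanning F_(T-1), F_T, and F_(T+1). The parameter names mirror PKT, HLT, PFT, and HFT, but their default values, the volume layout, and the loop structure are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def remove_point_noise(volume: np.ndarray,
                       pkt: float = 30.0, pft: int = 0,
                       hlt: float = 30.0, hft: int = 0) -> np.ndarray:
    """White/black point-noise removal over a volume shaped (z, y, x).

    For each interior voxel Q with 26 neighbours q_i:
      - count neighbours with D - d_i <= PKT  (first non-applicable count),
      - count neighbours with d_i - D <= HLT  (second non-applicable count),
      - if either count is <= its allowable number (PFT / HFT),
        replace Q with the mean of its 26 neighbours.
    Threshold values here are placeholders. The plain triple loop is slow
    but keeps the per-pixel logic explicit.
    """
    v = volume.astype(np.float64)
    out = v.copy()
    Z, Y, X = v.shape
    for z in range(1, Z - 1):
        for y in range(1, Y - 1):
            for x in range(1, X - 1):
                block = v[z-1:z+2, y-1:y+2, x-1:x+2]
                d = np.delete(block.ravel(), 13)   # drop the centre -> 26 neighbours
                D = v[z, y, x]
                white = np.count_nonzero(D - d <= pkt) <= pft
                black = np.count_nonzero(d - D <= hlt) <= hft
                if white or black:
                    out[z, y, x] = d.mean()
    return out
```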
  • In other words, the noise removal processing compares the target pixel in the image to be processed with the adjacent pixels located around it, and when the luminance value of the target pixel is brighter or darker than those of the adjacent pixels by more than a predetermined amount, it is replaced with the average value of the adjacent pixels.
  • Increasing the required luminance difference between the target pixel and the adjacent pixels used for replacement, that is, setting the first and second luminance difference thresholds described above to larger values, reduces the number of target pixels that are replaced; conversely, setting them to smaller values increases the number of target pixels that are replaced.
  • It is also possible to use the strictest determination criterion, namely whether the target pixel is brighter than all of the adjacent pixels by more than the predetermined amount, or darker than all of the adjacent pixels by more than the predetermined amount. In this case, by setting the first non-applicable allowable number and the second non-applicable allowable number to 0, the luminance value of the target pixel is replaced with the average of the luminance values of the adjacent pixels only when the target pixel is brighter than all of the adjacent pixels by more than the predetermined amount, or darker than all of the adjacent pixels by more than the predetermined amount.
  • As the first non-applicable allowable number and the second non-applicable allowable number are made smaller, the number of target pixels replaced with the average value decreases; as they are made larger, the number of target pixels that are replaced increases.
  • the same value may be used for the first luminance difference threshold and the second luminance difference threshold, or different numerical values may be set individually. Moreover, the same value may be used for the first non-applicable allowable number and the second non-applicable allowable number, or different numerical values may be set individually.
  • For the condition values stored in advance in the storage unit 23, optimum values can be applied after considering the quality of the image and the properties (shape, brightness, size, and so on) of the structures to be extracted.
  • For example, when the first non-applicable allowable number is set small, the target pixel is determined to be a white point noise pixel and replaced with the average value of the adjacent pixels only when it has no bright pixels around it. If the first non-applicable allowable number is increased, the target pixel can be determined to be a white point noise pixel and replaced with the average value even when there are several high-luminance pixels in its vicinity; this enhances the noise removal effect and improves the black level of the background, although microvessels then become more difficult to extract.
  • In the present embodiment, both white point noise and black point noise are removed, but it is not always necessary to remove both; depending on the type of three-dimensional image data to be processed, the purpose of the noise removal, and the like, only one of the white point noise and the black point noise may be removed.
  • The first filter unit 34 performs the Gaussian blur process on a total of nine pixels, namely the target pixel P and the eight peripheral pixels p_i, and replaces the luminance value of the target pixel P (step 203).
  • Specifically, the luminance values of the target pixel P and the eight peripheral pixels p_i are weighted and averaged using a kernel K derived from a Gaussian distribution function, so that the weight decreases with distance from the center, and the result is used as the new luminance value of the target pixel P.
  • When weighting is performed using the kernel K at a rate corresponding to the position of each peripheral pixel p_i, for example, the luminance values e_1, e_3, e_6, and e_8 of the peripheral pixels p_1, p_3, p_6, and p_8 are given a weight of 1/50, the luminance values e_2, e_4, e_5, and e_7 of the peripheral pixels p_2, p_4, p_5, and p_7 are given a weight of 6/50, and the luminance value of the target pixel P is given a weight of 22/50; the resulting weighted average is set as the luminance value of the target pixel P (a sketch of this kernel follows).
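A direct transcription of the quoted weights (1/50, 6/50, and 22/50, which sum to one) applied within each xy plane might look like the following; the border handling and the helper structure are assumptions, since the patent only says the peripheral pixels may be reduced at the edges.

```python
import numpy as np

# 3 x 3 kernel from the embodiment: corners 1/50, edge neighbours 6/50, centre 22/50
KERNEL_XY = np.array([[1.0,  6.0, 1.0],
                      [6.0, 22.0, 6.0],
                      [1.0,  6.0, 1.0]]) / 50.0

def gaussian_blur_xy(volume: np.ndarray) -> np.ndarray:
    """Apply the 3 x 3 weighted average within every xy plane of a (z, y, x) volume.

    Edge pixels are handled by replicating the border, which is only one
    possible reading of the edge rule left open in the description.
    """
    v = volume.astype(np.float64)
    padded = np.pad(v, ((0, 0), (1, 1), (1, 1)), mode="edge")
    out = np.zeros_like(v)
    for dy in range(3):
        for dx in range(3):
            # accumulate each shifted copy times its kernel weight
            out += KERNEL_XY[dy, dx] * padded[:, dy:dy + v.shape[1], dx:dx + v.shape[2]]
    return out
```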
  • This processing is repeated while setting all the pixels in the three-dimensional image data G_1 as the target pixel in turn, and finally three-dimensional image data G_2, obtained by performing the smoothing process on the entire three-dimensional image data G_1, is generated and stored in the storage unit 23.
  • the target range can also be set arbitrarily.
  • In this way, the three-dimensional image data G_1 is processed by executing two different filtering processes, a Gaussian blur process in the plane direction (xy direction) and a moving average process in the depth direction (z direction), so that a three-dimensional image and a front image suitable for diagnosis can be generated.
  • As described above, the depth direction has a higher resolution than the plane direction, so even if moving average processing is performed in the depth direction, smoothness can be improved and the black level of the background reduced without impairing blood vessel connectivity. On the other hand, if moving average processing were performed in the plane direction, contrast would be lowered and blood vessel connectivity would deteriorate; by adopting the Gaussian blur process for the plane direction, it becomes possible to prevent the data of minute structures from being buried.
  • Next, j xy-plane tomographic images H_L (L = 1, 2, ..., j) are generated from the three-dimensional image data G_2, and a single front image E is generated by overlaying these j xy-plane tomographic images H_L.
  • Because Gaussian blur processing is performed in the plane direction and moving average processing is performed in the depth direction, three-dimensional images and front images suitable for diagnosis can be generated.
  • In particular, when the front image is generated after the smoothing process is performed on the three-dimensional image data that has undergone the noise removal process, as in the present embodiment, a front image with improved contrast and extremely good blood vessel visibility can be obtained.
  • The above setting of eight peripheral pixels applies when the target pixel P is not set at an edge of the three-dimensional image data G_1; when the target pixel P is set at an edge, the peripheral pixels may be set appropriately within eight according to the position of the target pixel.
  • The kernel used in the Gaussian blur processing by the first filter unit 34 need not necessarily be the combination of weights over the 3 × 3 range described above; depending on the type of three-dimensional image data to be processed, the purpose of the smoothing, and the like, a kernel with a different combination of weights, for example over a 5 × 5 range, may be used. If Gaussian blur processing is performed using a kernel over a 5 × 5 range, a total of 24 pixels located around the target pixel on the xy plane containing the target pixel are set as the peripheral pixels.
  • Likewise, the number of target pixels subjected to the moving average process by the second filter unit 35 need not necessarily be five; as long as the target pixels are pixels in a range centered on the target pixel, a desired number of target pixels such as 3, 5, or 7 may be used depending on the type of three-dimensional image data to be processed, the purpose of the smoothing, and the like (see the sketch below).
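For the depth direction, a centred moving average over a configurable odd window can be sketched as follows; combining it with the xy Gaussian blur from the previous sketch gives the overall smoothing step, here under the assumed name smooth_anisotropic (the order of the two filters is also an assumption, as the description does not fix it).

```python
import numpy as np

def moving_average_z(volume: np.ndarray, window: int = 5) -> np.ndarray:
    """Centred moving average along the depth (z) axis of a (z, y, x) volume.

    window should be odd (3, 5, 7, ...); near the top and bottom of the
    volume the average is taken over the pixels that actually exist,
    which is one possible treatment of the edge case.
    """
    v = volume.astype(np.float64)
    half = window // 2
    out = np.empty_like(v)
    for z in range(v.shape[0]):
        lo, hi = max(0, z - half), min(v.shape[0], z + half + 1)
        out[z] = v[lo:hi].mean(axis=0)
    return out

def smooth_anisotropic(volume: np.ndarray, window: int = 5) -> np.ndarray:
    """Gaussian blur in each xy plane (previous sketch) followed by a moving average along z."""
    return moving_average_z(gaussian_blur_xy(volume), window=window)
```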
  • The xy-plane tomographic images H_L need not necessarily be generated over the entire three-dimensional image data G_2; an arbitrary portion of the three-dimensional image data G_2 to be observed as a diagnosis target may be set, and a plurality of xy-plane tomographic images H_L may be generated only for that portion.
  • a single front image E is generated by overlaying j xy planar tomographic images HL (step 400).
  • Rather than simply selecting the maximum luminance value from among the luminance values of the plurality of pixels at the same position (the marked pixels in FIG. 8) in each of the j xy-plane tomographic images H_L, in the present embodiment a predetermined number of pixels are selected in descending order of luminance value from the plurality of pixels at the same position in each of the j xy-plane tomographic images H_L, and the front image E is generated by using the average of the luminance values of the selected pixels as the luminance value at that position of the front image E (see the sketch below).
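This projection rule can be written compactly as follows; the default of three selected pixels is an example value, not one taken from the patent.

```python
import numpy as np

def enface_top_n(slab: np.ndarray, top_n: int = 3) -> np.ndarray:
    """Collapse j xy-plane slices H_L, shaped (j, y, x), into one front image E.

    At every (y, x) position the top_n largest luminance values along the
    slice axis are averaged; top_n=1 would reduce to a plain maximum
    projection, which the description contrasts against.
    """
    ordered = np.sort(slab.astype(np.float64), axis=0)  # ascending along the slice axis
    return ordered[-top_n:].mean(axis=0)

# Example: 12 slices of 4 x 4 pixels, as in the schematic with twelve pixels per column.
# slab = np.random.rand(12, 4, 4); E = enface_top_n(slab, top_n=3)
```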
  • FIG. 9 schematically shows this process, in which the average value of the luminance values of a predetermined number of selected pixels is used as the luminance value at the position of the front image E.
  • FIG. 9(a) shows a case where there is a thick blood vessel at the position, FIG. 9(b) shows a case where there is a thin blood vessel at the position, and FIG. 9(c) shows a case where there is a noise bright spot at the position.
  • On the left side of each of FIGS. 9(a), (b), and (c), the twelve pixels at the same position in twelve xy-plane tomographic images H_L are shown arranged in a line as they are.
  • Compared with simply setting the maximum luminance value among the plurality of pixels at the same position as the luminance value of the front image E, selecting a predetermined number of pixels in descending order of luminance value and setting the average of their luminance values as the luminance value at that position of the front image E has the following effects. As shown in FIG. 9(a), if there is a thick blood vessel at the position, the high luminance values are reflected in the luminance value at that position of the front image E, and the blood vessel can be seen clearly in the front image E. As shown in FIG. 9(b), if there is a thin blood vessel at the position, the luminance value at that position of the front image E is calculated such that the brightness level of the small blood vessel is maintained in the front image E. And as shown in FIG. 9(c), if there is a noise bright spot at the position, the luminance value at that position of the front image E becomes small as a result of the averaging, so that salt-and-pepper noise in the front image E can be reduced.
  • the black level of the background can be reduced, and a front image suitable for diagnosis can be generated.
  • the present invention can be used as a method for generating a single front image by overlaying a plurality of tomographic images generated from three-dimensional image data.

Abstract

This image processing device 20 comprises: a smoothing means that performs smoothing processing on three-dimensional image data; and an en-face image generation means 36 that generates a plurality of tomographic images that are sequential in a prescribed direction from the smoothed three-dimensional image data and layers the plurality of tomographic images to generate a single en-face image. The smoothing means has: a first filter means 34 that replaces the luminance value of a pixel of interest by performing Gaussian blur processing on the pixel of interest and a plurality of surrounding pixels that are positioned around the pixel of interest in a plane that is orthogonal to the prescribed direction; and a second filter means 35 that replaces the luminance value of the pixel of interest by performing moving average processing on a plurality of target pixels that are sequential in the prescribed direction and are within a desired range that is centered on the pixel of interest.

Description

Image processing apparatus, image processing method, and image processing program
 The present invention relates to an image processing apparatus, an image processing method, and an image processing program for generating a single front image by overlaying a plurality of tomographic images generated from three-dimensional image data.
 One type of ophthalmic diagnostic instrument is a tomographic imaging apparatus called OCT (Optical Coherence Tomography), which captures tomographic images of the retina. In typical OCT imaging, tomographic images are captured at a rate of, for example, 40 images per second, and 100 or more images are acquired in a single examination (imaging of one part of the retina). In recent years, a technique called OCT angiography (OCTA: Optical Coherence Tomography Angiography), a pseudo-angiographic method using OCT, has become known as a way to grasp the retinal blood vessel morphology.
 In OCT angiography, an image resembling an angiogram can be created non-invasively without using a fluorescent agent. Specifically, several images of the same location on the fundus of the eye to be examined are captured in succession to obtain B-scan images separated by a time difference of several ms, and by assuming that short-time changes between these time-separated B-scan images are changes in blood flow, a blood vessel tomographic image (motion contrast image) depicting the microvessels of the fundus is generated. One blood vessel tomographic image is generated from a group of time-separated B-scan images, and by repeating this while changing the scan position in the C-scan direction (y direction) over the observation target region, a plurality of spatially continuous blood vessel tomographic images can be obtained. From these continuous blood vessel images, a single three-dimensional data set that can capture the blood vessel morphology in the retina three-dimensionally is constructed. That is, the three-dimensional data set is data in which the observation target region of the fundus of the eye to be examined is modeled three-dimensionally.
 From this three-dimensional data set, blood vessel tomographic images are in turn generated as tomographic images of a plurality of xy planes continuous in the depth direction (z direction). A plurality of continuous xy-plane blood vessel tomographic images in a desired range are then layered to generate a single front image (en-face image) on which the retinal blood vessels in that range are superimposed, and this image is used for various diagnoses. For extracting blood flow changes, several methods have been proposed, such as the OMAG (Optical Microangiography) method and the SSADA (Split-spectrum Amplitude-decorrelation Angiography) method, as shown, for example, in Patent Document 1 and Non-Patent Document 1.
 Patent Document 1: Japanese Translation of PCT International Application Publication No. 2013-518695
 When generating a front image for diagnosis, salt-grain noise (spike noise) within the region gets mixed into the gaps between blood vessels and degrades the blood vessel extraction results. Moreover, if the data are simply averaged to generate the front image in order to reduce the influence of the noise, the averaging raises the black level of the background and lowers the contrast of the front image, so a front image suitable for diagnosis cannot easily be obtained.
 The present invention has been made in view of these points, and an object thereof is to provide an image processing apparatus, an image processing method, and an image processing program capable of generating a front image suitable for diagnosis. In particular, an object is to provide an image processing apparatus, an image processing method, and an image processing program capable of generating a front image with improved contrast and improved blood vessel visibility.
 To achieve the above object, firstly, the present invention provides an image processing apparatus comprising: smoothing means for performing a smoothing process on three-dimensional image data; and front image generating means for generating a plurality of tomographic images continuous in a predetermined direction from the three-dimensional image data after the smoothing process and overlaying the plurality of tomographic images to generate a single front image, wherein the smoothing means includes first filter means for performing a Gaussian blur process on a pixel of interest and a plurality of peripheral pixels located around the pixel of interest on a plane orthogonal to the predetermined direction to replace the luminance value of the pixel of interest, and second filter means for performing a moving average process on a plurality of target pixels in a desired range centered on the pixel of interest and continuous in the predetermined direction to replace the luminance value of the pixel of interest (Invention 1).
 According to the above invention (Invention 1), the three-dimensional image data is processed by executing two different filtering processes, a Gaussian blur process in the plane direction of the tomographic images used to generate the front image and a moving average process in the depth direction. This makes it possible to improve smoothness and lower the black level of the background without impairing blood vessel connectivity, and to generate a three-dimensional image and a front image suitable for diagnosis with a realistic amount of computation and computation time.
 In a tomographic image, the resolution in the depth direction is higher than in the plane direction, so even if a moving average process, which is easily influenced by the values of neighboring pixels, is applied in the depth direction, there is little risk of losing the data of minute structures; as a result, smoothness can be improved and the black level of the background lowered without impairing blood vessel connectivity.
 On the other hand, if a moving average process were applied in the plane direction, each pixel value would be strongly influenced by the values of its neighbors, and there is concern that contrast would decrease, structure data would be lost, or blood vessel connectivity would deteriorate. Therefore, by adopting in the plane direction a process such as Gaussian blurring, which weights a pixel's own value more heavily than the surrounding pixel values, it is possible to prevent the data of minute structures from being buried.
 Note that the Gaussian blur process requires far more computation than the moving average process and takes a long time, so applying Gaussian blurring in all directions is not preferable from the viewpoint of computation amount and computation time. For these reasons, the present invention is configured to execute two different filtering processes, a Gaussian blur process in the plane direction and a moving average process in the depth direction, so that smoothness can be improved and the background black level reduced without impairing blood vessel connectivity, and so that a three-dimensional image and a front image suitable for diagnosis can be generated with a realistic amount of computation and computation time.
 In the above invention (Invention 1), it is preferable that the front image generating means selects a predetermined number of pixels in descending order of luminance value from the plurality of pixels at the same position in each of the plurality of tomographic images, and generates the front image by using the average of the luminance values of the selected pixels as the luminance value at that position of the front image (Invention 2).
 In the above inventions (Inventions 1 and 2), it is preferable to further provide noise removal means for executing noise removal processing on the three-dimensional image data before the smoothing processing (Invention 3).
 Secondly, the present invention provides an image processing method comprising: a smoothing step of performing a smoothing process on three-dimensional image data; and a front image generating step of generating a plurality of tomographic images continuous in a predetermined direction from the three-dimensional image data after the smoothing process and overlaying the plurality of tomographic images to generate a single front image, wherein, in the smoothing step, a Gaussian blur process is performed on a pixel of interest and a plurality of peripheral pixels located around the pixel of interest on a plane orthogonal to the predetermined direction to replace the luminance value of the pixel of interest, and a moving average process is performed on a plurality of target pixels in a desired range centered on the pixel of interest and continuous in the predetermined direction to replace the luminance value of the pixel of interest (Invention 4).
 According to the above invention (Invention 4), the three-dimensional image data is processed by executing two different filtering processes, a Gaussian blur process in the plane direction of the tomographic images used to generate the front image and a moving average process in the depth direction. This makes it possible to improve smoothness and lower the black level of the background without impairing blood vessel connectivity, and to generate a three-dimensional image and a front image suitable for diagnosis with a realistic amount of computation and computation time.
 In a tomographic image, the resolution in the depth direction is higher than in the plane direction, so even if a moving average process, which is easily influenced by the values of neighboring pixels, is applied in the depth direction, there is little risk of losing the data of minute structures; as a result, smoothness can be improved and the black level of the background lowered without impairing blood vessel connectivity.
 On the other hand, if a moving average process were applied in the plane direction, each pixel value would be strongly influenced by the values of its neighbors, and there is concern that contrast would decrease, structure data would be lost, or blood vessel connectivity would deteriorate. Therefore, by adopting in the plane direction a process such as Gaussian blurring, which weights a pixel's own value more heavily than the surrounding pixel values, it is possible to prevent the data of minute structures from being buried.
 Note that the Gaussian blur process requires far more computation than the moving average process and takes a long time, so applying Gaussian blurring in all directions is not preferable from the viewpoint of computation amount and computation time. For these reasons, the present invention is configured to execute two different filtering processes, a Gaussian blur process in the plane direction and a moving average process in the depth direction, so that smoothness can be improved and the background black level reduced without impairing blood vessel connectivity, and so that a three-dimensional image and a front image suitable for diagnosis can be generated with a realistic amount of computation and computation time.
 In the above invention (Invention 4), it is preferable that, in the front image generating step, a predetermined number of pixels are selected in descending order of luminance value from the plurality of pixels at the same position in each of the plurality of tomographic images, and the front image is generated by using the average of the luminance values of the selected pixels as the luminance value at that position of the front image (Invention 5).
 In the above inventions (Inventions 4 and 5), it is preferable to further include a noise removal step of performing noise removal processing on the three-dimensional image data before the smoothing processing (Invention 6).
 Thirdly, the present invention provides an image processing program for causing a computer to function as the image processing apparatus according to any one of Inventions 1 to 3, or for causing a computer to execute the image processing method according to any one of Inventions 4 to 6 (Invention 7).
 According to the image processing apparatus, the image processing method, and the image processing program of the present invention, a front image suitable for diagnosis can be generated.
 The drawings comprise: a block diagram (FIG. 1) showing the overall configuration of an image processing apparatus according to an embodiment of the present invention; an explanatory diagram showing how fundus tomographic images are acquired by scanning the fundus; an explanatory diagram showing how a plurality of fundus tomographic images of the same location on the fundus are acquired with a time difference and a blood vessel tomographic image is generated; a flowchart showing the overall flow of image processing in the present embodiment; a flowchart showing the flow of the noise removal processing in the present embodiment; a flowchart showing the flow of the smoothing processing in the present embodiment; explanatory diagrams (part 1 and part 2) schematically showing the relationship between the target pixel and the adjacent pixels in the noise removal processing by the image processing apparatus according to the present embodiment; an explanatory diagram in which (a) schematically shows the relationship between the target pixel and the peripheral pixels and (b) the relationship between the target pixel and the target pixels in the smoothing processing by the image processing apparatus according to the present embodiment; and explanatory diagrams (part 1 and part 2) schematically showing how a front image is generated by the image processing apparatus according to the present embodiment.
 Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. Here, tomographic images of the fundus E of an eye to be examined are acquired with a tomographic imaging apparatus (OCT), a single set of three-dimensional image data capable of capturing the blood vessel morphology of the fundus three-dimensionally is constructed from the obtained tomographic images by OCT angiography, and image processing is applied to this three-dimensional image data to generate a single front image in which the blood vessels are superimposed in the depth direction. As a result of the image processing, a front image suitable for diagnosis is generated, with improved contrast and improved blood vessel visibility. Note that the images to be processed by the present invention are not limited to three-dimensional image data capturing the blood vessel morphology of the fundus three-dimensionally; the invention can also be applied to three-dimensional image data constructed by imaging other objects with other types of apparatus.
 FIG. 1 is a block diagram showing the entire system for acquiring and processing tomographic images of the fundus of the eye to be examined. The tomographic imaging apparatus 10 is an apparatus (OCT: Optical Coherence Tomography) that captures tomographic images of the fundus of the eye to be examined and operates, for example, in the Fourier-domain mode. Since the tomographic imaging apparatus 10 is known, a detailed description is omitted; the tomographic imaging apparatus 10 is provided with a low-coherence light source, and the light from the low-coherence light source is split into reference light and signal light. As shown in FIG. 2, the signal light is raster-scanned over the fundus E, for example in the x and y directions. The signal light scanned over and reflected by the fundus E is superimposed on the reference light reflected by a reference mirror to produce interference light, and an OCT signal representing information in the depth direction (z direction) of the fundus is generated on the basis of this interference light.
 The image processing apparatus 20 has a control unit 21 implemented by a computer comprising a CPU, RAM, ROM and the like, and the control unit 21 controls the image processing as a whole by executing an image processing program. The image processing apparatus 20 is also provided with a tomographic image forming unit 22, a storage unit 23, a vascular tomographic image forming unit 24, a three-dimensional data construction unit 25, a display unit 26 and an operation unit 27.
 The tomographic image forming unit 22 is implemented by a dedicated electronic circuit that executes a known analysis method such as the Fourier-domain method, or by the image processing program executed by the above-mentioned CPU, and forms tomographic images of the fundus of the eye to be examined on the basis of the OCT signals generated by the tomographic imaging apparatus 10.
 For example, as shown in FIG. 2, when the fundus E is scanned in the x direction at a position y_N (N = 1, 2, ..., t) in the y direction, sampling is performed a plurality of times (m times) during that scan. At each sampling point in the x direction, a tomographic image in the z direction (A-scan image) A_h (h = 1, 2, ..., m) is acquired, and a tomographic image B_N (N = 1, 2, ..., t) is formed from these m A-scan images A_h. Since an A-scan image is stored, for example, with a width of one pixel in the x direction and a length of n pixels in the z direction, the tomographic image B_N is an image of m × n pixels (m pixels long in the x direction and n pixels long in the z direction) and is also called a B-scan image.
 When the formation of a B-scan image is performed several times in succession (for example, i times), with a time difference of several milliseconds, at each of the positions y_N (N = 1, 2, ..., t), i B-scan images B_NI (N = 1, 2, ..., t; I = 1, 2, ..., i) separated by several milliseconds are formed for each location. The t × i B-scan images B_NI formed by the tomographic image forming unit 22 are stored in the storage unit 23, which is constituted by, for example, a semiconductor memory or a hard disk device. The storage unit 23 also stores the image processing program described above and other data.
 The vascular tomographic image forming unit 24 is implemented by a dedicated electronic circuit that executes a known method for extracting blood flow changes, such as the OMAG method or the SSADA method, or by the image processing program executed by the above-mentioned CPU, and forms, on the basis of the plurality of tomographic images (B-scan images B_NI) of the same location of the fundus of the eye to be examined separated by several milliseconds and formed by the tomographic image forming unit 22, a vascular tomographic image F_N (N = 1, 2, ..., t) depicting the microvessels of the fundus at that position.
 For example, as shown in FIG. 3, the vascular tomographic image forming unit 24 forms a vascular tomographic image F_(N-1) from the i B-scan images B_(N-1)I (I = 1, 2, ..., i) separated by several milliseconds at the position y_(N-1), a vascular tomographic image F_N from the i B-scan images B_NI (I = 1, 2, ..., i) separated by several milliseconds at the position y_N, and a vascular tomographic image F_(N+1) from the i B-scan images B_(N+1)I (I = 1, 2, ..., i) separated by several milliseconds at the position y_(N+1). In this way, t vascular tomographic images F_N (N = 1, 2, ..., t), one at each of the positions y_N (N = 1, 2, ..., t), are formed and stored in the storage unit 23.
 As shown in FIG. 3, the three-dimensional data construction unit 25 constructs, from the t spatially continuous vascular tomographic images F_N formed by the vascular tomographic image forming unit 24, a single set of three-dimensional image data G capable of capturing the blood vessel morphology in the retina three-dimensionally, and stores it in the storage unit 23. The three-dimensional image data G is data in which the observation target region of the fundus of the eye to be examined is modeled three-dimensionally so that the blood vessel morphology in the retina can be captured stereoscopically.
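 As a minimal illustrative sketch of this data layout (the array names, sizes and the NumPy representation below are assumptions made here for illustration, not part of the disclosed apparatus), the construction of the three-dimensional data set G can be pictured as stacking the t vascular tomographic images, each of m × n pixels, along the y axis:

    import numpy as np

    # Illustrative dimensions only: t vascular tomographic images of m x n pixels.
    t, m, n = 256, 320, 640

    # Each F_N is an x-z tomogram (m pixels in x, n pixels in z); random values
    # stand in for real OCT angiography luminance data.
    vascular_tomograms = [np.random.rand(m, n).astype(np.float32) for _ in range(t)]

    # Stacking the t spatially continuous tomograms along a new y axis gives one
    # three-dimensional data set G with axes (y, x, z).
    G = np.stack(vascular_tomograms, axis=0)

    print(G.shape)  # (256, 320, 640)

 The later sketches below assume this (y, x, z) indexing of the volume.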
 The image processing apparatus 20 is provided with an image processing unit 30. The image processing unit 30 includes a calculation means 31, a counting means 32 and a replacement means 33 serving as noise removal means, a first filter means 34 and a second filter means 35 serving as smoothing means, and a front image generation means 36, and executes various kinds of image processing on the three-dimensional image data G.
 As will be described later, the calculation means 31, the counting means 32 and the replacement means 33 execute noise removal processing on the three-dimensional image data G constructed by the three-dimensional data construction unit 25 and stored in the storage unit 23. The calculation means 31 calculates the difference between the luminance value of a target pixel in the image to be processed and the luminance value of each of a plurality of adjacent pixels located around the target pixel, the counting means 32 counts, as the number of non-corresponding adjacent pixels, the number of adjacent pixels for which the calculated difference value is equal to or less than a predetermined threshold (luminance difference threshold), and, when the number of non-corresponding adjacent pixels is equal to or less than a predetermined number (non-corresponding allowable number), the replacement means 33 replaces the luminance value of the target pixel with the average of the luminance values of the plurality of adjacent pixels.
 The first filter means 34 and the second filter means 35 execute smoothing processing on the three-dimensional image data G_1 after the noise removal processing. The first filter means 34 executes Gaussian blur processing on a pixel of interest in the three-dimensional image data G_1 to be processed and on a plurality of peripheral pixels located around the pixel of interest on the plane orthogonal to a predetermined direction (the depth direction), and replaces the luminance value of the pixel of interest; the second filter means 35 executes moving average processing on a plurality of target pixels in a desired range centered on the pixel of interest and continuous in the predetermined direction (the depth direction), and replaces the luminance value of the pixel of interest.
 The front image generation means 36 generates a plurality of tomographic images continuous in the predetermined direction (the depth direction) from the three-dimensional image data G_2 after the smoothing processing and generates a single front image by layering the plurality of tomographic images; it selects, from the plurality of pixels at the same position in each of the plurality of tomographic images, a predetermined number of pixels in descending order of luminance value, and uses the average of the luminance values of the selected predetermined number of pixels as the luminance value of that position in the front image, thereby generating the front image. Each means and each image process in the image processing unit 30 is realized by using a dedicated electronic circuit or by executing the image processing program.
 The display unit 26 is constituted by a display device such as an LCD, for example, and displays the various images stored in the storage unit 23, the various images generated or processed by the image processing apparatus 20, and accompanying information such as information on the subject.
 The operation unit 27 includes, for example, a mouse, a keyboard, an operation pen, a pointer, an operation panel and the like, and is used to select an image displayed on the display unit 26 or for the operator to give instructions to the image processing apparatus 20 and the like.
 Next, the flow of the various kinds of image processing executed on the three-dimensional image data by the image processing unit 30 will be described with reference to the flowcharts of FIG. 4. Note that S100 to S400 in FIG. 4(a) correspond to steps 100 to 400 in the following description of the flow, S101 to S107 in FIG. 4(b) correspond to steps 101 to 107, and S201 to S205 in FIG. 4(c) correspond to steps 201 to 205.
 In the present embodiment, image processing for removing noise from the constructed three-dimensional image data G is performed first (step 100). The three-dimensional image data G is a set of t spatially continuous vascular tomographic images F_N (N = 1, 2, ..., t); one of these vascular tomographic images F_N is taken as the processing target image F_T, and the noise removal processing is performed using the preceding and following vascular tomographic images F_(T-1) and F_(T+1) together with it. As shown in FIG. 5(b), the processing target image F_T is an image of m × n pixels, and the vascular tomographic images F_(T-1) and F_(T+1) are tomographic images continuous with the processing target image F_T and, like the processing target image F_T, are images of m × n pixels. The vascular tomographic image F_(T-1), the processing target image F_T and the vascular tomographic image F_(T+1) are all stored in the storage unit 23.
 First, as shown in FIG. 5, the calculation means 31 sets one pixel of the processing target image F_T as the target pixel Q, and sets the total of 26 pixels spatially surrounding the target pixel in the processing target image F_T and in the preceding and following tomographic images F_(T-1) and F_(T+1) as adjacent pixels q_i (i = 1 to 26) (step 101). That is, the adjacent pixels q_i consist of the eight pixels located above, below, to the left and right of, and diagonally adjacent to the four corners of the target pixel Q in the processing target image F_T, plus the 18 pixels that come to lie spatially around the target pixel Q when the vascular tomographic images F_(T-1) and F_(T+1) are aligned with the processing target image F_T, for a total of 26 pixels. FIG. 5 shows the relationship between the target pixel Q, set in the fourth column from the left and the third row from the top of the processing target image F_T, and the 26 adjacent pixels q_i spatially surrounding the target pixel Q in the processing target image F_T and the preceding and following vascular tomographic images F_(T-1) and F_(T+1). FIG. 6 shows only the target pixel Q and the adjacent pixels q_i cut out from the aligned processing target image F_T and the preceding and following vascular tomographic images F_(T-1) and F_(T+1).
 Note that when the target pixel Q is set at an edge of the processing target image F_T, 26 adjacent pixels are not necessarily set; in that case, up to 26 adjacent pixels may be set as appropriate according to the position of the target pixel. For example, when the target pixel is set in the first column from the left and the third row from the top of the processing target image F_T, the 17 pixels consisting of the five pixels above, below, to the right of, diagonally above right of and diagonally below right of the target pixel Q, together with the 12 pixels located adjacent to the target pixel Q and those five pixels in the preceding and following continuous vascular tomographic images, may be set as the adjacent pixels. Likewise, when no preceding or following vascular tomographic image exists, up to 26 adjacent pixels may be set as appropriate according to the position of the target pixel. Furthermore, the method of setting adjacent pixels may be changed as appropriate depending on the type of three-dimensional image data to be processed, the purpose of the noise removal, and so on.
 Depending on the type of image to be processed, the purpose of the noise removal, and so on, the vascular tomographic images F_(T-2) and F_(T+2) preceding and following the vascular tomographic images F_(T-1) and F_(T+1) may also be used for setting adjacent pixels, so that the adjacent pixels are set over a 5 × 5 range rather than a 3 × 3 range. Alternatively, the six pixels consisting of the four pixels above, below, to the left of and to the right of the target pixel (for example, the four pixels q_2, q_4, q_5 and q_7 in FIG. 5) together with the two pixels adjacent to the target pixel in the preceding and following vascular tomographic images (for example, the two pixels q_13 and q_22 in FIG. 5) may be set as the adjacent pixels. In this way, the method of setting adjacent pixels may be changed as appropriate.
 Next, the calculation means 31 obtains the luminance value D of the target pixel Q and the luminance values d_i (i = 1 to 26) of the 26 adjacent pixels q_i from the original image data of the processing target image F_T and the vascular tomographic images F_(T-1) and F_(T+1) (step 102).
 After obtaining the luminance value D of the target pixel Q and the luminance values d_i of the adjacent pixels q_i, the calculation means 31 calculates a first difference value, obtained by subtracting the luminance value d_i of each adjacent pixel q_i from the luminance value D of the target pixel Q, for each of the 26 adjacent pixels q_i (step 103a). As many first difference values are calculated as there are adjacent pixels, and they are used to estimate whether the target pixel Q is white point noise.
 Next, white point determination is performed for the target pixel Q. Specifically, the counting means 32 compares each calculated first difference value with a first luminance difference threshold PKT stored in advance in the storage unit 23, and counts the number of adjacent pixels for which the first difference value is equal to or less than the first luminance difference threshold PKT as the first number of non-corresponding adjacent pixels (step 104a). That is, the number of adjacent pixels satisfying the condition (luminance value D of the target pixel Q) - (luminance value d_i of the adjacent pixel q_i) ≤ PKT is the first number of non-corresponding adjacent pixels.
 After counting the first number of non-corresponding adjacent pixels, the replacement means 33 determines whether that number is equal to or less than a first non-corresponding allowable number PFT stored in advance in the storage unit 23 (step 105a). If the first number of non-corresponding adjacent pixels is equal to or less than the first non-corresponding allowable number PFT, the target pixel Q is judged to be white point noise.
 On the other hand, if the first number of non-corresponding adjacent pixels is not equal to or less than the first non-corresponding allowable number PFT, the calculation means 31 calculates a second difference value, obtained by subtracting the luminance value D of the target pixel Q from the luminance value d_i of each adjacent pixel q_i, for each of the 26 adjacent pixels q_i (step 103b). Like the first difference values, as many second difference values are calculated as there are adjacent pixels, and they are used to estimate whether the target pixel Q is black point noise.
 Next, black point determination is performed for the target pixel Q. Specifically, the counting means 32 compares each calculated second difference value with a second luminance difference threshold HLT stored in advance in the storage unit 23, and counts the number of adjacent pixels for which the second difference value is equal to or less than the second luminance difference threshold HLT as the second number of non-corresponding adjacent pixels (step 104b). That is, the number of adjacent pixels satisfying the condition (luminance value d_i of the adjacent pixel q_i) - (luminance value D of the target pixel Q) ≤ HLT is the second number of non-corresponding adjacent pixels.
 After counting the second number of non-corresponding adjacent pixels, the replacement means 33 determines whether that number is equal to or less than a second non-corresponding allowable number HFT stored in advance in the storage unit 23 (step 105b). If the second number of non-corresponding adjacent pixels is equal to or less than the second non-corresponding allowable number HFT, the target pixel Q is judged to be black point noise.
 When, as a result of the determinations in steps 105a and 105b, the target pixel Q is judged to be white point noise (the first number of non-corresponding adjacent pixels is equal to or less than the first non-corresponding allowable number PFT) or black point noise (the second number of non-corresponding adjacent pixels is equal to or less than the second non-corresponding allowable number HFT), the replacement means 33 calculates the average of the luminance values d_i of the adjacent pixels q_i (step 106). It then generates an image in which the luminance value of the target pixel Q has been replaced with the calculated average value (step 107) and stores it in the storage unit 23. If, as a result of the determination in step 105a, the first number of non-corresponding adjacent pixels is not equal to or less than the first non-corresponding allowable number PFT and, as a result of the determination in step 105b, the second number of non-corresponding adjacent pixels is not equal to or less than the second non-corresponding allowable number HFT, the luminance value of the target pixel Q is left as it is without being replaced with the average value.
 The above processing is repeated in the processing target image F_T while setting every pixel in turn, for example starting from the pixel at the upper left corner, as the target pixel, so that an image in which noise removal processing has been performed over the entire processing target image F_T is finally generated. When the above noise removal processing has been executed on the t vascular tomographic images F_N constituting the three-dimensional image data, the three-dimensional image data G_1 after the noise removal processing is stored in the storage unit 23. It is not necessary to process every vascular tomographic image; an arbitrary number of vascular tomographic images may be selected according to the type of three-dimensional image data to be processed, the purpose of the noise removal, and so on.
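 As a minimal sketch of the white point / black point decision for a single target pixel, assuming the noise removal is applied to a volume G indexed (y, x, z) as above and that the thresholds PKT and HLT and the allowable numbers PFT and HFT are supplied by the caller (function and variable names are illustrative, not those of the actual program):

    import numpy as np

    def denoise_pixel(G, y, x, z, PKT, HLT, PFT, HFT):
        """Return the (possibly replaced) luminance value of the pixel at (y, x, z).

        PKT / HLT are the first / second luminance difference thresholds and
        PFT / HFT the first / second non-corresponding allowable numbers.
        """
        D = G[y, x, z]

        # Collect the (up to 26) adjacent pixels in the 3 x 3 x 3 neighbourhood,
        # clipped at the volume borders as described above for edge pixels.
        neighbours = []
        for yy in range(max(y - 1, 0), min(y + 2, G.shape[0])):
            for xx in range(max(x - 1, 0), min(x + 2, G.shape[1])):
                for zz in range(max(z - 1, 0), min(z + 2, G.shape[2])):
                    if (yy, xx, zz) != (y, x, z):
                        neighbours.append(G[yy, xx, zz])
        neighbours = np.asarray(neighbours)

        # White point test (steps 103a-105a): count neighbours with D - d_i <= PKT.
        if np.count_nonzero(D - neighbours <= PKT) <= PFT:
            return float(neighbours.mean())   # judged white point noise -> replace

        # Black point test (steps 103b-105b): count neighbours with d_i - D <= HLT.
        if np.count_nonzero(neighbours - D <= HLT) <= HFT:
            return float(neighbours.mean())   # judged black point noise -> replace

        return float(D)                       # otherwise the value is kept as is

 A full pass would evaluate this for every pixel of each selected tomogram and write the results into a copy of the data, so that each decision is made on the original luminance values, corresponding to repeating steps 101 to 107 over the whole image.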
 As described above, the noise removal processing in the present embodiment compares a target pixel in the image to be processed with the adjacent pixels surrounding it, and replaces the target pixel with the average of the adjacent pixels when the luminance value of the target pixel is brighter, or darker, than the adjacent pixels by more than a predetermined amount. If the luminance difference required between the target pixel and the adjacent pixels used for the replacement is made larger, that is, if the first and second luminance difference thresholds described above are set larger, the number of target pixels that are replaced decreases; conversely, if they are set smaller, the number of target pixels that are replaced increases.
 In the replacement determination, the strictest criterion can also be used, namely whether the target pixel is brighter than all of the adjacent pixels by at least the predetermined amount, or darker than all of the adjacent pixels by at least the predetermined amount. In this case, by setting the first and second non-corresponding allowable numbers described above to 0, the luminance value of the target pixel is replaced with the average of the luminance values of the adjacent pixels only when the target pixel is brighter than all of the adjacent pixels by at least the predetermined amount or darker than all of the adjacent pixels by at least the predetermined amount. The smaller the first and second non-corresponding allowable numbers, the fewer target pixels are replaced with the average value; as the first and second non-corresponding allowable numbers are increased, the number of replaced target pixels increases.
 The first luminance difference threshold and the second luminance difference threshold may be the same value or may be set to different values individually. Likewise, the first non-corresponding allowable number and the second non-corresponding allowable number may be the same value or may be set to different values individually. For these condition values stored in advance in the storage unit 23, optimal values can be determined in advance according to the quality of the image and the properties (shape, brightness, size, etc.) of the structures to be extracted, and then applied.
 In particular, if white point noise is mixed into the gaps between blood vessels, it degrades the blood vessel extraction result; by performing the noise removal processing described above, a target pixel that is an isolated high-luminance pixel with no bright pixels around it can be judged to be a white point noise pixel and replaced with the average of the adjacent pixels. If the first non-corresponding allowable number is increased, a target pixel can be judged to be a white point noise pixel and replaced with the average of its adjacent pixels even when there are a few high-luminance pixels around it, which enhances the noise removal effect and improves the black level of the background; on the other hand, small blood vessels become harder to extract.
 Although both white point noise and black point noise are removed in the noise removal processing described above, it is not always necessary to remove both; either white point noise or black point noise may be removed depending on the type of three-dimensional image data to be processed, the purpose of the noise removal, and so on.
 Next, image processing for smoothing the three-dimensional image data G_1 after the noise removal processing is performed (step 200). First, as shown in FIG. 7(a), the first filter means 34 sets one pixel of interest P in the three-dimensional image data G_1 (step 201), and sets the total of eight pixels located around the pixel of interest P on the XY plane containing the pixel of interest P as peripheral pixels p_i (i = 1 to 8) (step 202). Note that FIGS. 7(a) and 7(b) schematically show only a portion cut out of the three-dimensional image data G_1, namely a total of 45 pixels: 3 pixels in the x direction, 3 pixels in the y direction and 5 pixels in the z direction.
 Next, the first filter means 34 executes Gaussian blur processing on the nine pixels consisting of the pixel of interest P and the eight peripheral pixels p_i, and replaces the luminance value of the pixel of interest P (step 203). Gaussian blur processing is a filtering process that averages pixel values while weighting neighboring pixel values according to their distance from the pixel of interest. Specifically, the luminance value of the pixel of interest P and the luminance values e_i (i = 1 to 8) of the eight peripheral pixels p_i are obtained from the original image data, the luminance values of the pixel of interest P and the eight peripheral pixels p_i are weighted using a Gaussian distribution function such as that shown in Equation 1 below so that the weight is larger the closer a pixel is to the pixel of interest P and smaller the farther it is from the pixel of interest P, and the weighted average is taken as the luminance value of the pixel of interest P.
[Equation 1] (the Gaussian distribution function used for the weighting)
 For example, when the nine pixels consisting of the pixel of interest P and the eight peripheral pixels p_i are processed, the following Equation 2 can be used as the kernel K for the weighting.
[Equation 2]
    K = (1/50) × | 1   6   1 |
                 | 6  22   6 |
                 | 1   6   1 |
 When the weighting is performed at the rate corresponding to each position of the peripheral pixels p_i using the kernel K described above, for example, the luminance values e_1, e_3, e_6 and e_8 of the peripheral pixels p_1, p_3, p_6 and p_8 shown in FIG. 7 are given a weight of 1/50, the luminance values e_2, e_4, e_5 and e_7 of the peripheral pixels p_2, p_4, p_5 and p_7 are given a weight of 6/50, and the luminance value of the pixel of interest P is given a weight of 22/50; the average value is calculated with these weights, and this average value is taken as the luminance value of the pixel of interest P.
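 As a minimal sketch of this in-plane Gaussian blur for one pixel of interest, assuming the noise-removed volume G_1 is held as a NumPy array G1 indexed (y, x, z) and using the 3 × 3 kernel with the weights 1/50, 6/50 and 22/50 given above (names are illustrative):

    import numpy as np

    # 3 x 3 kernel K of Equation 2: corners 1/50, edges 6/50, centre 22/50.
    K = np.array([[1.0,  6.0, 1.0],
                  [6.0, 22.0, 6.0],
                  [1.0,  6.0, 1.0]]) / 50.0

    def gaussian_blur_xy(G1, y, x, z):
        """Weighted average of the pixel of interest P and its 8 in-plane neighbours.

        Interior pixels only, for brevity; edge pixels would use fewer peripheral
        pixels as described in the text.
        """
        patch = G1[y - 1:y + 2, x - 1:x + 2, z]   # 3 x 3 patch around P in the xy plane
        return float(np.sum(K * patch))           # the kernel weights sum to 1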
 Next, the second filter means 35 sets the five pixels centered on the pixel of interest P and continuous in the Z direction, namely the pixel of interest P and the pixels C_i (i = 1 to 4) shown in FIG. 7, as target pixels (step 204), executes moving average processing on the pixel of interest P and the pixels C_i (i = 1 to 4), and replaces the luminance value of the pixel of interest P (step 205). Specifically, the luminance values of the pixel of interest P and the pixels C_i (i = 1 to 4) are obtained from the original image data, these luminance values are simply averaged, and the result is taken as the luminance value of the pixel of interest P.
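 A matching sketch of the depth-direction moving average for one pixel of interest, again assuming a (y, x, z) volume, with the luminance values taken from the pre-smoothing data as in step 205; the window half-width of 2 gives the five-pixel range P and C_1 to C_4 described above:

    import numpy as np

    def moving_average_z(G1, y, x, z, half_window=2):
        """Simple average over the pixels centred on P along the z (depth) axis.

        With half_window = 2 the five pixels P and C_1 to C_4 are averaged; the
        window is clipped at the top and bottom of the volume.
        """
        z0 = max(z - half_window, 0)
        z1 = min(z + half_window + 1, G1.shape[2])
        return float(G1[y, x, z0:z1].mean())

 Applying the two replacement steps to every pixel in turn, as in steps 201 to 205, and writing the results into a fresh array yields the smoothed volume G_2.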
 The above processing is repeated in the three-dimensional image data G_1 while setting every pixel in turn as the pixel of interest, and the three-dimensional image data G_2, in which smoothing processing has finally been performed over the whole of the three-dimensional image data G_1, is generated and stored in the storage unit 23. It is not always necessary to set every pixel of the three-dimensional image data G_1 as the pixel of interest and apply the smoothing processing; the range to which the smoothing processing is applied can be set arbitrarily according to the type of three-dimensional image data to be processed, the purpose of the smoothing, and so on.
 In this way, by processing the three-dimensional image data G_1 with two different filtering processes, Gaussian blur processing in the in-plane direction (xy direction) and moving average processing in the depth direction (z direction), a three-dimensional image and a front image suitable for diagnosis can be generated.
 In a tomographic image, the resolution in the depth direction is higher than in the in-plane direction, so applying moving average processing in the depth direction improves smoothness and lowers the black level of the background without impairing the connectivity of the blood vessels. On the other hand, if moving average processing is applied in the in-plane direction, the contrast drops and the connectivity of the blood vessels deteriorates; by adopting Gaussian blur processing in the in-plane direction, data on minute structures can be prevented from being buried.
 Note that Gaussian blur processing requires far more computation and takes more time than moving average processing, so applying Gaussian blur processing in all directions is not preferable from the standpoint of computational load and computation time. Therefore, as described later, when j xy-plane tomographic images H_L (L = 1, 2, ..., j) continuous in the depth direction (z direction) are generated from the three-dimensional image data G_2 after the smoothing processing and the j xy-plane tomographic images H_L are layered to generate a single front image E, executing the two different filtering processes, Gaussian blur processing in the in-plane direction and moving average processing in the depth direction, makes it possible to improve smoothness and lower the black level of the background without impairing the connectivity of the blood vessels, and to generate a three-dimensional image and a front image suitable for diagnosis with a realistic amount of computation in a realistic computation time. In particular, when the front image is generated after performing the smoothing processing on the three-dimensional image data that has undergone the noise removal processing, as in the present embodiment, a front image with improved contrast and extremely good blood vessel visibility can be obtained.
 Note that when the pixel of interest P is set at an edge of the three-dimensional image data G_1, eight peripheral pixels are not necessarily set; in that case, up to eight peripheral pixels may be set as appropriate according to the position of the pixel of interest.
 The kernel used in the Gaussian blur processing by the first filter means 34 does not necessarily have to be the combination of rates described above over a 3 × 3 range; depending on the type of three-dimensional image data to be processed, the purpose of the smoothing, and so on, a kernel with a different combination of rates over, for example, a 5 × 5 range may be used. If Gaussian blur processing is performed using a kernel over a 5 × 5 range, a total of 24 pixels located around the pixel of interest on the xy plane containing the pixel of interest are set as the peripheral pixels.
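 As a short sketch of how such a larger kernel could be produced from a Gaussian distribution function (the function below and its sigma parameter are illustrative assumptions; the 3 × 3 weights of Equation 2 above are the ones actually given in this description):

    import numpy as np

    def gaussian_kernel(size=5, sigma=1.0):
        """Normalised size x size kernel sampled from a two-dimensional Gaussian."""
        ax = np.arange(size) - (size - 1) / 2.0
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        return k / k.sum()

    print(np.round(gaussian_kernel(5, 1.0), 3))   # 5 x 5 weights summing to 1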
 Furthermore, the number of target pixels subject to the moving average processing by the second filter means 35 does not necessarily have to be five; as long as the target pixels form a range centered on the pixel of interest, any desired number of target pixels, for example three, five or seven, may be used depending on the type of three-dimensional image data to be processed, the purpose of the smoothing, and so on.
 Next, the processing for generating the front image E from the three-dimensional image data G_2 after the smoothing processing will be described. The front image generation means 36 generates j xy-plane tomographic images H_L (L = 1, 2, ..., j) continuous in the z direction from the three-dimensional image data G_2 after the smoothing processing (step 300). Note that the xy-plane tomographic images H_L do not necessarily have to be generated from the whole of the three-dimensional image data G_2; a portion to be observed as a diagnostic target may be set arbitrarily within the three-dimensional image data G_2, and a plurality of xy-plane tomographic images H_L may be generated for that portion only.
 Next, as shown in FIG. 8, the j xy-plane tomographic images H_L are layered to generate a single front image E (step 400). At this time, the maximum luminance value among the luminance values of the pixels at the same position in each of the j xy-plane tomographic images H_L (the pixels marked in FIG. 8) could simply be selected and used as the luminance value of that position in the front image E; in the present embodiment, however, a predetermined number of pixels are selected, in descending order of luminance value, from the pixels at the same position in each of the j xy-plane tomographic images H_L, and the average of the luminance values of the selected predetermined number of pixels is used as the luminance value of that position in the front image E, thereby generating the front image E. This processing is shown schematically in FIG. 9.
 FIG. 9 schematically shows the processing in which the average of the luminance values of the selected predetermined number of pixels is used as the luminance value of the corresponding position in the front image E: (a) shows the case where there is a thick blood vessel at that position, (b) the case where there is a thin blood vessel at that position, and (c) the case where there is a noise bright spot at that position. For example, when there are twelve xy-plane tomographic images H_L, the state in which the twelve pixels at the same position are simply arranged in a row is shown on the left side of each of FIGS. 9(a), 9(b) and 9(c), and the state in which they are rearranged in descending order of luminance value is shown on the right side of each. Of the twelve pixels rearranged in this way, the top five pixels in descending order of luminance value are selected, and the average of the luminance values of these five selected pixels is used as the luminance value of the corresponding position in the front image E, thereby generating the front image E.
 As described above, when the j xy-plane tomographic images H_L are layered to generate a single front image E, if, instead of simply taking the maximum luminance value among the pixels at the same position as the luminance value of that position in the front image E, a predetermined number of pixels are selected in descending order of luminance value and the average of their luminance values is used as the luminance value of that position in the front image E, then, for example, a thick blood vessel at that position, as shown in FIG. 9(a), is reflected in a large luminance value at that position in the front image E, so that the blood vessel can be clearly seen in the front image E. As shown in FIG. 9(b), a thin blood vessel at that position is likewise reflected in the calculated luminance value of that position in the front image E, so that the luminance level of small blood vessels is maintained in the front image E as well. As shown in FIG. 9(c), even if there is a noise bright spot at that position, if the luminance values of the other selected pixels are small, averaging makes the luminance value of that position in the front image E small, so that salt-and-pepper noise in the front image E can be reduced. This also makes it possible to lower the black level of the background and to generate a front image suitable for diagnosis.
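 As a minimal sketch of this layering step, assuming the j xy-plane tomographic images H_L are stacked into an array of shape (j, height, width) and that five pixels are selected per position as in the example of FIG. 9 (the array shapes and names are illustrative):

    import numpy as np

    def enface_top_k_mean(H_stack, k=5):
        """Generate the front image E from j xy-plane tomographic images.

        For every xy position, the k largest luminance values along the stack
        are averaged instead of simply taking the maximum value.
        """
        j = H_stack.shape[0]
        k = min(k, j)
        top_k = np.sort(H_stack, axis=0)[j - k:, :, :]   # k largest values per position
        return top_k.mean(axis=0)

    # Example with twelve slices, as in the description of FIG. 9.
    H_stack = np.random.rand(12, 320, 320).astype(np.float32)
    E = enface_top_k_mean(H_stack, k=5)
    print(E.shape)   # (320, 320)

 Setting k = 1 would reduce this to the simple maximum projection mentioned above, while larger k trades suppression of bright noise spots against the luminance level of small vessels.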
 The image processing apparatus, image processing method and image processing program according to the present invention have been described above with reference to the drawings, but the present invention is not limited to the above embodiment, and various modifications are possible.
 The present invention can be used as a method for generating a single front image by layering a plurality of tomographic images generated from three-dimensional image data.
DESCRIPTION OF SYMBOLS
10 Tomographic imaging apparatus
20 Image processing apparatus
21 Control unit
22 Tomographic image forming unit
23 Storage unit
24 Vascular tomographic image forming unit
25 Three-dimensional data construction unit
26 Display unit
27 Operation unit
30 Image processing unit
31 Calculation means
32 Counting means
33 Replacement means
34 First filter means
35 Second filter means
36 Front image generation means

Claims (7)

  1. An image processing apparatus comprising:
     smoothing means for executing smoothing processing on three-dimensional image data; and
     front image generation means for generating a plurality of tomographic images continuous in a predetermined direction from the three-dimensional image data after the smoothing processing, and generating a single front image by layering the plurality of tomographic images,
     wherein the smoothing means comprises:
     first filter means for executing Gaussian blur processing on a pixel of interest and a plurality of peripheral pixels located around the pixel of interest on a plane orthogonal to the predetermined direction, and replacing the luminance value of the pixel of interest; and
     second filter means for executing moving average processing on a plurality of target pixels in a desired range centered on the pixel of interest and continuous in the predetermined direction, and replacing the luminance value of the pixel of interest.

  2. The image processing apparatus according to claim 1, wherein the front image generation means generates the front image by selecting a predetermined number of pixels, in descending order of luminance value, from a plurality of pixels at the same position in each of the plurality of tomographic images, and using the average of the luminance values of the selected predetermined number of pixels as the luminance value of that position in the front image.

  3. The image processing apparatus according to claim 1 or 2, further comprising noise removal means for executing noise removal processing on the three-dimensional image data before the smoothing processing.

  4. An image processing method comprising:
     a smoothing step of executing smoothing processing on three-dimensional image data; and
     a front image generation step of generating a plurality of tomographic images continuous in a predetermined direction from the three-dimensional image data after the smoothing processing, and generating a single front image by layering the plurality of tomographic images,
     wherein, in the smoothing step,
     Gaussian blur processing is executed on a pixel of interest and a plurality of peripheral pixels located around the pixel of interest on a plane orthogonal to the predetermined direction to replace the luminance value of the pixel of interest, and
     moving average processing is executed on a plurality of target pixels in a desired range centered on the pixel of interest and continuous in the predetermined direction to replace the luminance value of the pixel of interest.

  5. The image processing method according to claim 4, wherein, in the front image generation step, the front image is generated by selecting a predetermined number of pixels, in descending order of luminance value, from a plurality of pixels at the same position in each of the plurality of tomographic images, and using the average of the luminance values of the selected predetermined number of pixels as the luminance value of that position in the front image.

  6. The image processing method according to claim 4 or 5, further comprising a noise removal step of executing noise removal processing on the three-dimensional image data before the smoothing processing.

  7. An image processing program for causing a computer to function as the image processing apparatus according to any one of claims 1 to 3, or for causing a computer to execute the image processing method according to any one of claims 4 to 6.
PCT/JP2019/004336 2018-02-08 2019-02-07 Image processing device, image processing method, and image processing program WO2019156139A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019570789A JPWO2019156139A1 (en) 2018-02-08 2019-02-07 Image processing device, image processing method and image processing program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-021451 2018-02-08
JP2018021451 2018-02-08

Publications (1)

Publication Number Publication Date
WO2019156139A1 true WO2019156139A1 (en) 2019-08-15

Family

ID=67549480

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/004336 WO2019156139A1 (en) 2018-02-08 2019-02-07 Image processing device, image processing method, and image processing program

Country Status (2)

Country Link
JP (1) JPWO2019156139A1 (en)
WO (1) WO2019156139A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006123729A1 (en) * 2005-05-19 2006-11-23 Hitachi Medical Corporation Ultrasonographic device and image processing method thereof
JP2013059551A (en) * 2011-09-14 2013-04-04 Topcon Corp Fundus observation apparatus
WO2014203901A1 (en) * 2013-06-19 2014-12-24 株式会社トプコン Ophthalmological imaging device and ophthalmological image display device
JP2016202900A (en) * 2015-04-15 2016-12-08 株式会社トプコン Oct angiography calculation with optimized signal processing
JP2017042443A (en) * 2015-08-27 2017-03-02 キヤノン株式会社 Ophthalmologic apparatus, information processing method, and program
JP2018079208A (en) * 2016-11-18 2018-05-24 キヤノン株式会社 Image processing device, image processing method, and program
JP2018153611A (en) * 2017-03-17 2018-10-04 キヤノン株式会社 Information processor, image generation method and program

Also Published As

Publication number Publication date
JPWO2019156139A1 (en) 2021-01-28

Similar Documents

Publication Publication Date Title
US9872614B2 (en) Image processing apparatus, method for image processing, image pickup system, and computer-readable storage medium
JP5025715B2 (en) Tomographic imaging apparatus, image processing apparatus, image processing system, control method and program for image processing apparatus
US8687863B2 (en) Image processing apparatus, control method thereof and computer program
JP5331797B2 (en) Medical diagnostic device and method for improving image quality of medical diagnostic device
JP6408916B2 (en) Image processing apparatus, image processing method, image processing program, and recording medium storing the program
JP6608138B2 (en) Image processing apparatus, image processing method, image processing program, and recording medium storing the program
JP2021176101A (en) Image improvement method and system for low coherence interferometry
JP7284103B2 (en) Image processing device, image processing method and image processing program
US11972544B2 (en) Method and apparatus for optical coherence tomography angiography
EP3216387B1 (en) Method and system for motion artefacts removal in optical coherence tomograpy
WO2019156139A1 (en) Image processing device, image processing method, and image processing program
JP6716197B2 (en) Image processing apparatus and X-ray diagnostic apparatus
WO2018074459A1 (en) Image processing device, image processing method, and image processing program
JP2016202319A (en) Medical image processing device, medical image processing method, and medical image processing program
JPWO2020003524A1 (en) Ophthalmic image processing device and OCT device
JP2020195782A (en) Ophthalmologic image processing program and oct device
JP7007125B2 (en) Ophthalmology information processing equipment and ophthalmology imaging equipment
JP6647305B2 (en) Image processing apparatus, image processing method, and image processing program
JP7233792B2 (en) Diagnostic imaging device, diagnostic imaging method, program, and method for generating training data for machine learning
JP2012176291A (en) Tomographic imaging apparatus, image processing apparatus, image processing system, and method and program for controlling image processing apparatus
EP4000502B1 (en) Volumetric oct image data processing
Kumar et al. 3D visualization and mapping of choroid thickness based on optical coherence tomography: a step-by-step geometric approach
JP2021049205A (en) Image processing device and image processing method
JP2021183293A (en) Ophthalmologic information processing device and ophthalmologic imaging device
JP2016179348A (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19751187; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2019570789; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19751187; Country of ref document: EP; Kind code of ref document: A1)