WO2021215357A1 - Image processing device, image display system, image processing method, and program - Google Patents

Image processing device, image display system, image processing method, and program

Info

Publication number
WO2021215357A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
image
medical image
processor
grid
Prior art date
Application number
PCT/JP2021/015671
Other languages
English (en)
Japanese (ja)
Inventor
佳児 中村
Original Assignee
FUJIFILM Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJIFILM Corporation
Priority to JP2022517016A priority Critical patent/JP7430249B2/ja
Publication of WO2021215357A1 publication Critical patent/WO2021215357A1/fr

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03Computed tomography [CT]

Definitions

  • the present invention relates to an image processing device, an image display system, an image processing method and a program.
  • CT is an abbreviation for Computed Tomography.
  • Patent Document 1 describes a medical image processing system that analyzes a medical image, generates diagnostic support information based on the analysis result, and displays the diagnostic support information using a display device.
  • the system described in the document produces diagnostic support images suitable for referencing the diagnostic support information.
  • the diagnostic support image used in the system is overlaid with a scale and evenly spaced grid patterns. This makes it easier to recognize the position and size of structures in the diagnostic support image.
  • Patent Document 2 describes a medical imaging system that irradiates a specific part of a patient as a subject and generates a still image of the subject.
  • the system described in the document performs a reduction process on a medical image to generate a preview image.
  • the system divides the irradiation field of the preview image into a plurality of small areas, and extracts the feature amount from the signal value of each small area.
  • Patent Document 3 describes an ultrasonic diagnostic apparatus that generates an ultrasonic image of a subject.
  • the apparatus described in the same document superimposes a color Doppler image of a region of interest on a B-mode tomographic image and displays it using a display unit.
  • the apparatus determines whether or not the pixel is included in the region of interest based on the pixel value of the designated pixel.
  • Patent Document 4 describes an image display device that reads a CT image taken by using an image capturing device, generates various images for medical diagnosis, and displays the generated images using a screen.
  • the document discloses voxel data obtained from CT images.
  • the bounding box can be displayed for each isolated area, but when displaying a plurality of bounding boxes on one screen, the screen can be difficult to see.
  • the judgment of the extent of the lesion area may differ from doctor to doctor.
  • CAD is an abbreviation for Computer-Aided Diagnosis.
  • CAD has the original purpose of suppressing oversight of lesions.
  • CAD shows the area of the lesion to the doctor. Even if CAD is introduced for the purpose of suppressing lesion oversight, it may come to be used beyond that purpose, and there is a possibility that the diagnosis will be biased.
  • the grid pattern described in Patent Document 1 is used when recognizing the position and size of a structure, and the visibility of the CAD detection result may be lowered depending on the mode of the grid pattern.
  • the grid illustrated in FIG. 5A of Patent Document 2 is a small region obtained by dividing the preview image into a 10 × 10 matrix.
  • the small area is a unit for calculating the feature amount and is not related to the display of the CAD detection result.
  • the grid illustrated in FIG. 4 of Patent Document 3 is a unit of processing when extracting a region of interest, and is not related to the display of CAD detection results.
  • Patent Document 4 does not describe or suggest the display of CAD detection results.
  • the present invention has been made in view of such circumstances, and its purpose is to provide an image processing apparatus, an image display system, an image processing method, and a program capable of displaying a detection result without impairing the reliability of detection of a feature region.
  • the image processing apparatus according to the present disclosure is an image processing apparatus including one or more processors. The processor acquires a medical image obtained by photographing a subject, detects a feature region from the medical image, sets in the medical image a display grid in which display unit cells are arranged, each display unit cell having a size that is an integral multiple (two or more) of the pixels constituting the medical image and that exceeds the size of the processing unit used in detecting the feature region, and generates a display signal in which a display frame having one or more display unit cells corresponding to the size of the feature region is superimposed and displayed on the feature region at a position corresponding to the position of the feature region.
  • according to this aspect, a display frame based on display unit cells, each having a size that is an integral multiple (two or more) of the pixels constituting the medical image and exceeding the size of the processing unit of the medical image, is superimposed on the feature region detected from the medical image.
  • Characteristic areas may include lung nodules, fractures, bleeding and cerebral infarction.
  • the feature area may include lesions. Multiple feature regions can be detected from a single medical image.
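The disclosure leaves the detection method open (a known method can be applied). As one illustrative possibility, a connected-component labeling pass over a binary lesion mask yields multiple feature regions from a single image; the function name and the toy mask below are hypothetical stand-ins, not the patent's implementation.

```python
import numpy as np

def detect_feature_regions(mask):
    """Label the 4-connected components of a binary mask; each component
    corresponds to one detected feature region."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and labels[y, x] == 0:
                count += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy, cx] and labels[cy, cx] == 0:
                        labels[cy, cx] = count
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return labels, count

# Toy 3 x 4 mask containing two separate regions.
mask = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=bool)
labels, n = detect_feature_regions(mask)  # n == 2
```

Each nonzero label in `labels` then identifies one feature region, so several regions can be handled independently downstream.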
  • the processor generates a display signal for displaying the outline of the display frame in a mode different from the outlines of the display unit cells.
  • the display frame can be conspicuous on the screen on which the medical image is displayed.
  • the processor generates a display signal for displaying the outline of the display frame outside the outlines of the display unit cells at the edge of the feature region.
  • the display frame can be made to stand out with respect to the display grid.
  • the processor generates a display signal that hides the display unit cell.
  • the display frame can be made conspicuous.
  • the processor generates a mask image corresponding to the feature area and generates a display signal representing the mask image.
  • a mask image is used together with the display frame. This can improve the visibility of the feature area.
  • the processor sets the size of the display unit cell according to the type of the feature area.
  • a display frame corresponding to the type of the feature area can be displayed.
  • the processor uses the center of gravity of the medical image and the center of gravity of the display grid to align the medical image and the display grid.
  • according to this aspect, the medical image and the display grid can be aligned using the center of gravity of the medical image and the center of gravity of the display grid.
  • the processor uses the center of gravity of the subject and the center of gravity of the display grid to align the medical image and the display grid.
  • according to this aspect, the medical image and the display grid can be aligned using the center of gravity of the subject and the center of gravity of the display grid.
  • the processor sets a display grid in which two-dimensional display unit grids are arranged in a two-dimensional manner.
  • the visibility of the feature region can be improved in a two-dimensional medical image such as a tomographic image.
  • the processor sets a display grid in which three-dimensional display unit grids are arranged in a three-dimensional manner.
  • the visibility of the feature region can be improved in a three-dimensional medical image.
  • the processor detects the feature area as a polygonal shape.
  • the display frame can be superimposed and displayed on the feature area of the polygonal shape.
  • the image display system includes an image processing device including one or more processors, and a display that receives a display image signal transmitted from the image processing device and displays an image represented by the display image signal.
  • the processor acquires a medical image obtained by photographing a subject, detects a feature region from the medical image, sets in the medical image a display grid in which display unit cells are arranged, each having a size that is an integral multiple (two or more) of the pixels constituting the medical image and exceeding the size of the processing unit used in detecting the feature region, and generates a display signal in which a display frame having one or more display unit cells corresponding to the size of the feature region is superimposed on the medical image at a position corresponding to the position of the feature region; the display displays the image represented by that display signal.
  • the image processing method according to the present disclosure acquires a medical image obtained by photographing a subject, detects a feature region from the medical image, sets in the medical image a display grid in which display unit cells are arranged, each having a size that is an integral multiple (two or more) of the pixels constituting the medical image and exceeding the size of the processing unit used in detecting the feature region, and superimposes on the medical image, at a position corresponding to the position of the feature region, a display frame having one or more display unit cells corresponding to the size of the feature region.
  • the program according to the present disclosure realizes a function of acquiring a medical image obtained by photographing a subject, a function of detecting a feature region from the medical image, a function of setting in the medical image a display grid of display unit cells whose size is an integral multiple (two or more) of the pixels constituting the medical image and exceeds the size of the processing unit in the detection of the feature region, and a function of generating a display signal in which a display frame is superimposed on the feature region.
  • according to the present disclosure, a display frame having one or more display unit cells, each with a size that is an integral multiple (two or more) of the pixels constituting the medical image and exceeding the size of the processing unit of the medical image, is superimposed on the feature region detected from the medical image, whereby the detection result can be displayed without impairing the reliability of detection of the feature region.
  • FIG. 1 is a functional block diagram of a medical image display system according to an embodiment.
  • FIG. 2 is a flowchart showing the procedure of the medical image processing method according to the embodiment.
  • FIG. 3 is a schematic view of a display grid set on a medical image.
  • FIG. 4 is an explanatory diagram of an embodiment in which the medical image and the display grid are aligned using the center of gravity of the subject.
  • FIG. 5 is an explanatory diagram of an example of the correspondence between the center of gravity of the three-dimensional image and the center of gravity of the tomographic image.
  • FIG. 6 is an explanatory diagram of another example of the correspondence between the center of gravity of the three-dimensional image and the center of gravity of the tomographic image.
  • FIG. 7 is an explanatory diagram showing an example of a display grid setting screen.
  • FIG. 8 is a schematic view of a display frame superimposed on a medical image.
  • FIG. 9 is an explanatory diagram of a modified example of the display frame.
  • FIG. 10 is an explanatory diagram of the action and effect of the embodiment.
  • FIG. 1 is a functional block diagram of a medical image display system according to an embodiment.
  • the medical image display system 10 includes a medical image processing device 12, a medical image storage device 18, and a medical image viewer device 20.
  • the medical image processing device 12 is a terminal device used by a user in hospitals, examination laboratories, and the like.
  • the medical image processing device 12 may apply a computer.
  • the medical image processing device 12 includes a processor 14 and a memory 16.
  • the memory 16 includes a program memory in which a program including an instruction to be executed by the processor 14 is stored.
  • the memory 16 may include a data memory in which various types of data are stored.
  • the medical image processing device 12 executes a program read from the memory 16 by the processor 14 to realize various functions, including a medical image acquisition function, a feature region detection function, a display grid setting function, an alignment function, a display frame setting function, a display image signal generation function, and a display image signal transmission function.
  • the term image may include the meaning of an image signal representing an image and image data representing an image.
  • the term generation can be interpreted as synonymous with terms such as creation and production.
  • the term signal transmission may include the meaning of the output of a signal from the source of the signal.
  • the processor 14 described in the embodiment is an example of one or more processors.
  • the medical image acquisition function is a function of acquiring a medical image to be processed from the medical image storage device 18 or the like. Acquisition of a medical image may include conversion of the medical image and generation of a new medical image. For example, generating a two-dimensional tomographic image from raw data acquired from the CT imaging device 28 or the like can be included in the concept of acquiring a two-dimensional tomographic image.
  • the term tomographic image may include the concept of a cross-sectional image.
  • the feature area may include areas treated as features in medical images, such as lung nodules, fractures, subarachnoid hemorrhage and cerebral infarction. Multiple feature regions may be detected in one medical image. A known method can be applied to the extraction of the feature region.
  • the feature area detection function can generate a mask image of the feature area.
  • a known method can be applied to the generation of the mask image.
  • the processing unit for detecting the feature region may be a single pixel constituting the medical image, or a region including a plurality of pixels may be applied.
  • the medical image is shown as a tomographic image 100 in FIG. 3 and the like.
  • the feature region is illustrated in FIG. 3 and the like using reference numeral 102.
  • the display grid is a grid applied when defining a display frame to be superimposed and displayed on the feature area.
  • the display grid has a structure in which a plurality of display unit grids are arranged in a two-dimensional or three-dimensional manner. That is, a two-dimensional display grid can be applied to a two-dimensional image.
  • a three-dimensional display grid can be applied to a three-dimensional image.
  • the display grid setting function includes setting the number of display unit grids in each dimension and setting the size of the display unit grids.
  • the size of the display unit cell is an integer multiple (two or more) of the size of the pixels constituting the medical image, and a size exceeding the size of the processing unit for detecting the feature region is applied. That is, the display unit cell spans two or more pixels.
  • for a two-dimensional display unit cell, its area is treated as its size; for a three-dimensional display unit cell, its volume is treated as its size.
  • the alignment function is a function for aligning the subject and the display grid in the medical image.
  • An example of alignment is an embodiment in which the position of the center of gravity of the subject and the position of the center of gravity of the display grid are aligned.
  • the display frame is a frame that surrounds the feature area and is superimposed on the medical image. Enclosing the feature area here may mean that the feature area is included inside the display frame, or the display frame may intersect with the outer edge of the feature area.
  • a two-dimensional shape is applied to the display frame. Further, in the case of a three-dimensional display grid, a three-dimensional shape is applied to the display frame.
  • the display image signal generation function is a function of generating a display image signal representing a display image to be displayed using the display 22.
  • the display image signal includes a display image signal representing a medical image and a display image signal representing a display frame.
  • the display image signal may include a display image signal representing a feature area such as a mask image and a display image signal representing a display grid.
  • the display image signal transmission function is a function of transmitting a display image signal representing an image to be displayed using the display 22 to the medical image viewer device 20.
  • the display 22 displays a medical image or the like based on the displayed image signal.
  • the display image signal described in the embodiment is an example of the display signal.
  • the medical image storage device 18 stores medical images to which incidental information specified by the DICOM standard is added.
  • the medical image may be raw data acquired by using a modality such as a CT imaging device 28 and an MRI imaging device 30 for photographing a subject, or may be volume data generated from the raw data.
  • the medical image storage device 18 may apply a large-capacity storage device.
  • DICOM is an abbreviation for Digital Imaging and Communication in Medicine.
  • the medical image viewer device 20 is used when the user observes a medical image.
  • the medical image viewer device 20 includes a display 22 and an input device 24.
  • the display 22 displays an image represented by a display image signal acquired from the medical image processing device 12.
  • the display 22 can display a medical image processed by the medical image processing device 12 and a medical image stored in the medical image storage device 18 based on a command of the medical image processing device 12.
  • the input device 24 transmits an input signal according to the user's operation to the medical image processing device 12.
  • the input device 24 may apply operating members such as a keyboard, mouse and joystick.
  • a touch panel type display 22 may be applied to integrally configure the display 22 and the input device 24.
  • the medical image display system 10 is communicably connected to a modality such as a CT imaging device 28 via a network 26.
  • LAN is an abbreviation for Local Area Network.
  • the network 26 may apply a premises LAN in a hospital or the like.
  • the network 26 may include an external network outside the hospital.
  • Modality may include PET devices, ultrasonic diagnostic devices, CR devices, and the like.
  • PET is an abbreviation for Positron Emission Tomography.
  • CR is an abbreviation for Computed Radiography.
  • FIG. 2 is a flowchart showing the procedure of the medical image processing method according to the embodiment.
  • the processor 14 acquires the medical image to be processed from the medical image storage device 18 and the like.
  • the medical image may be acquired as data in a format that can be processed by the processor 14, or may be acquired in an arbitrary format and then converted into a format that the processor 14 can process.
  • the process proceeds to the feature region detection step S12.
  • the processor 14 automatically detects the feature area from the acquired medical image. After the feature area detection step S12, the process proceeds to the display grid setting step S14. In the feature region detection step S12, the processor 14 may detect a feature region based on the designated feature type.
  • a medical image in which the feature region is automatically detected may be acquired.
  • in that case, instead of performing the feature region detection step S12, the processor 14 acquires information on the feature region that is incidental to the medical image.
  • the processor 14 sets the display grid for the medical image.
  • the user can set the number of display unit cells and the size of the display unit cells in each dimension by using the input device 24 shown in FIG. 1.
  • the order of the feature area detection step S12 and the display grid setting step S14 may be exchanged, or both may be performed in parallel. After the display grid setting step S14, the process proceeds to the alignment step S16.
  • the processor 14 aligns the subject and the display grid in the medical image.
  • an embodiment in which the center of gravity of the medical image and the center of gravity of the display grid are aligned can be applied.
  • the position of the center of gravity in a two-dimensional medical image such as a tomographic image can be defined using a two-dimensional Cartesian coordinate system.
  • the position of the center of gravity in a three-dimensional medical image can be defined using a three-dimensional Cartesian coordinate system.
  • the processor 14 sets the display frame according to the size and position of the feature area detected in the feature area detection step S12.
  • the processor 14 specifies one or more display unit cells that include the feature region, and sets the contour of the aggregate of the specified display unit cells as the display frame.
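The display-frame construction described above, identifying every display unit cell that contains part of the feature region and outlining their union, can be sketched as follows; the helper names and the toy mask are assumptions for illustration only.

```python
import numpy as np

def covering_cells(mask, cell_h, cell_w):
    """Return the (row, col) indices of every display unit cell that
    contains at least one pixel of the feature region mask."""
    ys, xs = np.nonzero(mask)
    return {(int(y) // cell_h, int(x) // cell_w) for y, x in zip(ys, xs)}

def cell_rect(cell, cell_h, cell_w):
    """Pixel-space rectangle (top, left, bottom, right) of one grid cell;
    the display frame is the outline of the union of such rectangles."""
    r, c = cell
    return (r * cell_h, c * cell_w, (r + 1) * cell_h, (c + 1) * cell_w)

# 10 x 10 image, 2 x 2 display grid of 5 x 5-pixel cells, small feature region.
mask = np.zeros((10, 10), dtype=bool)
mask[3:5, 3:7] = True
cells = covering_cells(mask, 5, 5)  # {(0, 0), (0, 1)}
```

Because the frame snaps to whole cells rather than following the detected contour pixel by pixel, the displayed frame does not over-assert the precise extent of the detection.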
  • the processor 14 can set a display frame for each feature area. Further, the processor 14 may set one display frame for one feature area. That is, the processor 14 may set one or more display frames for each of the plurality of feature areas. In the tomographic image, there may be a case where two or more feature regions that are separated and visually recognized are the same feature region.
  • the processor 14 can set one display frame for a plurality of feature areas having the same source. For example, when the type of the characteristic region is bleeding, one display frame can be set for the characteristic region corresponding to a plurality of bleedings having the same source of bleeding. After the display frame setting step S18, the process proceeds to the display image signal generation step S20.
  • In the display image signal generation step S20, the processor 14 generates a display image signal representing the display image to be displayed on the display 22 shown in FIG. 1. That is, the processor 14 generates the display image signal of the medical image and the display image signal of the display frame. The processor 14 may also generate the display image signal of the display grid and the display image signal of the display unit cells. After the display image signal generation step S20, the process proceeds to the display image signal transmission step S22.
  • the processor 14 transmits the display image signal generated in the display image signal generation step S20 to the medical image viewer device 20. After the display image signal transmission step S22, the processor 14 ends the procedure of the image processing method.
  • the medical image viewer device 20 uses the display 22 to display a display image on which a display frame surrounding the feature area is superimposed on the medical image.
  • the medical image viewer device 20 may use the display 22 to superimpose and display the mask image and the display grid of the feature region on the medical image.
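The flow of steps S10 through S22 can be summarized as a small orchestration sketch; every callable name here is a hypothetical placeholder for the corresponding step, not an API from the disclosure.

```python
def image_processing_method(acquire, detect, set_grid, align, set_frame,
                            render, transmit):
    """Orchestration of steps S10-S22; each argument is a hypothetical
    callable standing in for the corresponding step."""
    image = acquire()                                # S10: acquire medical image
    regions = detect(image)                          # S12: detect feature regions
    grid = align(image, set_grid(image))             # S14 + S16: set and align grid
    frames = [set_frame(r, grid) for r in regions]   # S18: set display frames
    signal = render(image, frames)                   # S20: generate display signal
    transmit(signal)                                 # S22: transmit to viewer
    return signal
```

As noted above, S12 and S14 are independent and could equally be swapped or run in parallel.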
  • FIG. 3 is a schematic view of a display grid set on a medical image.
  • the figure shows, as a medical image, a tomographic image 100 of the brain taken using the CT imaging device 28, in which a subarachnoid hemorrhage is detected as the feature region 102.
  • the mask image 103, which is the detection result of the feature region 102, is superimposed and displayed.
  • a polygonal shape can be applied to the feature area 102. That is, the planar shape of the contour of the feature region 102 in which the representative points of the pixels forming the edge of the feature region 102 are connected by using a line segment is a polygonal shape. The center of gravity of the pixel can be applied to the representative point of the pixel. Image processing such as smoothing may be performed on the contour of the feature region 102, the contour of the feature region 102 may be formed by using at least one of a curve and a line segment, and the feature region 102 may have an arbitrary shape.
  • the processor 14 shown in FIG. 1 sets, for the tomographic image 100 shown in FIG. 3, a display grid 110 that has the same size as the tomographic image 100, a division number of 5 × 5, and 25 display unit cells 112.
  • FIG. 3 illustrates a two-dimensional display grid 110 in which two-dimensional display unit grids 112 are arranged in a two-dimensional manner with respect to a tomographic image 100 which is a two-dimensional medical image.
  • a cell is an example of a two-dimensional display unit cell 112.
  • the number of divisions of the display grid 110 may be finer than 5 × 5, such as 7 × 7 or 9 × 9, or coarser than 5 × 5, such as 3 × 3.
  • the number of divisions of the display grid 110 may be 2 × 2 or more.
  • the ratio between the numbers of divisions of the display grid 110 does not have to be 1:1; for example, 3 × 4 or 9 × 16 may be used.
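A minimal sketch of deriving the display unit cell size from a division count, covering a 5 × 5 grid and a non-square 3 × 4 grid; the function name and constraint checks are illustrative assumptions, including the simplifying assumption that the divisions divide the image evenly.

```python
def cell_size(image_shape, divisions):
    """Pixel size (rows, cols) of one display unit cell when an image of
    `image_shape` is divided into `divisions` cells per dimension.
    Each cell must span an integer number of pixels, at least 2."""
    h, w = image_shape
    rows, cols = divisions
    if h % rows or w % cols:
        raise ValueError("divisions must divide the image size evenly")
    ch, cw = h // rows, w // cols
    if ch < 2 or cw < 2:
        raise ValueError("a display unit cell must span two or more pixels")
    return ch, cw

# A 500 x 500 image with a 5 x 5 grid, and a 480 x 640 image with 3 x 4.
assert cell_size((500, 500), (5, 5)) == (100, 100)
assert cell_size((480, 640), (3, 4)) == (160, 160)
```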
  • the display grid 110 may use any of the four corners as a reference for alignment with the tomographic image 100.
  • the upper left end position 111 is used as a reference for alignment.
  • the processor 14 shown in FIG. 1 can superimpose the upper left end position 101 of the tomographic image 100 and the upper left end position 111 of the display grid 110 to align the tomographic image 100 and the display grid 110.
  • the display grid 110 may have a shape that surrounds the entire subject, and the display grid 110 does not have to surround the entire tomographic image 100.
  • FIG. 4 is an explanatory diagram of an embodiment in which the medical image and the display grid are aligned using the center of gravity of the subject region.
  • the processor 14 shown in FIG. 1 extracts the subject region 104 from the tomographic image 100, calculates the center of gravity 106 of the subject region 104, and superimposes the center of gravity 114 of the display grid 110 on the center of gravity 106 of the subject region 104, whereby alignment between the display grid 110 and the tomographic image 100 can be performed.
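The centroid-based alignment of FIG. 4, extracting the subject region, computing its center of gravity, and placing the grid's center of gravity on it, might be sketched as follows, assuming a simple intensity threshold defines the subject region (the threshold and function names are assumptions).

```python
import numpy as np

def subject_centroid(image, threshold=0):
    """Center of gravity (row, col) of the subject region, taken as the
    pixels whose value exceeds `threshold`."""
    ys, xs = np.nonzero(image > threshold)
    return float(ys.mean()), float(xs.mean())

def grid_origin(centroid, grid_shape_px):
    """Top-left pixel offset that places the grid's center of gravity on
    the subject's center of gravity."""
    cy, cx = centroid
    gh, gw = grid_shape_px
    return cy - gh / 2, cx - gw / 2

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                       # square subject region
cy, cx = subject_centroid(img)            # (3.5, 3.5)
origin = grid_origin((cy, cx), (4, 4))    # (1.5, 1.5)
```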
  • FIG. 5 is an explanatory diagram showing the correspondence between the center of gravity of the three-dimensional image and the center of gravity of the tomographic image.
  • the thickness of the tomographic image 100A and the like is simplified and illustrated.
  • the voxels constituting the three-dimensional image 120 can be set as the three-dimensional display unit lattice 123.
  • the three-dimensional display grid 121 is configured by arranging the display unit grids 123 in a three-dimensional manner. The same applies to the three-dimensional image 120 shown in FIG.
  • the three-dimensional image 120 corresponding to the tomographic image 100 may be generated using raw data, or may be generated using the tomographic image 100A to the tomographic image 100E. The same applies to the tomographic image 100A to the tomographic image 100E shown in FIG.
  • FIG. 5 shows a tomographic image 100A, a tomographic image 100B, a tomographic image 100C, a tomographic image 100D, and a tomographic image 100E corresponding to the three-dimensional image 120.
  • the tomographic image 100A to the tomographic image 100E may be collectively referred to as the tomographic image 100.
  • the center of gravity 124 of the subject 122 in the three-dimensional image 120 is projected onto each tomographic image, and each center of gravity 106 is thereby defined.
  • the positions of the respective centers of gravity 106 are the same. That is, the alignment of the display grid 110 is collectively performed for each of the tomographic image 100A to the tomographic image 100E.
  • Reference numeral 126 indicates a line passing through the center of gravity 124 in the three-dimensional image 120.
  • Reference numeral 108 is a line passing through each center of gravity 106 from the tomographic image 100A to the tomographic image 100E.
  • the line 108 is orthogonal to each of the tomographic image 100A to the tomographic image 100E.
  • FIG. 6 is an explanatory diagram of another example of the correspondence between the center of gravity of the three-dimensional image and the center of gravity of the tomographic image.
  • the center of gravity 106A is defined based on the subject area 104A.
  • for each of the tomographic image 100B to the tomographic image 100E, the respective centers of gravity 106B, 106C, 106D, and 106E are defined based on the subject region 104B, the subject region 104C, the subject region 104D, and the subject region 104E.
  • Each of the tomographic image 100A to the tomographic image 100E shown in FIG. 6 is individually aligned with the display grid 110.
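The two alignment modes, one 3-D centroid projected onto every slice (FIG. 5) versus a centroid recomputed per slice (FIG. 6), can be contrasted in a short sketch; the function name and the toy volume are hypothetical.

```python
import numpy as np

def slice_grid_centroids(volume, per_slice=False):
    """In-plane centroid used to align the display grid on each slice of a
    (z, y, x) volume. per_slice=False projects the single 3-D centroid onto
    every slice (as in FIG. 5); per_slice=True recomputes it per slice
    (as in FIG. 6)."""
    if per_slice:
        result = []
        for sl in volume:
            ys, xs = np.nonzero(sl)
            result.append((float(ys.mean()), float(xs.mean())))
        return result
    zs, ys, xs = np.nonzero(volume)
    cy, cx = float(ys.mean()), float(xs.mean())
    return [(cy, cx)] * volume.shape[0]

vol = np.zeros((2, 4, 4))
vol[0, 0:2, 0:2] = 1.0   # subject in the top-left of slice 0
vol[1, 2:4, 2:4] = 1.0   # subject in the bottom-right of slice 1
```

With `per_slice=False` every slice shares one alignment, so the grid is placed collectively; with `per_slice=True` each tomographic image is aligned individually.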
  • FIG. 7 is an explanatory diagram showing an example of a display grid setting screen.
  • the setting screen 200 shown in FIG. 7 is displayed using the display 22 shown in FIG. 1.
  • the user can see the setting screen 200 displayed by using the display 22 and operate the input device 24 to set the parameters of the display grid.
  • the setting screen 200 includes a medical image display area 202 and a parameter display area 204.
  • a medical image is displayed in the medical image display area 202.
  • FIG. 7 shows a tomographic image 100 of the brain shown in FIG. 3 and the like as a medical image.
  • the center of gravity 106 of the subject region and the center of gravity 114 of the display grid 110 shown in FIG. 4 and the like are not shown.
  • the parameter display area 204 includes a reference display area 206, a division number display area 208, and a feature area display area 210.
  • the parameter display area 204 may include a button for switching between display and non-display of information to be displayed in the medical image display area 202.
  • in the reference display area 206, the reference for alignment between the tomographic image 100 and the display grid 110 is displayed.
  • the reference display area 206 can display information input by the user using the input device 24 shown in FIG. 1.
  • the processor 14 may apply the criteria input by the user to perform the alignment of the tomographic image 100 and the display grid 110.
  • the reference display area 206 displays the predetermined alignment reference.
  • the user may change the pre-defined alignment criteria using the input device 24.
  • the number of divisions display area 208 displays the number of divisions of the display grid 110.
  • FIG. 7 shows the number of divisions in the two-dimensional display grid 110.
  • the division number display area 208 can display information input by the user using the input device 24 shown in FIG.
  • the processor 14 may set the display grid 110 using the number of divisions input by the user.
  • the processor 14 may also set the display grid 110 using a value of the number of divisions specified in advance.
  • in that case, the predefined value of the number of divisions is displayed in the division number display area 208.
  • the user can change the predetermined number of divisions by using the input device 24.
  • the feature area display area 210 displays the type of the feature area 102.
  • the processor 14 displays the type of the feature area 102 to be detected in the feature area display area 210.
  • the processor 14 may set the number of divisions of the display grid 110 according to the type of the feature area 102. In other words, the processor 14 can set the size of the display unit cell 112 according to the type of the feature area 102.
  • the processor 14 can set a reference for alignment between the tomographic image 100 and the display grid 110 according to the type of the feature area 102, the number of divisions of the display grid 110, and the like.
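To make the idea of type-dependent grid parameters concrete, a table-driven choice of the number of divisions could be sketched as follows (the feature-type names and cell sizes are invented for illustration; the specification does not fix them):

```python
# Hypothetical mapping from feature-region type to display-unit-cell size
# (pixels per cell); the type names and values are illustrative only.
CELL_SIZE_BY_TYPE = {
    "small_lesion": 4,   # small findings -> fine cells
    "large_lesion": 16,  # larger findings -> coarse cells
}

def divisions_for(image_size, feature_type, table=CELL_SIZE_BY_TYPE):
    """Number of display-grid divisions along one axis of a square image."""
    cell = table[feature_type]
    return -(-image_size // cell)  # ceiling division: cover the whole image

print(divisions_for(512, "small_lesion"))  # -> 128
print(divisions_for(512, "large_lesion"))  # -> 32
```

A pull-down menu such as the one described for the parameter display area could simply select among the keys of such a table.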
  • the reference display area 206, the division number display area 208, and the feature area display area 210 may each use a pull-down menu for selecting an option from a plurality of predetermined options.
  • the alignment of the three-dimensional image 120 and the display grid 121 shown in FIGS. 5 and 6 can be performed in the same manner as the alignment of the two-dimensional tomographic image 100 and the two-dimensional display grid 110.
  • FIG. 8 is a schematic view of a display frame superimposed on a medical image.
  • the display frame 116 is superimposed on the tomographic image 100 shown in the figure.
  • the display frame 116 represents the outline of a set of a plurality of display unit grids 112 that include the feature area 102.
  • the display frame 116 may use the contours of all the display unit grids 112 that include the feature area 102.
  • the lines constituting the display frame 116 may have any thickness and any color.
  • the lines forming the display frame 116 may differ from the lines forming the display grid 110 and the lines forming the display unit grid 112.
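The set of display unit grids whose contours make up the display frame can be computed by integer division of the feature-region pixel coordinates (an illustrative numpy sketch; the function name and the cell indexing are assumptions, not from the specification):

```python
import numpy as np

def covering_cells(feature_mask, cell):
    """(row, col) indices of the display unit cells that contain at least
    one pixel of the feature region."""
    ys, xs = np.nonzero(feature_mask)
    return set(zip((ys // cell).tolist(), (xs // cell).tolist()))

# 10x10 image with 2-px cells; the feature occupies pixels (3,3)-(4,4),
# so it straddles four cells
m = np.zeros((10, 10), dtype=bool)
m[3:5, 3:5] = True
print(sorted(covering_cells(m, 2)))  # -> [(1, 1), (1, 2), (2, 1), (2, 2)]
```

The outer boundary of this cell set is what the display frame traces; a feature only a few pixels wide still produces at least one full cell.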
  • Hatching may be applied to the inside of the display frame 116.
  • a pattern such as dot hatching may be applied, or a fill of an arbitrary color may be applied.
  • a semi-transparent fill through which the tomographic image 100 remains visible is preferable.
  • the line representing the contour of the display grid 110 and the line representing the contour of the display unit grid 112 may be hidden.
  • a line representing the contour of a display unit grid 112 inside the display frame 116 may be displayed, and a line representing the contour of a display unit grid 112 outside the display frame 116 may be hidden. Further, when the display frame 116 is superimposed on the tomographic image 100, the mask image 103 may be hidden.
  • the processor 14 may have a function of switching between display and non-display of the mask image 103, the outline of the display grid 110, the display unit grid 112, and the like.
  • the processor 14 may switch between displaying and hiding the mask image 103 and the like based on the input of the user using the input device 24.
  • the processor 14 may use the display 22 to display a screen for switching between display and non-display of the mask image 103 and the like.
  • the three-dimensional display frame in the three-dimensional image 120 shown in FIGS. 5 and 6 can be set in the same manner as the two-dimensional display frame 116. It should be noted that the illustration of the three-dimensional display frame superimposed on the three-dimensional image 120 is omitted.
  • FIG. 9 is an explanatory diagram of a modified example of the display frame.
  • the contour of the display frame 116A shown in FIG. 9 is located outside the contour of the set of display unit grids 112 constituting the display frame 116A. This suppresses deterioration of the visibility of the feature region 102. Note that the feature region 102 is not shown in FIG. 9.
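One way to realize a contour located outside the covered cells, as in the modified example of FIG. 9, is to expand the cell-aligned rectangle by a small pixel margin (a hedged sketch; the margin value and the rectangle representation are illustrative assumptions):

```python
def expanded_frame(cells, cell, margin):
    """Pixel rectangle (top, left, bottom, right) around a set of
    (row, col) cells, enlarged by `margin` pixels on every side so the
    frame line sits outside the cells it covers."""
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    top = min(rows) * cell - margin
    left = min(cols) * cell - margin
    bottom = (max(rows) + 1) * cell + margin
    right = (max(cols) + 1) * cell + margin
    return top, left, bottom, right

# four 2-px cells spanning pixels 2..5, expanded by a 1-px margin
print(expanded_frame({(1, 1), (1, 2), (2, 1), (2, 2)}, 2, 1))  # -> (1, 1, 7, 7)
```

With `margin = 0` the frame coincides with the cell contours; a positive margin keeps the frame line from overlapping the feature region.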
  • FIG. 10 is an explanatory diagram of the action and effect of the embodiment.
  • in the tomographic image 300A, in which a bounding box 304 that collectively surrounds the plurality of feature regions 302 is applied to the tomographic image 300 showing the brain as the subject in FIG. 10, it is difficult to know at which positions inside the bounding box 304 the feature regions 302 exist.
  • in the tomographic image 300B, on which mask images 306 corresponding to the feature regions 302 are superimposed, it is difficult to emphasize the plurality of feature regions 302 comprehensively, and a feature region 302 as small as several pixels may be overlooked.
  • the medical image display system 10 sets the display grid 110 on the medical image such as the tomographic image 100 as shown in FIG.
  • in the display grid 110, display unit grids 112 that exceed the size of the processing unit used to detect the feature area 102 are arranged.
  • on the tomographic image 100, a display frame 116 that surrounds the feature area 102 and is composed of one or more display unit grids 112 is superimposed and displayed. Thereby, the detection result of the feature area 102 can be displayed without impairing the reliability of the detection of the feature area 102.
  • a mode different from the lines forming the display grid 110 and the lines forming the display unit grid 112 is applied to the display frame 116.
  • the thickness, color, and line type of the lines constituting the display frame 116 are different from those of the lines constituting the display grid 110 and the display unit grid 112. This makes the display frame 116 stand out more than the display grid 110 and the display unit grid 112.
  • the mask image 103 is superimposed on the tomographic image 100.
  • the display frame 116 and the mask image 103 can be used together, and the visibility of the feature area 102 can be improved.
  • the size of the display unit grid 112 is set according to the type of the feature area 102. As a result, a display frame 116 corresponding to the type of the feature area 102 can be displayed.
  • the center of gravity 106 of the tomographic image 100 and the center of gravity 114 of the display grid 110 are applied to the alignment of the tomographic image 100 and the display grid 110. As a result, accurate alignment between the tomographic image 100 and the display grid 110 can be performed.
  • a three-dimensional display grid 121 and a three-dimensional display frame can be set for the three-dimensional image 120. Thereby, the detection result of the feature region can be displayed without impairing the reliability of the detection of the feature region in the three-dimensional image 120.
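The cell computation extends directly to three dimensions for the display grid 121 (again a numpy sketch with assumed names; voxels are grouped into cubic cells for illustration):

```python
import numpy as np

def covering_cells_3d(feature_volume, cell):
    """(z, y, x) indices of the cubic display unit cells touched by a
    three-dimensional feature region."""
    zs, ys, xs = np.nonzero(feature_volume)
    return set(zip((zs // cell).tolist(),
                   (ys // cell).tolist(),
                   (xs // cell).tolist()))

v = np.zeros((4, 4, 4), dtype=bool)
v[1, 1, 1] = True  # a single-voxel feature
print(covering_cells_3d(v, 2))  # -> {(0, 0, 0)}
```

The outline of this three-dimensional cell set would correspond to the three-dimensional display frame whose illustration is omitted in the text.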
  • any shape can be applied to the feature region 102.
  • the display frame 116 can be superimposed on the feature area 102 having an arbitrary shape.
  • the hardware structure of the processing units that execute the processing of the medical image display system 10 and the medical image processing device 12 described in the above embodiments is realized by various processors.
  • Various processors include a CPU (Central Processing Unit), a PLD (Programmable Logic Device), an ASIC (Application Specific Integrated Circuit), and the like.
  • the CPU is a general-purpose processor that executes programs and functions as various processing units.
  • the PLD is a processor whose circuit configuration can be changed after manufacturing.
  • An example of PLD is FPGA (Field Programmable Gate Array).
  • An ASIC is a dedicated electric circuit having a circuit configuration specially designed to perform a specific process.
  • One processing unit may be composed of one of these various processors, or may be composed of two or more processors of the same type or different types.
  • one processing unit may be configured by using a plurality of FPGAs and the like.
  • One processing unit may be configured by combining one or more FPGAs and one or more CPUs.
  • a plurality of processing units may be configured by using one processor.
  • as a first example, one processor may be configured by combining one or more CPUs with software, and this processor may function as a plurality of processing units.
  • Such a form is represented by a computer such as a client terminal device and a server device.
  • as a second example, a processor that realizes the functions of the entire system, including the plurality of processing units, with a single IC chip may be used.
  • Such a form is typified by a system on chip (System On Chip) and the like.
  • IC is an abbreviation for Integrated Circuit.
  • the system-on-chip may be described as SoC by using the abbreviation of System On Chip.
  • the various processing units are configured by using one or more of the above-mentioned various processors as a hardware structure.
  • the hardware structure of various processors is, more specifically, an electric circuit (circuitry) in which circuit elements such as semiconductor elements are combined.
  • a program can be configured that causes a computer to realize the various functions of the medical image display system 10 and the medical image processing device 12 and each step of the image processing method described in the present specification.
  • that is, the program can be configured to cause the computer to realize the processing corresponding to the medical image acquisition function, the feature area detection function, the display grid setting function, the alignment function, the display frame setting function, the display image signal generation function, and the display image signal transmission function shown in FIG.
  • constituent elements can be appropriately changed, added, or deleted without departing from the gist of the present invention.
  • the present invention is not limited to the embodiments described above, and many modifications can be made by a person having ordinary knowledge in the art within the technical idea of the present invention.
  • the embodiments, modifications, and applications may be combined as appropriate.
10 Medical image display system
12 Medical image processing device
14 Processor
16 Memory
18 Medical image storage device
20 Medical image viewer device
22 Display
24 Input device
26 Network
28 CT imaging device
30 MRI imaging device
100, 100A, 100B, 100C, 100D, 100E Tomographic image
102 Feature area
103 Mask image
104, 104A, 104B, 104C, 104D, 104E Subject area
106, 106A, 106B, 106C, 106D, 106E Center of gravity
108 Line
110 Display grid
112 Display unit grid
114 Center of gravity
116, 116A Display frame
120 Three-dimensional image
121 Display grid
122 Subject
123 Display unit grid
124 Center of gravity
126 Line
200 Setting screen
202 Medical image display area
204 Parameter display area
206 Reference display area
208 Division number display area
210 Feature area display area
300, 300A, 300B, 300C Tomographic image
302 Feature area
304 Bounding box

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Engineering & Computer Science (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Physics & Mathematics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Provided are an image processing device, an image display system, an image processing method, and a program capable of displaying a detection result without impairing the reliability of detection of a feature region. The present invention: acquires a medical image (100) obtained by imaging a subject; detects a feature region (102) from the medical image; sets, in the medical image, a display grid (110) in which display unit grids (112) are arranged, each having a size that is an integer multiple, of at least two, of the pixels constituting the medical image and larger than a processing unit in the detection of the feature region; and displays a display frame (116), composed of one or more display unit grids corresponding to the size of the feature region, superimposed at a position corresponding to the position of the feature region.
PCT/JP2021/015671 2020-04-21 2021-04-16 Image processing device, image display system, image processing method, and program WO2021215357A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022517016A JP7430249B2 (ja) 2020-04-21 2021-04-16 画像処理装置、画像表示システム、画像処理方法及びプログラム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-075382 2020-04-21
JP2020075382 2020-04-21

Publications (1)

Publication Number Publication Date
WO2021215357A1 true WO2021215357A1 (fr) 2021-10-28

Family

ID=78269229

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/015671 WO2021215357A1 (fr) 2020-04-21 2021-04-16 Image processing device, image display system, image processing method, and program

Country Status (2)

Country Link
JP (1) JP7430249B2 (fr)
WO (1) WO2021215357A1 (fr)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11235336A (ja) * 1997-11-26 1999-08-31 General Electric Co <Ge> Computed tomography system
JP2004073488A (ja) * 2002-08-19 2004-03-11 Canon Inc Radiation image processing apparatus
JP2011120747A (ja) * 2009-12-11 2011-06-23 Fujifilm Corp Image display device, method, and program
JP2017215876A (ja) * 2016-06-01 2017-12-07 富士通株式会社 Similar image search program, similar image search method, and similar image search device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3501414B2 (ja) * 1994-03-17 2004-03-02 東芝医用システムエンジニアリング株式会社 磁気共鳴診断装置
CN101626726B (zh) 2007-02-02 2013-01-09 阿波罗医学影像技术控股有限公司 医学成像中病灶的识别与分析

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023243992A1 (fr) * 2022-06-13 2023-12-21 서울대학교병원 Apparatus and method for extracting medical information
CN116912213A (zh) * 2023-07-20 2023-10-20 中国人民解放军总医院第六医学中心 Medical DICOM image edge contour polygon detection algorithm and detection system
CN116912213B (zh) * 2023-07-20 2024-04-19 中国人民解放军总医院第六医学中心 Medical DICOM image edge contour polygon detection algorithm and detection system

Also Published As

Publication number Publication date
JP7430249B2 (ja) 2024-02-09
JPWO2021215357A1 (fr) 2021-10-28

Similar Documents

Publication Publication Date Title
JP6766045B2 (ja) Method for generating a synthetic mammogram from tomosynthesis data
JP5637928B2 (ja) Medical image display device
JP4891577B2 (ja) Medical image display device
JP2002336241A (ja) Three-dimensional image display device in a network environment
WO2021215357A1 (fr) Image processing device, image display system, image processing method, and program
US9569820B2 (en) Method and apparatus for image correction
JP7129869B2 (ja) Diseased region extraction device, method, and program
US20200202486A1 (en) Medical image processing apparatus, medical image processing method, and medical image processing program
KR20130018168A (ko) 의료 이미지 데이터 세트의 정합 품질의 시각화 방법 및 장치
JP6949535B2 (ja) Information processing device, information processing system, information processing method, and program
US20190236783A1 (en) Image processing apparatus, image processing method, and program
Douglas et al. Augmented reality and virtual reality: Initial successes in diagnostic radiology
US8933926B2 (en) Image processing apparatus, method, and program
JP2024038203A (ja) Image processing device, image display system, operation method of image processing device, and program
US10896501B2 (en) Rib developed image generation apparatus using a core line, method, and program
WO2022209298A1 (fr) Image processing device, image processing method, and program
US12053290B2 (en) Brain atlas creation apparatus, brain atlas creation method, and brain atlas creation program
JP6645904B2 (ja) Medical image display device and display program
US10796433B2 (en) Interpretation support apparatus, operation method thereof, and non-transitory computer readable medium
JP7170850B2 (ja) Pseudo-angiographic image generation device, method, and program
JP7394959B2 (ja) Medical image processing device, medical image processing method and program, and medical image display system
US12033366B2 (en) Matching apparatus, matching method, and matching program
US20230281761A1 (en) Image processing apparatus, image processing method, and non-transitory storage medium
US20230022549A1 (en) Image processing apparatus, method and program, learning apparatus, method and program, and derivation model
CN115546174B (zh) 图像处理方法、装置、计算设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21792527

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022517016

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21792527

Country of ref document: EP

Kind code of ref document: A1