WO2019087969A1 - Endoscope system, reporting method, and program - Google Patents

Endoscope system, reporting method, and program

Info

Publication number
WO2019087969A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
notification
feature
endoscope
unit
Prior art date
Application number
PCT/JP2018/039901
Other languages
French (fr)
Japanese (ja)
Inventor
加來 俊彦 (Toshihiko Kaku)
Original Assignee
富士フイルム株式会社 (FUJIFILM Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士フイルム株式会社 (FUJIFILM Corporation)
Priority to JP2019550323A (patent JP6840263B2)
Publication of WO2019087969A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes, combined with photographic or television appliances
    • A61B 1/045: Control thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • The present invention relates to an endoscope system, a notification method, and a program, and more particularly to the display of a virtual endoscopic image.
  • An endoscopic image is an image captured using an imaging device such as a CCD (Charge Coupled Device).
  • An endoscopic image clearly expresses the color and texture of the inside of a tubular structure.
  • An endoscopic image is a two-dimensional image representing the inside of a tubular structure. For this reason, it is difficult to grasp which position in the tubular structure an endoscopic image represents.
  • A virtual endoscopic image may be used as a navigation image to guide the endoscope to a target position in the tubular structure.
  • CT is an abbreviation of Computed Tomography.
  • MRI is an abbreviation of Magnetic Resonance Imaging.
  • The image of the tubular structure is extracted from a three-dimensional inspection image, and the correspondence between the image of the tubular structure and the real endoscopic image, which is an actual endoscopic image acquired by imaging using the endoscope, is established.
  • A method has been proposed in which a virtual endoscopic image at the current position of the endoscope is generated from the three-dimensional inspection image of the tubular structure and displayed.
  • Patent Document 1 describes an endoscopic observation support device that detects a lesion such as a polyp from three-dimensional image data and reports that the observation position of the endoscope has reached the vicinity of the lesion.
  • Patent Document 2 describes a medical image processing apparatus that extracts volume data representing a region to be imaged, identifies the position and direction of the tip of the endoscope probe in the volume data, and generates and displays a virtual endoscopic image in an arbitrary field of view.
  • The medical image processing apparatus described in Patent Document 2 specifies the shape and position of a tumor candidate based on the volume data and displays a marker superimposed on the position of the tumor candidate. This enables the operator to recognize the presence or absence of a tumor candidate using the marker.
  • Patent Document 3 describes a medical image display apparatus that detects a blind spot area in a developed image of a luminal organ in a subject and notifies the operator of the presence or absence of the blind spot area.
  • When a blind spot area is present, the medical image display apparatus described in Patent Document 3 displays character information indicating that the blind spot area is present.
  • Patent Document 3 also describes another display mode in which, when a blind spot area is present, the position of the blind spot area is colored and displayed using a marker.
  • Patent Document 4 describes an endoscope system that matches the composition of a real endoscopic image and a virtual endoscopic image.
  • The endoscope system described in Patent Document 4 detects a characteristic shape from the virtual endoscopic image. Next, the pixel values of the area of the color endoscopic image corresponding to the characteristic shape in the virtual endoscopic image are changed. This realizes a display form distinguishable from other areas.
  • Patent Document 5 describes a medical image processing apparatus that acquires volume data from a CT apparatus and generates and displays a three-dimensional image from the acquired volume data.
  • The medical image processing apparatus described in Patent Document 5 receives an input operation for marking a characteristic part of the three-dimensional image displayed on the display unit, and the mark is displayed on the display unit.
  • Patent Document 5 also describes that a characteristic part can be set automatically using image analysis.
  • Patent Document 6 describes an endoscope apparatus including an endoscope for observing the inside of a subject and a monitor for displaying an endoscopic image acquired using the endoscope.
  • The endoscope apparatus described in Patent Document 6 acquires an image corresponding to a subject image captured using the endoscope, and executes processing for detecting a lesion site in the acquired image every time an image is acquired.
  • JP 2014-230612 A
  • JP 2011-139797 A
  • International Publication No. 2010/074058
  • JP 2006-61274 A
  • JP 2016-143194 A
  • JP 2008-301968 A
  • Although the invention described in Patent Document 1 reports that the observation position of the endoscope has reached the vicinity of a lesion, no measure is taken for a lesion located at a blind spot in the observation range of the endoscope. The invention described in Patent Document 1 may therefore overlook a lesion located at a blind spot in the observation range of the endoscope.
  • Although the invention described in Patent Document 2 displays a marker superimposed on the position of a tumor candidate, no measure is taken when the position of the tumor candidate is at a blind spot in the observation range of the endoscope. The invention described in Patent Document 2 may therefore overlook such a lesion.
  • The invention described in Patent Document 3 informs the operator of the presence or absence of a blind spot area in a developed image regardless of the presence or absence of a lesion, and does not notify the presence or absence of a lesion in the blind spot area. The invention described in Patent Document 3 may therefore overlook a lesion located at a blind spot in the observation range of the endoscope.
  • The invention described in Patent Document 4 changes the pixel values of a characteristic region in a virtual endoscopic image to enable distinction between the characteristic region and other regions, but it is not applied to the discovery, using an endoscopic image, of a lesion or the like located at a blind spot in the observation range of the endoscope.
  • Although the invention described in Patent Document 5 can automatically set a characteristic part in an endoscopic image, Patent Document 5 does not describe the case where the characteristic region is at a blind spot in the observation range of the endoscope. The invention described in Patent Document 5 may therefore overlook such a lesion.
  • Although the invention described in Patent Document 6 can detect a lesion site in units of the frame images constituting an endoscopic image, Patent Document 6 does not describe the case where the lesion site is at a blind spot in the observation range of the endoscope. The invention described in Patent Document 6 may therefore overlook a lesion located at a blind spot in the observation range of the endoscope.
  • Patent Document 1 to Patent Document 6 thus share the problem that a lesion or the like that is difficult to detect in endoscopy, such as a lesion located at a blind spot in the observation range of the endoscope, may be overlooked. This problem needs to be addressed.
  • The present invention has been made in view of such circumstances, and aims to provide an endoscope system, a notification method, and a program capable of suppressing, in an endoscopic examination using an endoscope, the oversight of a lesion or the like that is difficult to detect.
  • An endoscope system according to a first aspect comprises: a first image input unit that inputs a virtual endoscopic image generated from a three-dimensional image of a subject; a second image input unit that inputs a real endoscopic image obtained by imaging an observation target of the subject using an endoscope; an associating unit that associates the virtual endoscopic image with the real endoscopic image; a first feature region extraction unit that extracts, from the virtual endoscopic image, a first feature region that matches a prescribed first condition; a second feature region extraction unit that extracts, from the real endoscopic image, a second feature region that matches a second condition corresponding to the first condition; and a notification unit that performs notification when the first feature region is not associated with the second feature region.
  • According to the first aspect, the first feature region is extracted from the virtual endoscopic image.
  • The virtual endoscopic image is associated with the real endoscopic image.
  • Notification is performed when the first feature region is not associated with the second feature region.
  • Owing to the notification, the user can recognize that the first feature region is not associated with the second feature region. A minimal sketch of this decision rule follows below.
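  • The following is a minimal sketch, not the patent's implementation, of the decision rule described above; the names (FeatureRegion, regions_to_notify) and the distance-based position representation are illustrative assumptions.

```python
# Minimal sketch of the first aspect's notification rule: report every first
# feature region (from the virtual endoscopic image) that has no associated
# second feature region (from the real endoscopic image).
from dataclasses import dataclass

@dataclass
class FeatureRegion:
    region_id: int
    position_mm: float  # distance from a reference position along the lumen

def regions_to_notify(first_regions, associations):
    """associations: list of dicts {"first_id": int, "second_id": int or None}.
    Returns the first feature regions that remained unmatched."""
    matched = {a["first_id"] for a in associations if a["second_id"] is not None}
    return [r for r in first_regions if r.region_id not in matched]

# Example: region 2 has no counterpart in the real image, so it is reported.
regions = [FeatureRegion(1, 120.0), FeatureRegion(2, 310.5)]
links = [{"first_id": 1, "second_id": 7}, {"first_id": 2, "second_id": None}]
assert [r.region_id for r in regions_to_notify(regions, links)] == [2]
```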
  • The first image input unit may input a virtual endoscopic image generated in advance, or may acquire a three-dimensional inspection image, generate a virtual endoscopic image from the acquired three-dimensional inspection image, and input the generated virtual endoscopic image.
  • An example of the three-dimensional inspection image is one obtained by tomographic imaging of a subject using a CT apparatus.
  • An example of the virtual endoscopic image is a virtual colonoscopic image whose subject is the large intestine.
  • An aspect provided with a first condition setting unit that sets the first condition applied to the extraction of the first feature region is preferable.
  • An aspect provided with a second condition setting unit that sets the second condition applied to the extraction of the second feature region is preferable.
  • A second aspect is the endoscope system of the first aspect, in which, when the first feature region is associated with the second feature region, the first feature region is located in the observation range of the endoscope. In the notification that the first feature region is not associated with the second feature region, at least one of the notification method and the notification level may be changed in comparison with the notification that the first feature region is associated with the second feature region.
  • According to the second aspect, notification can distinguish the case where the first feature region is associated with the second feature region from the case where it is not.
  • A third aspect is the endoscope system of the second aspect, further comprising a display unit that displays the real endoscopic image. The notification unit causes the display unit to display a first notification image notifying that the first feature region is not associated with the second feature region and a second notification image notifying that the first feature region is associated with a second feature region located in the observation range of the endoscope, and the first notification image may be displayed enlarged relative to the second notification image.
  • According to the third aspect, the case where the first feature region is not associated with the second feature region is emphasized in contrast to the case where it is associated.
  • A fourth aspect is the endoscope system of the second aspect, further comprising a display unit that displays the real endoscopic image. The notification unit causes the display unit to display the first notification image and a second notification image notifying that the first feature region is associated with the second feature region, and the color of the first notification image may be changed from that of the second notification image.
  • According to the fourth aspect, the case where the first feature region is not associated with the second feature region is emphasized in contrast to the case where it is associated.
  • A fifth aspect is the endoscope system of the second aspect, further comprising a display unit that displays the real endoscopic image. The notification unit causes the display unit to display the first notification image and a second notification image indicating that the first feature region is associated with the second feature region; the first notification image may be displayed blinking, while the second notification image is displayed continuously lit.
  • According to the fifth aspect, the case where the first feature region is not associated with the second feature region is emphasized.
  • A sixth aspect is the endoscope system of the second aspect, further comprising a display unit that displays the real endoscopic image. The notification unit causes the display unit to display, in a blinking manner, the first notification image and the second notification image indicating that the first feature region is associated with the second feature region, and the blinking period of the first notification image may be shortened with respect to that of the second notification image.
  • According to the sixth aspect, the case where the first feature region is not associated with the second feature region is emphasized. A consolidated sketch of these emphasis variants follows below.
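  • The third to sixth aspects can be summarized as one styling decision. The following sketch is illustrative only; the concrete scale factor, colors, and blinking period are assumptions, not values from the patent.

```python
# Sketch of the emphasis variants in the third to sixth aspects: the first
# notification image (unassociated first feature region) is emphasized over
# the second notification image (associated region) by size, color, or blink.
def notification_style(associated: bool) -> dict:
    if associated:
        # second notification image: the region was found in the real image
        return {"scale": 1.0, "color": "green", "blink_period_s": None}
    # first notification image: a possibly overlooked region, so emphasize it
    return {"scale": 1.5, "color": "red", "blink_period_s": 0.5}
```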
  • The endoscope system according to a seventh aspect is the endoscope system according to any one of the third to sixth aspects, in which the display unit may display the first notification image and the second notification image, generated separately from the real endoscopic image, superimposed on the real endoscopic image.
  • According to the seventh aspect, it is possible to highlight the real endoscopic image without processing the real endoscopic image.
  • An eighth aspect is the endoscope system according to any one of the third to seventh aspects, in which the display unit may display the virtual endoscopic image and the position of the endoscope in the virtual endoscopic image.
  • According to the eighth aspect, the operator of the endoscope can recognize the position in the virtual endoscopic image corresponding to the observation position in the real endoscopic image.
  • A ninth aspect is the endoscope system according to any one of the third to seventh aspects, in which the display unit may display the virtual endoscopic image and information of the first feature region.
  • According to the ninth aspect, the operator of the endoscope can recognize the first feature region in the virtual endoscopic image.
  • In a tenth aspect, the display unit may display the first feature region in an enlarged manner.
  • According to the tenth aspect, the first feature region in the virtual endoscopic image can be viewed easily.
  • In an eleventh aspect, the display unit may display the first feature region in a blinking manner.
  • According to the eleventh aspect, the first feature region in the virtual endoscopic image can be viewed easily.
  • A twelfth aspect is provided with a notification sound output unit for outputting a notification sound, and the notification unit may output, using the notification sound output unit, a first notification sound indicating that the first feature region is not associated with the second feature region.
  • According to the twelfth aspect, the first notification sound is output when the first feature region is not associated with the second feature region.
  • A thirteenth aspect is the endoscope system according to the twelfth aspect, in which the notification unit may output, using the notification sound output unit, a second notification sound different from the first notification sound to indicate that the first feature region is associated with the second feature region.
  • According to the thirteenth aspect, notification can distinguish the case where the first feature region is not associated with the second feature region from the case where it is.
  • In a fourteenth aspect, the notification unit may increase the volume of the first notification sound with respect to the second notification sound.
  • According to the fourteenth aspect, the case where the first feature region is not associated with the second feature region is emphasized in contrast to the case where it is associated.
  • In a fifteenth aspect, the notification unit may change the notification level as the distance from the region of the real endoscopic image associated with the first feature region to the observation position of the real endoscopic image becomes shorter.
  • According to the fifteenth aspect, it can be recognized that the region of the real endoscopic image associated with the first feature region is approaching the observation range of the endoscope.
  • When changing the notification level, the notification level may be increased continuously or increased stepwise; a sketch follows below.
  • The region of the real endoscopic image associated with the first feature region in the fifteenth aspect includes at least one of the second feature region associated with the first feature region and a non-extraction region of the real endoscopic image associated with the first feature region.
  • A non-extraction region of the real endoscopic image represents a region that has not been extracted as the second feature region from the real endoscopic image.
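  • A hedged sketch of the fifteenth aspect's level control follows; the distance thresholds and the 50 mm ramp are illustrative assumptions.

```python
# Raise the notification level as the observation position approaches the
# region of the real endoscopic image associated with the first feature
# region, either continuously or stepwise.
def notification_level(distance_mm: float, stepwise: bool = False) -> float:
    """Map distance to a level in [0.0, 1.0]; shorter distance, higher level."""
    if stepwise:
        if distance_mm < 10.0:
            return 1.0
        if distance_mm < 30.0:
            return 0.5
        return 0.1
    # continuous variant: ramp up linearly once closer than 50 mm
    return max(0.0, min(1.0, 1.0 - distance_mm / 50.0))
```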
  • In a sixteenth aspect, the first feature region extraction unit may extract the first feature region from the virtual endoscopic image in advance of observing the real endoscopic image.
  • According to the sixteenth aspect, extraction of the first feature region from the virtual endoscopic image can be omitted at observation time. This reduces the processing load of image processing.
  • In a seventeenth aspect, the first feature region extraction unit may sequentially extract the first feature region from the virtual endoscopic image in accordance with the observation of the real endoscopic image.
  • According to the seventeenth aspect, a virtual endoscopic image in which the first feature region has not yet been extracted can be acquired. This reduces the processing load on the first image input unit.
  • An eighteenth aspect is the endoscope system according to any one of the first to seventeenth aspects, in which the first feature region extraction unit extracts a plurality of first feature regions using the same first condition, and the plurality of first feature regions may be managed collectively.
  • In a nineteenth aspect, the first feature region extraction unit may apply, as the first condition, information on a position in the virtual endoscopic image.
  • According to the nineteenth aspect, the first feature region extraction unit may extract the first feature region based on the information on the position in the virtual endoscopic image.
  • In a twentieth aspect, the first feature region extraction unit may apply the position of a blind spot in the observation range of the endoscope as the position information.
  • According to the twentieth aspect, the first feature region extraction unit can extract the first feature region at the position of a blind spot in the observation range of the endoscope. Therefore, with regard to the position of a blind spot in the observation range of the endoscope, overlooking a region to be extracted as the second feature region can be suppressed.
  • That is, the first feature region extraction unit can extract the position of a blind spot in the observation range of the endoscope as the first feature region.
  • In a twenty-first aspect, the first feature region extraction unit may apply the back side of a fold as the position information.
  • According to the twenty-first aspect, the first feature region extraction unit can extract the first feature region at a position on the back side of a fold. A sketch of this positional filtering follows below.
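  • As a sketch under stated assumptions, applying position information as the first condition can be viewed as filtering candidate regions with a blind-spot predicate; the predicate itself (ray casting from the viewpoint, fold geometry, and so on) is abstracted away here.

```python
# Keep only the candidate regions whose position satisfies the positional
# first condition, e.g. lying at a blind spot such as the back side of a fold.
def extract_first_feature_regions(candidates, is_blind_spot):
    """candidates: iterable of (region_id, position); is_blind_spot: predicate."""
    return [region_id for region_id, position in candidates if is_blind_spot(position)]

# Example with a toy predicate: positions beyond 40.0 count as blind spots.
print(extract_first_feature_regions([(1, 12.0), (2, 47.5)], lambda p: p > 40.0))  # [2]
```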
  • In a twenty-second aspect, the second feature region extraction unit may extract a lesion as the second feature region.
  • A twenty-third aspect is the endoscope system according to any one of the first to twenty-second aspects, in which the second feature region extraction unit may extract the second feature region from the real endoscopic image by applying an extraction rule generated using machine learning.
  • According to the twenty-third aspect, the accuracy of second feature region extraction in the real endoscopic image can be improved. This may improve the accuracy of endoscopy. A sketch of applying such a rule follows below.
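  • The patent only specifies that the extraction rule is generated using machine learning; the following PyTorch sketch (an assumed environment, with a placeholder architecture and untrained weights) shows the general shape of applying such a rule to a patch of a real endoscopic frame.

```python
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    """Placeholder CNN: repeated convolution and pooling, then a linear head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # two classes: lesion / non-lesion

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def is_lesion(model: nn.Module, patch: torch.Tensor) -> bool:
    """Classify one image patch of shape (1, 3, H, W)."""
    with torch.no_grad():
        return model(patch).argmax(dim=1).item() == 1

# Example call on a random patch (weights are untrained, so the output is
# meaningless; this only demonstrates the inference path).
print(is_lesion(LesionClassifier(), torch.rand(1, 3, 64, 64)))
```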
  • A notification method according to a twenty-fourth aspect comprises: a first image input step of inputting a virtual endoscopic image generated from a three-dimensional image of a subject; a second image input step of inputting a real endoscopic image acquired by imaging an observation target of the subject using an endoscope; an associating step of associating the virtual endoscopic image with the real endoscopic image; a first feature region extraction step of extracting, from the virtual endoscopic image, a first feature region that matches a prescribed first condition; a second feature region extraction step of extracting, from the real endoscopic image, a second feature region that matches a second condition corresponding to the first condition; and a notification step of performing notification when the first feature region is not associated with the second feature region.
  • In the twenty-fourth aspect, the same matters as those specified in the second to twenty-third aspects can be combined as appropriate.
  • In that case, a component carrying a process or function specified in the endoscope system can be grasped as a component of the notification method carrying the corresponding process or function.
  • A program according to a twenty-fifth aspect causes a computer to implement: a first image input function of inputting a virtual endoscopic image generated from a three-dimensional image of a subject; a second image input function of inputting a real endoscopic image obtained by imaging an observation target of the subject using an endoscope; an associating function of associating the virtual endoscopic image with the real endoscopic image; a first feature region extraction function of extracting, from the virtual endoscopic image, a first feature region that matches a prescribed first condition; a second feature region extraction function of extracting, from the real endoscopic image, a second feature region that matches a second condition corresponding to the first condition; and a notification function of performing notification when the first feature region is not associated with the second feature region.
  • In the twenty-fifth aspect, the same matters as those specified in the second to twenty-third aspects can be combined as appropriate.
  • In that case, a component carrying a process or function specified in the endoscope system can be grasped as a component of the program carrying the corresponding process or function.
  • Another aspect is a system having at least one processor and at least one memory, which may be configured as a system that implements: a first image input function of inputting a virtual endoscopic image generated from a three-dimensional image of a subject; a second image input function of inputting a real endoscopic image obtained by imaging an observation target of the subject using an endoscope; an associating function of associating the virtual endoscopic image with the real endoscopic image; a first feature region extraction function of extracting, from the virtual endoscopic image, a first feature region that matches a prescribed first condition; a second feature region extraction function of extracting, from the real endoscopic image, a second feature region that matches a second condition corresponding to the first condition; and a notification function of performing notification when the first feature region is not associated with the second feature region.
  • According to the present invention, the first feature region is extracted from the virtual endoscopic image.
  • The virtual endoscopic image is associated with the real endoscopic image.
  • Notification is performed when the first feature region is not associated with the second feature region.
  • Owing to the notification, the user can recognize that the first feature region is not associated with the second feature region. This makes it possible, in endoscopy using an endoscope, to suppress the oversight of a lesion or the like that is difficult to detect and that should be extracted as the second feature region.
  • FIG. 1 is a schematic view showing an entire configuration of an endoscope system.
  • FIG. 2 is a functional block diagram showing functions of the medical image processing apparatus.
  • FIG. 3 is a functional block diagram showing the function of the medical image analysis processing unit.
  • FIG. 4 is a functional block diagram showing the function of the image storage unit.
  • FIG. 5 is a schematic view of a CTC image.
  • FIG. 6 is a schematic view of an endoscopic image.
  • FIG. 7 is a schematic view showing a blind spot in the observation range of the endoscope.
  • FIG. 8 is an explanatory view of first feature area extraction.
  • FIG. 9 is an explanatory diagram of second feature region extraction.
  • FIG. 10 is a schematic view showing an example of association of lesions.
  • FIG. 11 is a schematic view showing an example of fold association.
  • FIG. 12 is a schematic view showing an example of the arrangement of the folds using the fold numbers.
  • FIG. 13 is a schematic view of an endoscopic image and a virtual endoscopic image in the case of no notification.
  • FIG. 14 is a schematic view of an endoscopic image and a virtual endoscopic image in the case of the first notification.
  • FIG. 15 is a schematic view of an endoscopic image and a virtual endoscopic image in the case of the second notification.
  • FIG. 16 is a flowchart showing the procedure of the notification method.
  • FIG. 17 is an explanatory diagram of notification according to a first modification.
  • FIG. 18 is an explanatory diagram of notification according to a second modification.
  • FIG. 19 is an explanatory diagram of notification according to a third modification.
  • FIG. 20 is an explanatory diagram of another display example of the first feature area.
  • FIG. 21 is a functional block diagram showing functions of a medical image processing apparatus for realizing notification according to another embodiment.
  • FIG. 1 is a schematic view showing an entire configuration of an endoscope system.
  • An endoscope system 9 shown in FIG. 1 includes an endoscope 10, a light source device 11, a processor 12, a display device 13, a medical image processing device 14, an operation device 15, and a monitor device 16.
  • the endoscope system 9 is communicably connected to the image storage device 18 via the network 17.
  • the endoscope 10 is an electronic endoscope.
  • the endoscope 10 is a flexible endoscope.
  • the endoscope 10 includes an insertion unit 20, an operation unit 21, and a universal cord 22.
  • The insertion unit 20 has a distal end and a proximal end.
  • the insertion unit 20 is inserted into the subject.
  • the operator holds the operation unit 21 to perform various operations.
  • the operation unit 21 is continuously provided on the proximal end side of the insertion unit 20.
  • the insertion part 20 is formed in a long and narrow shape as a whole.
  • the insertion portion 20 includes a flexible portion 25, a bending portion 26, and a tip portion 27.
  • the insertion portion 20 is configured by connecting the flexible portion 25, the bending portion 26, and the distal end portion 27 in series.
  • The flexible portion 25 is flexible along its length from the proximal side toward the distal side of the insertion portion 20.
  • the bending portion 26 has a structure that can be bent when the operation portion 21 is operated.
  • the distal end portion 27 incorporates a photographing optical system and an imaging device 28 which are not shown.
  • the imaging device 28 is a CMOS imaging device or a CCD imaging device.
  • CMOS is an abbreviation of Complementary Metal Oxide Semiconductor.
  • CCD is an abbreviation of Charge Coupled Device.
  • An observation window (not shown) is disposed on the distal end surface 27 a of the distal end portion 27.
  • the observation window is an opening formed in the distal end surface 27 a of the distal end portion 27.
  • a photographing optical system (not shown) is disposed behind the observation window. Image light of a region to be observed is incident on the imaging surface of the imaging element 28 through an observation window, a photographing optical system, and the like.
  • the imaging device 28 images the image light of the observed region incident on the imaging surface of the imaging device 28 and outputs an imaging signal.
  • imaging as used herein includes the meaning of converting image light into an electrical signal.
  • the operation unit 21 includes various operation members.
  • the various operating members are operated by the operator.
  • the operation unit 21 includes two types of bending operation knobs 29.
  • the bending operation knob 29 is used when bending the bending portion 26.
  • the operation unit 21 includes an air / water feed button 30 and a suction button 31.
  • the air / water supply button 30 is used at the time of air / water operation.
  • the suction button 31 is used at the time of suction operation.
  • the operation unit 21 includes a still image photographing instruction unit 32 and a treatment instrument introduction port 33.
  • the still image photographing instruction unit 32 is used when instructing the photographing of the still image 39 of the region to be observed.
  • the treatment instrument introduction port 33 is an opening for inserting the treatment instrument into the inside of the treatment instrument insertion path passing through the inside of the insertion portion 20. The treatment tool insertion path and the treatment tool are not shown.
  • the universal cord 22 is a connection cord that connects the endoscope 10 to the light source device 11.
  • the universal cord 22 includes the light guide 35 passing through the inside of the insertion portion 20, the signal cable 36, and a fluid tube (not shown).
  • an end of the universal cord 22 includes a connector 37 a connected to the light source device 11 and a connector 37 b branched from the connector 37 a and connected to the processor 12.
  • When the connector 37a is connected to the light source device 11, the light guide 35 and a fluid tube (not shown) are inserted into the light source device 11. Thereby, necessary illumination light, water, and gas are supplied from the light source device 11 to the endoscope 10 through the light guide 35 and the fluid tube (not shown).
  • illumination light is emitted from the illumination window (not shown) of the distal end surface 27 a of the distal end portion 27 toward the region to be observed.
  • gas or water is jetted from an air / water supply nozzle (not shown) of the distal end surface 27a of the distal end portion 27 toward an observation window (not shown) of the distal end surface 27a.
  • the signal cable 36 and the processor 12 are electrically connected.
  • an imaging signal of the region to be observed is output from the imaging element 28 of the endoscope 10 to the processor 12 through the signal cable 36, and a control signal is output from the processor 12 to the endoscope 10.
  • Although a flexible endoscope has been described as an example of the endoscope 10, various types of electronic endoscopes capable of capturing moving images of a region to be observed, such as a rigid endoscope, may be used as the endoscope 10.
  • the light source device 11 supplies illumination light to the light guide 35 of the endoscope 10 via the connector 37a.
  • the illumination light may be white light or light of a specific wavelength band.
  • the illumination light may combine white light and light of a specific wavelength band.
  • the light source device 11 is configured to be able to appropriately select light of a wavelength band according to the purpose of observation as illumination light.
  • the white light may be light of a white wavelength band or light of a plurality of wavelength bands.
  • the specific wavelength band is a band narrower than the white wavelength band.
  • As the light of the specific wavelength band, light of one type of wavelength band may be applied, or light of a plurality of wavelength bands may be applied.
  • Light of the specific wavelength band may be called special light.
  • the processor 12 controls the operation of the endoscope 10 via the connector 37 b and the signal cable 36.
  • the processor 12 also acquires an imaging signal from the imaging element 28 of the endoscope 10 via the connector 37 b and the signal cable 36.
  • The processor 12 acquires the imaging signal output from the endoscope 10 at a specified frame rate.
  • The processor 12 generates a moving image 38 of the region to be observed based on the imaging signal acquired from the endoscope 10. Furthermore, when the still image photographing instruction unit 32 of the operation unit 21 of the endoscope 10 is operated, the processor 12 generates a still image 39 of the observed region based on the imaging signal acquired from the imaging device 28, in parallel with the generation of the moving image 38. The still image 39 may be generated at a resolution higher than that of the moving image 38.
  • When generating the moving image 38 and the still image 39, the processor 12 performs image quality correction applying digital signal processing such as white balance adjustment and shading correction.
  • The processor 12 may add incidental information defined by the DICOM standard to the moving image 38 and the still image 39. DICOM is an abbreviation of Digital Imaging and Communications in Medicine.
  • The moving image 38 and the still image 39 are in-vivo images of the inside of the subject, that is, the inside of a living body. When the moving image 38 and the still image 39 are obtained by imaging using light of a specific wavelength band, both are special light images. The processor 12 outputs the generated moving image 38 and still image 39 to each of the display device 13 and the medical image processing device 14. The processor 12 may output the moving image 38 and the still image 39 to the image storage device 18 via the network 17 in accordance with a communication protocol conforming to the DICOM standard.
  • the display device 13 is connected to the processor 12.
  • the display device 13 displays the moving image 38 and the still image 39 input from the processor 12.
  • A user such as a doctor operates the insertion unit 20 to advance and retract it while checking the moving image 38 displayed on the display device 13, and, when a lesion or the like is detected in the observed region, can operate the still image photographing instruction unit 32 to photograph a still image of the region to be observed.
  • A computer is used as the medical image processing apparatus 14.
  • As the operation device 15, a keyboard, a mouse, or the like connectable to the computer is used.
  • The connection between the operation device 15 and the computer may be either wired or wireless.
  • As the monitor device 16, various monitors connectable to the computer are used.
  • As the medical image processing apparatus 14, a diagnosis support apparatus such as a workstation or a server apparatus may be used.
  • In that case, the operation device 15 and the monitor device 16 are provided for each of a plurality of terminals connected to the workstation or the like.
  • As the medical image processing apparatus 14, a medical care operation support apparatus that supports the creation of a medical report or the like may also be used.
  • the medical image processing apparatus 14 acquires a moving image 38 and stores the moving image 38.
  • the medical image processing apparatus 14 acquires a still image 39 and stores the still image 39.
  • the medical image processing apparatus 14 performs reproduction control of the moving image 38 and reproduction control of the still image 39.
  • the operating device 15 is used to input an operation instruction to the medical image processing apparatus 14.
  • the monitor device 16 displays the moving image 38 and the still image 39 under the control of the medical image processing apparatus 14.
  • the monitor device 16 functions as a display unit of various information in the medical image processing apparatus 14.
  • the image storage device 18 connected to the medical image processing device 14 via the network 17 stores the CTC image 19.
  • the CTC image 19 is generated using a CTC image generator (not shown).
  • CTC is an abbreviation of CT colonography, which denotes a three-dimensional CT examination of the large intestine.
  • a CTC image generator (not shown) generates a CTC image 19 from the three-dimensional inspection image.
  • the three-dimensional inspection image is generated from an imaging signal obtained by imaging a region to be inspected using a three-dimensional imaging device.
  • Examples of the three-dimensional imaging apparatus include a CT apparatus, an MRI apparatus, a PET (Positron Emission Tomography) apparatus, and an ultrasonic diagnostic apparatus.
  • the CTC image 19 is generated from a three-dimensional inspection image obtained by imaging the large intestine.
  • the endoscope system 9 may be communicably connected to the server device via the network 17.
  • A computer that stores and manages various data can be applied as the server apparatus.
  • The information stored in the image storage device 18 shown in FIG. 1 may be managed using the server apparatus.
  • The DICOM format, a protocol conforming to the DICOM standard, or the like can be applied to the storage format of the image data and to the communication between the respective devices via the network 17.
  • FIG. 2 is a functional block diagram showing functions of the medical image processing apparatus.
  • the medical image processing apparatus 14 shown in FIG. 2 includes a computer (not shown).
  • the computer functions as an image acquisition unit 41, an information acquisition unit 42, a medical image analysis processing unit 43, and a display control unit 44 based on the execution of a program.
  • the medical image processing apparatus 14 includes a storage unit 47 that stores information used for various controls of the medical image processing apparatus 14.
  • the image acquisition unit 41 includes a CTC image acquisition unit 41a and an endoscope image acquisition unit 41b.
  • the CTC image acquisition unit 41a acquires a CTC image 19 via an image input / output interface (not shown).
  • the endoscopic image acquisition unit 41b acquires an endoscopic image 37 via an image input / output interface (not shown).
  • the connection form of the image input / output interface may be wired or wireless.
  • the CTC image acquisition unit 41a and the endoscope image acquisition unit 41b will be described in detail below.
  • the CTC image acquisition unit 41a acquires the CTC image 19 stored in the image storage device 18 shown in FIG.
  • the CTC image 19 acquired using the CTC image acquisition unit 41 a shown in FIG. 2 is stored in the image storage unit 48.
  • the CTC image acquisition unit 41a can apply the same configuration as the endoscopic image acquisition unit 41b described later.
  • Reference numeral 19b represents a viewpoint image.
  • The viewpoint image 19b is an image of the field of view at a viewpoint set in the CTC image 19. The viewpoints are shown in FIG. 5. Details of the viewpoint image and the viewpoint will be described later.
  • the term image in the present embodiment includes the concept of data representing an image or the concept of a signal.
  • the CTC image 19 is an example of a virtual endoscopic image.
  • the CTC image 19 corresponds to a virtual colonoscopy image.
  • the CTC image acquisition unit 41a is an example of a first image input unit that inputs a virtual endoscopic image.
  • the endoscopic image acquisition unit 41 b acquires an endoscopic image 37 generated using the processor 12 illustrated in FIG. 1.
  • the endoscopic image 37 includes the moving image 38 and the still image 39 shown in FIG.
  • the endoscopic image 37 generated using the processor 12 shown in FIG. 1 is acquired, but the endoscopic image 37 stored in an external storage device may be acquired.
  • the endoscopic image acquisition unit 41b illustrated in FIG. 2 may acquire the endoscopic image 37 via various information storage media such as a memory card.
  • the endoscopic image acquiring unit 41b acquires the moving image 38 and the still image 39 from the processor 12 illustrated in FIG.
  • the medical image processing apparatus 14 stores the moving image 38 and the still image 39 acquired by using the endoscopic image acquisition unit 41 b in the image storage unit 48.
  • Reference numeral 38a represents a plurality of frame images constituting the moving image 38.
  • The medical image processing apparatus 14 does not have to store the whole moving image 38 of the endoscopic image 37 input from the processor 12 or the like in the image storage unit 48; when the still image photographing instruction unit 32 shown in FIG. 1 is operated, the moving image 38 for one minute before and after the operation may be stored in the image storage unit 48 shown in FIG. 2 (a sketch follows below).
  • One minute before and after represents the period from one minute before photographing to one minute after photographing.
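  • A small illustrative sketch of this selective storage follows; the time-stamped frame representation is an assumption.

```python
# Keep only the moving-image frames recorded within one minute before and
# after any still-image capture, instead of storing the whole moving image.
def frames_to_keep(capture_times_s, frame_times_s, window_s=60.0):
    """Return indices of frames within +/- window_s of any capture time."""
    return [i for i, t in enumerate(frame_times_s)
            if any(abs(t - c) <= window_s for c in capture_times_s)]

# Example: a capture at t=120 s keeps frames between 60 s and 180 s.
print(frames_to_keep([120.0], [30.0, 90.0, 150.0, 200.0]))  # [1, 2]
```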
  • the endoscope image acquisition unit 41 b is an example of a second image input unit that inputs an actual endoscope image.
  • the endoscopic image 37 corresponds to a real endoscopic image.
  • The information acquisition unit 42 acquires information input from the outside via the operation device 15 or the like. For example, when a determination result or an extraction result produced by the user is input using the operation device 15, the information acquisition unit 42 acquires the user's determination information, extraction information, and the like.
  • the medical image analysis processing unit 43 analyzes the CTC image 19. Further, the medical image analysis processing unit 43 analyzes the endoscopic image 37. Details of the analysis of the CTC image 19 and the endoscopic image 37 using the medical image analysis processing unit 43 will be described later.
  • The medical image analysis processing unit 43 performs image analysis processing using deep learning based on a deep learning algorithm 65.
  • The deep learning algorithm 65 is an algorithm including a known convolutional neural network method, a fully connected layer, and an output layer.
  • A convolutional neural network repeats convolution layers and pooling layers. A convolutional neural network may be referred to as a CNN.
  • Since image analysis processing using deep learning is a well-known technique, a specific description is omitted.
  • the display control unit 44 controls image display of the monitor device 16.
  • the display control unit 44 functions as a reproduction control unit 44a and an information display control unit 44b.
  • the reproduction control unit 44a performs reproduction control of the CTC image 19 acquired using the CTC image acquisition unit 41a and the endoscope image 37 acquired using the endoscopic image acquisition unit 41b.
  • the reproduction control unit 44a controls the monitor device 16 by executing a display control program.
  • the display control program is included in the program stored in the program storage unit 49.
  • The reproduction control unit 44a may switch between the two displays described above.
  • As a display example of the CTC image 19 and the endoscopic image 37, FIGS. 13 to 15 show an example in which the CTC image 19 and the endoscopic image 37 are displayed side by side on one screen.
  • the information display control unit 44 b performs display control of incidental information of the CTC image 19 and display control of incidental information of the endoscope image 37.
  • An example of the incidental information of the CTC image 19 is information representing the first feature region.
  • An example of the incidental information of the endoscopic image 37 is information representing the second feature region.
  • the information display control unit 44 b performs display control of information necessary for various processes in the medical image analysis processing unit 43.
  • Examples of the various processes in the medical image analysis processing unit 43 include association processing between the CTC image 19 and the endoscopic image 37, feature region extraction processing of the CTC image 19, and feature region extraction processing of the endoscopic image 37.
  • the storage unit 47 includes an image storage unit 48.
  • the image storage unit 48 stores the CTC image 19 acquired by the medical image processing apparatus 14 and the endoscopic image 37.
  • Although the medical image processing apparatus 14 has been illustrated as an aspect including the storage unit 47, the images may instead be stored in a storage device outside the apparatus.
  • An example of such a storage device is the image storage device 18 communicably connected via the network 17 shown in FIG. 1.
  • the storage unit 47 includes a program storage unit 49.
  • The programs stored in the program storage unit 49 include an application program for causing the medical image processing apparatus 14 to execute reproduction control of the moving image 38.
  • The programs stored in the program storage unit 49 include a program for causing the medical image processing apparatus 14 to execute the processing of the medical image analysis processing unit 43.
  • the medical image processing apparatus 14 may be configured using a plurality of computers or the like.
  • a plurality of computers and the like may be communicably connected via a network.
  • The plurality of computers referred to here may be separate pieces of hardware or integrated hardware, and may also be functionally separated.
  • The various processors include: a CPU (Central Processing Unit), which is a general-purpose processor that executes software and functions as various control units; a programmable logic device (PLD) such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture; and a dedicated electric circuit such as an ASIC (Application Specific Integrated Circuit), which is a processor having a circuit configuration designed exclusively for executing specific processing.
  • software here is synonymous with a program.
  • One processing unit may be configured by one of these various processors, or by two or more processors of the same type or different types. Examples of two or more processors include a plurality of FPGAs or a combination of a CPU and an FPGA. A plurality of control units may also be configured by one processor. As an example of configuring a plurality of control units with one processor, there is a form in which one processor is configured by a combination of one or more CPUs and software, as represented by computers such as client devices and server devices, and this processor functions as a plurality of control units.
  • IC is an abbreviation of Integrated Circuit.
  • FIG. 3 is a functional block diagram showing the function of the medical image analysis processing unit.
  • An endoscope 10 in the following description is illustrated in FIG.
  • the CTC image 19, the viewpoint image 19b, the endoscope image 37, and the frame image 38a are illustrated in FIG.
  • the medical image analysis processing unit 43 shown in FIG. 3 includes a first feature region extraction unit 50, a first condition setting unit 52, a second feature region extraction unit 54, a second condition setting unit 56, and an association unit. 58, a notification unit 59, and a notification image generation unit 60.
  • the first feature region extraction unit 50 extracts, from the CTC image 19, a first feature region that is a feature region that meets the defined first condition.
  • Examples of the first feature region of the CTC image 19 include a lesion, a fold, a transition point between colon segments, and a blood vessel.
  • the blood vessel includes a running pattern of the blood vessel.
  • the function of the first feature area extraction unit 50 corresponds to a first feature area extraction function.
  • the first condition setting unit 52 sets a first condition.
  • the first condition is an extraction condition applied to the extraction process using the first feature region extraction unit 50.
  • The first condition setting unit 52 can set information input using the operation device 15 shown in FIG. 2 as the first condition.
  • The examples of the first feature region given above can also be grasped as examples of the first condition.
  • The second feature region extraction unit 54 extracts, from the endoscopic image 37 shown in FIG. 2, a second feature region that is a feature region matching the prescribed second condition. As with the first feature region of the CTC image 19, examples of the second feature region of the endoscopic image 37 include a lesion, a fold, a transition point between colon segments, and a blood vessel.
  • the second feature area extraction unit 54 may automatically extract a second feature area that matches the second condition from the endoscopic image 37.
  • the second feature region extraction unit 54 may obtain an extraction result in which the user manually extracts a second feature region that matches the second condition from the endoscopic image 37.
  • The user may input the manually extracted result using the information acquisition unit 42 shown in FIG. 2.
  • the function of the second feature area extraction unit 54 corresponds to a second feature area extraction function.
  • the second condition setting unit 56 sets a second condition corresponding to the first condition as the extraction condition of the second feature area of the endoscope image 37.
  • The second condition corresponding to the first condition includes a second condition identical to the first condition. For example, when a lesion is set as the first condition, a lesion may be set as the second condition.
  • As the first condition and the second condition, specific lesions such as polyps or inflammation may be set instead of the generic concept of a lesion.
  • the first condition and the second condition may be a combination of a plurality of conditions.
  • the associating unit 58 associates the CTC image 19 and the endoscopic image 37 shown in FIG.
  • An example is the correspondence between the first feature region of the CTC image 19 and the second feature region of the endoscopic image 37.
  • For example, the first feature region of the CTC image 19 corresponding to a detected lesion is associated with the second feature region of the endoscopic image 37.
  • Position information can be used for the correspondence between the CTC image 19 and the endoscopic image 37. For example, the reference position of the CTC image 19 and the reference position of the endoscopic image 37 are matched, and the CTC image 19 and the endoscopic image 37 can be associated with each other by comparing distances from the respective reference positions. A sketch of this distance-based matching appears at the end of this subsection.
  • The coordinate values of the CTC image 19 may also be associated with the numbers of the frame images 38a of the endoscopic image 37.
  • the correspondence between the CTC image 19 and the endoscopic image 37 includes the correspondence between the first feature area of the CTC image 19 and the non-extraction area of the endoscopic image 37. A specific example of the correspondence between the first feature area of the CTC image 19 and the non-extraction area of the endoscopic image 37 will be described later.
  • the function of the association unit 58 corresponds to the association function.
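  • The distance-based correspondence mentioned above can be sketched as a nearest-neighbor match along the lumen; the tolerance value and the (region_id, distance) representation are illustrative assumptions.

```python
# Associate first feature regions (virtual image) with second feature regions
# (real image) by comparing distances measured from the shared reference position.
def associate_by_distance(first_regions, second_regions, tolerance_mm=5.0):
    """Both inputs are iterables of (region_id, distance_mm).
    Returns a dict mapping each first id to the nearest second id, or None."""
    result = {}
    for f_id, f_pos in first_regions:
        best_id, best_gap = None, tolerance_mm
        for s_id, s_pos in second_regions:
            gap = abs(f_pos - s_pos)
            if gap <= best_gap:
                best_id, best_gap = s_id, gap
        result[f_id] = best_id
    return result

# Example: the region at 310.5 mm finds no counterpart within 5 mm.
print(associate_by_distance([(1, 120.0), (2, 310.5)], [(7, 118.2)]))
# {1: 7, 2: None}
```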
  • If the notification unit 59 determines that, among the regions of the endoscopic image 37 associated with the first feature region 80 extracted from the CTC image 19, there is a non-extraction region that has not been extracted from the endoscopic image 37, the notification unit 59 notifies to that effect.
  • An example of a non-extraction region is the position of a blind spot in the observation range of the endoscope 10.
  • the notification unit 59 displays notification information on the monitor device 16 via the display control unit 44. As an example of the notification information, a notification image to be described later can be mentioned.
  • the function of the notification unit 59 corresponds to a notification function.
  • The notification image generation unit 60 generates a notification image for notifying the presence of the second feature region of the endoscopic image 37.
  • Examples of the notification image include a symbol attached to an arbitrary position of the second feature region and a closed curve representing the edge of the second feature region.
  • The notification image generation unit 60 generates a notification image that can be displayed superimposed on the endoscopic image 37 without processing the endoscopic image 37; a sketch of such an overlay follows below.
  • The first notification image 140 is illustrated in FIG. 14 as an example of the notification image. A second notification image 142 is illustrated in FIG. 15. Details of the notification images will be described later.
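  • A minimal overlay sketch, assuming OpenCV, follows: the notification image is drawn on a separate layer and blended in, so the endoscopic frame itself is never modified (cf. the seventh aspect). Shapes and colors are illustrative.

```python
import cv2
import numpy as np

def overlay_notification(frame, center, radius=20):
    """Draw a circular notification mark on a separate layer and superimpose
    it on a copy of the frame, leaving the original pixels untouched."""
    layer = np.zeros_like(frame)
    cv2.circle(layer, center, radius, color=(0, 0, 255), thickness=2)
    return cv2.addWeighted(frame, 1.0, layer, 1.0, 0.0)

# Example on a dummy 480x640 BGR frame.
marked = overlay_notification(np.zeros((480, 640, 3), np.uint8), (320, 240))
```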
  • FIG. 4 is a functional block diagram showing the function of the image storage unit.
  • the image storage device 18 includes a first feature area storage unit 64, a second feature area storage unit 66, and an association result storage unit 68.
  • the first feature area storage unit 64 stores the information of the first feature area extracted from the CTC image 19 using the first feature area extraction unit 50 shown in FIG.
  • An example of the information of the first feature region is information representing the position of the first feature region in the CTC image 19.
  • The position of the first feature region in the CTC image 19 can be identified using coordinate values in the coordinate system set in the CTC image 19, the viewpoint set in the CTC image 19, or the like.
  • the second feature area storage unit 66 stores the information of the second feature area extracted from the endoscopic image 37 using the second feature area extraction unit 54 shown in FIG. 3.
  • An example of the information of the second feature region is information representing the position of the second feature region in the endoscopic image 37.
  • the position of the second feature region in the endoscopic image 37 can be identified using the distance from the reference position of the object to be observed using detection information of a sensor provided in the endoscope 10.
  • the association result storage unit 68 stores the result of association between the CTC image 19 and the endoscopic image 37 executed using the association unit 58 shown in FIG. For example, the result of associating the information of the position of the first feature area of the CTC image 19 with the information of the position of the second feature area of the endoscopic image 37 can be stored.
  • FIG. 5 is a schematic view of a CTC image.
  • The whole image 19a shown in FIG. 5 is one form of the CTC image 19, representing the whole of the large intestine, which is the observed region.
  • The observed region has the same meaning as the subject and the observation target of the subject.
  • The entire image 19a is an image generated by placing one or more viewpoints P on a set path 19c and assuming that the inside of the lumen is viewed from the viewpoint P while the viewpoint P is moved sequentially from a start position PS to a goal position PG.
  • The path 19c may be generated by thinning the entire image 19a.
  • A known thinning method can be applied to the thinning process; a sketch follows below. Although a plurality of viewpoints P are illustrated in FIG. 5, the arrangement and the number of viewpoints P can be determined appropriately according to the examination conditions and the like.
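  • The thinning step can be sketched with scikit-image (an assumed library choice; the patent only says a known thinning method is applied):

```python
import numpy as np
from skimage.morphology import skeletonize

def centerline_voxels(colon_mask: np.ndarray) -> np.ndarray:
    """Thin a 3-D binary colon segmentation to a one-voxel-wide skeleton,
    a path like 19c, and return the skeleton voxel coordinates (N, 3)."""
    skeleton = skeletonize(colon_mask)  # recent scikit-image handles 3-D input
    return np.argwhere(skeleton)
```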
• A viewpoint image representing the field of view at a designated viewpoint P can be displayed. Note that the viewpoint images at the viewpoints P illustrated in FIG. 8 are denoted by the reference numerals 19b1 and 19b2.
  • a viewpoint image in which the imaging direction of the endoscope 10 is reflected can be generated.
  • a viewpoint image reflecting the imaging direction of the endoscope 10 may be generated for each of a plurality of imaging directions.
  • the entire image 19a shown in FIG. 5 and the viewpoint image not shown in FIG. 5 are included in the concept of the CTC image 19 shown in FIG.
  • the CTC image 19 whose whole image 19a is shown in FIG. 5 has three-dimensional coordinates not shown.
  • the three-dimensional coordinates set in the CTC image 19 can be three-dimensional coordinates having an arbitrary reference position of the CTC image 19 as an origin.
• Arbitrary three-dimensional coordinates, such as rectangular coordinates, polar coordinates, or cylindrical coordinates, can be applied as the three-dimensional coordinates. Note that illustration of the three-dimensional coordinates is omitted.
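• The placement of viewpoints P along the path 19c can be sketched as follows. This hypothetical Python example spaces viewpoints evenly by arc length along a centerline given in the three-dimensional coordinates of the CTC image 19; the function name and the even-spacing policy are assumptions, since the text only states that the arrangement and number of viewpoints are determined according to the inspection condition.

```python
import numpy as np

def place_viewpoints(path_points, n_viewpoints):
    """Place viewpoints P evenly (by arc length) along the path 19c.

    path_points is an (N, 3) array of three-dimensional coordinates from
    the start position P_S to the goal position P_G (N >= 2). A viewpoint
    image 19b would be rendered from each returned point, looking along
    the path in the imaging direction of the endoscope 10.
    """
    path_points = np.asarray(path_points, dtype=float)
    seg_lengths = np.linalg.norm(np.diff(path_points, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_lengths)])   # cumulative arc length
    targets = np.linspace(0.0, arc[-1], n_viewpoints)       # even spacing
    viewpoints = np.empty((n_viewpoints, 3))
    for axis in range(3):                                   # interpolate each axis
        viewpoints[:, axis] = np.interp(targets, arc, path_points[:, axis])
    return viewpoints
```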
• In virtual colonoscopy, the large intestine is imaged using a CT apparatus to acquire a CT image of the large intestine, and a lesion or the like is detected using the CTC image 19 generated by performing image processing on the CT image of the large intestine.
• In virtual colonoscopy, in conjunction with the movement of the endoscope 10, a pointer 19d representing the endoscope 10 is moved on the path 19c from the start position P_S to the goal position P_G.
  • the arrows shown in FIG. 5 indicate the moving direction of the pointer 19d.
• FIG. 5 shows an example in which the cecum is applied as the start position P_S and the anus is applied as the goal position P_G. That is, FIG. 5 schematically shows virtual colonoscopy in which the endoscope 10 is inserted to the start position P_S and then moved toward the goal position P_G while being withdrawn.
  • the position of the pointer 19 d is derived from the movement condition of the endoscope 10.
  • Examples of movement conditions of the endoscope 10 include the movement speed of the endoscope 10 and a movement vector representing the movement direction of the endoscope 10.
• The endoscope 10 can grasp its position inside the observation site using a sensor (not shown). In addition, the endoscope 10 can derive its movement speed and a movement vector representing its movement direction using the sensor, and can furthermore derive its orientation using the sensor.
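• The derivation of the pointer position from the movement conditions can be illustrated with a trivial sketch: integrating the movement vector reported by the sensor over time. This is an assumed dead-reckoning formulation, not a disclosed algorithm.

```python
import numpy as np

def update_pointer_position(position, movement_vector, dt):
    """Advance the pointer 19d using the endoscope's movement conditions.

    movement_vector combines the movement speed and movement direction
    reported by the endoscope's sensor; dt is the elapsed time between
    updates. The returned point would then be snapped onto the path 19c.
    """
    return np.asarray(position, dtype=float) + np.asarray(movement_vector, dtype=float) * dt

# Example: the endoscope advancing at 5 mm/s along one axis for 0.2 s.
new_position = update_pointer_position([0.0, 0.0, 0.0], [5.0, 0.0, 0.0], 0.2)
```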
• In endoscopy, lesions such as polyps are detected from the endoscopic image 37. That is, in the endoscopy, the user looks at the moving image 38 generated in real time using the endoscope 10 and specifies the position, shape, and the like of a lesion. The endoscopic examination may use a reproduced image of the endoscopic image 37.
  • FIG. 6 is a schematic view of an endoscopic image.
  • an optional frame image 38a constituting the moving image 38 is shown in FIG.
  • the frame image 38a shown in FIG. 6 is a two-dimensional image.
  • the frame image 38a has color information and texture information.
  • endoscopic examination is strong in detecting flat lesions, differences in surface condition, and the like.
  • endoscopy is not good at finding a lesion on the back side of a ridged structure such as a fold.
  • FIG. 7 is a schematic view showing a blind spot in the observation range of the endoscope.
  • FIG. 7 illustrates a schematic cross section 100 along the path 19 c of the CTC image 19 and a schematic cross section 120 of the endoscopic image 37 corresponding to the cross section 100 of the CTC image 19.
  • the endoscope 10A and the endoscope 10B illustrated using a two-dot chain line represent the endoscope 10 at the observation position which has already been observed.
  • the endoscope 10 illustrated using a solid line represents the endoscope 10 at the observation position during observation. Arrow lines indicate the moving direction of the endoscope 10.
• Since the CTC image 19 has three-dimensional information, virtual colonoscopy is strong in detecting convex shapes such as polyps. In addition, it is also strong in detecting polyps and the like hidden behind the folds. For example, from the CTC image 19, both the polyp 104 located on the back side of the fold 102 and the polyp 106 located on the front side of the fold 102 can be detected. However, in the viewpoint image, the polyp 104 located behind the fold 102 may not be displayed.
• The endoscopic examination allows detection of the polyp 126 located on the front side of the fold 122, but is not good at detecting the polyp 124 located on the back side of the fold 122.
• The polyp 126 on the front side of the fold 122 is located in the observation range of the endoscope 10B or the endoscope 10. Therefore, the endoscope 10B or the endoscope 10 can detect the polyp 126.
  • the polyp 124 on the back of the fold 122 is located at a blind spot in the observation range of the endoscope 10A, the endoscope 10B, and the endoscope 10.
• Therefore, the endoscope 10A, the endoscope 10B, and the endoscope 10 all have difficulty in detecting the polyp 124 on the back side of the fold 122.
  • FIG. 8 is an explanatory view of first feature area extraction.
• FIG. 8 illustrates a viewpoint image 19b1 and a viewpoint image 19b2 at arbitrary viewpoints P in the CTC image 19.
• The concept including the viewpoint image 19b1 and the viewpoint image 19b2 shown in FIG. 8 is the viewpoint image 19b.
• The first feature region 80 is extracted from the CTC image 19 shown in FIG. 8 using the first feature region extraction unit 50 shown in FIG. 3. The first feature region extraction process can also detect the polyp 104 located on the back side of the fold 102 shown in FIG. 7 as a first feature region 80.
  • the process of extracting the first feature area 80 from the CTC image 19 can apply a known feature area extraction technique. The same applies to second feature region extraction described later.
• As an example of a known feature area extraction technique, there is a method in which a feature quantity is calculated for each of a plurality of areas, and an area matching the first condition is specified as an extraction target area according to the feature quantity of each area.
  • the feature amount for each area can be calculated using the pixel value of each pixel included in each area.
• In FIG. 8, a convex polyp is extracted as the first feature region 80.
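• A minimal sketch of the feature-quantity approach described above is shown below, assuming labeled candidate regions and a mean pixel value as the feature quantity. Both assumptions (the labeling step and the choice of statistic) are placeholders; the text only requires that a feature quantity be computed per area and compared against the first condition.

```python
import numpy as np

def extract_feature_regions(volume, label_map, first_condition):
    """Select candidate regions whose feature quantity matches the condition.

    volume    : array of pixel values (2-D or 3-D)
    label_map : integer array of the same shape; one label per candidate
                region, 0 for background (assumed to be precomputed)
    first_condition : predicate applied to each region's feature quantity

    Returns the labels kept as first feature regions 80.
    """
    kept = []
    for label in np.unique(label_map):
        if label == 0:                      # skip background
            continue
        pixels = volume[label_map == label]
        feature_quantity = pixels.mean()    # simplest pixel-value statistic
        if first_condition(feature_quantity):
            kept.append(int(label))
    return kept

# Example: keep regions whose mean intensity exceeds a hypothetical threshold.
labels = extract_feature_regions(
    np.random.rand(8, 8, 8),
    np.random.randint(0, 3, size=(8, 8, 8)),
    first_condition=lambda q: q > 0.5,
)
```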
• The position of the first feature area 80 in the CTC image 19 can be specified using coordinate values in the three-dimensional coordinates set in the CTC image 19.
• When a plurality of first feature regions 80 are extracted, the plurality of first feature regions 80 are associated with the first condition and collectively managed.
  • the first feature area 80 may be classified into a plurality of attributes.
  • a lesion extracted as the first feature region 80 may be classified according to the position, with information on the position of the lesion as the classification condition.
• An example of information on the position of a lesion is information indicating whether the lesion is on the front side or the back side of a fold. That is, the lesion extracted as the first feature region 80 may be classified into a lesion on the front of the fold and a lesion on the back of the fold.
• Two first conditions, one for a lesion on the front of the fold and one for a lesion on the back of the fold, may be applied to extract the two types of first feature regions.
• FIG. 9 is an explanatory diagram of second feature region extraction. FIG. 9 illustrates an arbitrary frame image 38a1 of the endoscopic image 37. The extraction result of the second feature area can be handled as the result of the endoscopy.
• A still image 39 may be used as the frame image 38a1 shown in FIG. 9.
  • the second feature area 70 is extracted from the endoscopic image 37 using the second feature area extraction unit 54 illustrated in FIG. 3.
• In the frame image 38a1 shown in FIG. 9, a convex polyp is extracted as the second feature region 70.
  • the information of the first feature area 80 shown in FIG. 8 is stored in the first feature area storage unit 64 shown in FIG. 4 as the extraction result of the first feature area. Further, the information of the second feature area 70 shown in FIG. 9 is stored in the second feature area storage unit 66 shown in FIG. 4 as the extraction result of the second feature area.
• FIG. 10 is a schematic view showing an example of association of lesions. FIG. 10 shows an example in which a convex polyp in the frame image 38a1 of the endoscopic image 37 is detected as the second feature region 70.
• The viewpoint image 19b1 shown in FIG. 10 is the viewpoint image 19b1 shown in FIG. 8.
• The viewpoint image 19b2 shown in FIG. 10 is the viewpoint image 19b2 shown in FIG. 8.
• The first feature area 80 shown in FIG. 10 is the first feature area 80 shown in FIG. 8.
• The frame image 38a1 shown in FIG. 10 is the frame image 38a1 shown in FIG. 9.
• The second feature area 70 shown in FIG. 10 is the second feature area 70 shown in FIG. 9.
• The associating unit 58 shown in FIG. 3 searches the CTC image 19 for the first feature area 80 corresponding to the second feature area 70. When the first feature area 80 of the CTC image 19 corresponding to the second feature area 70 of the endoscopic image 37 is detected, the first feature area 80 of the CTC image 19 is associated with the second feature area 70 of the endoscopic image 37.
• The association unit 58 shown in FIG. 3 can associate the first feature area 80 of the CTC image 19 with the second feature area 70 of the endoscopic image 37 using information on the position in the CTC image 19 and information on the position in the endoscopic image 37. Information of the image of the CTC image 19 may be applied instead of the information of the position of the CTC image 19, and information of the image of the endoscopic image 37 may be applied instead of the information of the position of the endoscopic image 37.
• The associating unit 58 illustrated in FIG. 3 stores the association result of the first feature region 80 of the CTC image 19 and the second feature region 70 of the endoscopic image 37 illustrated in FIG. 10 in the association result storage unit 68 illustrated in FIG. 4.
  • the concept of the correspondence between the CTC image 19 and the endoscopic image 37 includes the concept of forming a combination of the components of the CTC image 19 and the components of the endoscopic image 37.
• The concept of the correspondence between the CTC image 19 and the endoscopic image 37 may include the concept of searching for and identifying the component of the CTC image 19 corresponding to the component of the endoscopic image 37.
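• One plausible reading of the association step, matching feature regions by position and treating unmatched first feature regions as candidates for non-extraction regions, is sketched below. Representing positions as scalar distances along the lumen is an assumption made to keep the example short.

```python
def associate(first_features, second_features, tolerance):
    """Pair first feature regions 80 with nearby second feature regions 70.

    Both inputs are lists of scalar positions along the lumen (an assumed
    simplification). Matched pairs would be stored in the association
    result storage unit 68; unmatched first features are candidates for
    non-extraction regions 76, i.e. targets of the first notification.
    """
    pairs, unmatched = [], []
    for f in first_features:
        nearest = min(second_features, key=lambda s: abs(s - f), default=None)
        if nearest is not None and abs(nearest - f) <= tolerance:
            pairs.append((f, nearest))
        else:
            unmatched.append(f)
    return pairs, unmatched
```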
• FIG. 11 is a schematic view showing an example of association of folds.
• In the frame image 38a11 shown in FIG. 11, a fold is extracted as the second feature area 72. In the viewpoint image 19b11, a fold is extracted as the first feature area 82.
• In FIG. 11, a viewpoint image 19b12 and a viewpoint image 19b13 at viewpoints P continuous with the viewpoint P of the viewpoint image 19b11 are also illustrated.
• The associating unit 58 illustrated in FIG. 3 associates the first feature area 82 and the second feature area 72 illustrated in FIG. 11.
  • the association unit 58 shown in FIG. 3 stores the association result between the first feature area 82 and the second feature area 72 shown in FIG. 11 in the association result storage unit 68 shown in FIG.
• FIG. 12 is a schematic view showing an example of association of folds using fold numbers.
• Between the CTC image 19 and the endoscopic image 37, the number of folds does not change. Therefore, it is possible to set a reference fold and to associate the CTC image 19 with the endoscopic image 37 using fold numbers.
• In the frame image 38a21 shown in FIG. 12, a fold is extracted as the second feature area 74. In the viewpoint image 19b21, a fold is extracted as the first feature region 84. Folds in the viewpoint image 19b22 and the viewpoint image 19b23 shown in FIG. 12 are also extracted as first feature regions. Note that illustration of the first feature regions of the viewpoint image 19b22 and the viewpoint image 19b23 is omitted.
• n1 attached to the viewpoint image 19b21 is an integer representing a fold number. The same applies to n2 attached to the viewpoint image 19b22, n3 attached to the viewpoint image 19b23, and n1 attached to the frame image 38a21.
• The associating unit 58 shown in FIG. 3 associates the second feature region 74 shown in FIG. 12 with the first feature area 84 whose fold number matches.
• The association unit 58 shown in FIG. 3 stores the association result of the second feature area 74 and the first feature area 84 shown in FIG. 12 in the association result storage unit 68 shown in FIG. 4.
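• The fold-number association can be sketched very compactly: if both images number their folds from the same reference fold, association reduces to joining on the fold number. The dictionary-based representation below is an assumption for illustration.

```python
def associate_by_fold_number(ctc_folds, endo_folds):
    """Associate folds of the CTC image 19 and the endoscopic image 37.

    Each argument maps a fold number n (counted from a common reference
    fold) to that image's extracted feature region, e.g.
    {1: first_feature_84} and {1: second_feature_74}.
    """
    return {n: (ctc_folds[n], endo_folds[n])
            for n in ctc_folds.keys() & endo_folds.keys()}
```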
• FIG. 13 is a schematic view of an endoscopic image and a virtual endoscopic image in the case where no notification is performed.
  • the monitor device 16 shown in FIG. 13 displays the endoscopic image 37 and displays the CTC image 19 corresponding to the endoscopic image 37.
• The viewpoint image 19b31 of the CTC image 19 corresponds to the frame image 38a31 of the endoscopic image 37.
  • the endoscopic image 37 displayed on the monitor device 16 is sequentially updated according to the progress of the endoscopic examination. Further, the CTC image 19 is sequentially updated in accordance with the update of the endoscopic image 37. There may be a delay within the allowable range between the CTC image 19 and the endoscopic image 37.
  • the monitor device 16 does not display the first notification image 140 and the second notification image 142 described later.
  • FIG. 14 is a schematic view of an endoscopic image and a virtual endoscopic image in the case of the first notification.
• The first notification image 140 is displayed on the frame image 38a32 of the endoscopic image 37 shown in FIG. 14.
• The first notification image 140 is displayed when a polyp (not shown) is present on the back side of the fold 150 but the polyp is not shown in the frame image 38a32.
• In the CTC image 19, the polyp on the back side of the fold 160 is extracted as the first feature area 80d.
• Since the viewpoint image 19b32 displays the same field of view as the frame image 38a32, the polyp extracted as the first feature region 80d is not displayed. The broken line representing the first feature area 80d indicates that the first feature region 80d is not displayed in the viewpoint image 19b32.
• The first feature region 80d illustrated using broken lines in FIG. 14 is associated with the non-extraction region 76 that is not extracted as the second feature region 70 from the endoscopic image 37.
  • An example of the non-extraction area 76 is an area located at a blind spot in the observation range of the endoscope 10 in the endoscopic image 37.
• The non-extraction area 76 is an area from which the second feature area 70 should be extracted, but from which the second feature area 70 is not actually extracted because the area is located at a blind spot of the observation range of the endoscope 10.
• As the first notification, the notification unit 59 illustrated in FIG. 3 displays the first notification image 140 in an overlaid manner at the position of the non-extraction area 76 of the endoscopic image 37 displayed on the monitor device 16.
  • the first notification image 140 illustrated in FIG. 14 is an example, and the shape and the like may be arbitrarily defined.
• The first notification image 140 may also be displayed on the frame images 38a before and after the frame image 38a32. That is, the first notification image 140 can be displayed at any timing from the timing at which the non-extraction area 76 enters the field of view of the endoscope 10 to the timing at which the non-extraction area 76 leaves the field of view of the endoscope 10 as the endoscopic examination progresses.
• The fold 150 of the endoscopic image 37 shown in FIG. 14 corresponds to the fold 122 of the cross section 120 shown in FIG. 7. The fold 160 of the CTC image 19 corresponds to the fold 102 of the cross section 100 shown in FIG. 7. The same applies to the folds 150 and 160 shown in FIG. 15.
  • FIG. 15 is a schematic view of an endoscopic image and a virtual endoscopic image in the case of the second notification.
  • the second notification is performed when the second feature region 70 is extracted from the endoscopic image 37.
• In FIG. 15, a polyp is extracted as the second feature region 70, and the second notification image 142 is displayed as the second notification.
• As the second notification, the notification unit 59 illustrated in FIG. 3 displays the second notification image 142 in an overlaid manner at the position of the second feature area 70 of the endoscopic image 37 displayed on the monitor device 16.
• The second feature area 70 of the frame image 38a33 of the endoscopic image 37 shown in FIG. 15 is associated with the first feature area 80e of the viewpoint image 19b33 of the CTC image 19. The first feature area 80e is a polyp on the front side of the fold 160.
• The first notification image 140 shown in FIG. 14 has a notification level changed with respect to the second notification image 142 shown in FIG. 15. Specifically, the notification level of the first notification image 140 shown in FIG. 14 is raised with respect to the second notification image 142 shown in FIG. 15: the first notification image 140 shown in FIG. 14 is enlarged relative to the size of the second notification image 142 shown in FIG. 15. The details of the difference between the notification levels of the first notification and the second notification will be described later.
• The first feature region 80d extracted as a polyp on the back side of the fold 160 and the first feature region 80e extracted as a polyp on the front side of the fold 160 shown in the present embodiment may be extracted in advance from the CTC image 19 by applying first conditions that combine the polyp condition with a front-side-of-fold condition or a back-side-of-fold condition.
  • FIG. 16 is a flowchart showing the procedure of the notification method.
  • a CTC image input process S10 is performed.
• In the CTC image input step S10, the CTC image 19 is input using the CTC image acquisition unit 41a shown in FIG. 2.
  • the CTC image 19 is stored in the image storage unit 48.
  • the CTC image input process S10 shown in FIG. 16 is an example of a first image input process.
  • the process proceeds to a first feature area extraction process S12.
• In the first feature area extraction step S12, the first feature region is extracted from the CTC image 19 using the first feature region extraction unit 50 shown in FIG. 3.
• The information of the first feature area is stored in the first feature area storage unit 64 shown in FIG. 4.
  • the process proceeds to an endoscopic image input step S14.
• In the endoscopic image input step S14, the endoscopic image 37 is input using the endoscopic image acquisition unit 41b shown in FIG. 2.
  • the endoscopic image input process S14 shown in FIG. 16 is an example of a second image input process.
  • the process proceeds to a second feature area extraction process S16.
• In the second feature area extraction step S16, the second feature area is extracted from the endoscopic image 37 using the second feature area extraction unit 54 shown in FIG. 3.
• The endoscopic image input step S14 and the second feature area extraction step S16 shown in FIG. 16 can be grasped as an endoscopic examination. That is, the endoscopic image acquisition unit 41b illustrated in FIG. 2 sequentially inputs the moving image 38 captured using the endoscope 10, and the moving image 38 is displayed as the endoscopic image 37 on the monitor device 16.
  • the second feature region extraction unit 54 illustrated in FIG. 3 automatically extracts a lesion as the second feature region 70 from the endoscopic image 37.
  • the second feature area extraction unit 54 may extract a lesion as the second feature area 70 from the endoscopic image 37 based on the extraction information input by the user using the operation device 15.
• The first feature area extraction unit 50 shown in FIG. 3 executes the extraction of the first feature area 80 from the CTC image 19 in parallel with the acquisition of the endoscopic image 37 and the extraction of the second feature area 70.
  • the first feature region 80 may be extracted in advance and stored.
• In the association step S18, the CTC image 19 and the endoscopic image 37 are associated using the associating unit 58 shown in FIG. 3. That is, the associating unit 58 associates the first feature region 80 with the second feature region 70, or associates the first feature region 80 with the non-extraction region 76.
• The result of the association in the association step S18 shown in FIG. 16 is stored in the association result storage unit 68 shown in FIG. 4.
• Next, the determination step S20 is performed. In the determination step S20, it is determined using the notification unit 59 shown in FIG. 3 whether to execute the first notification described using FIG. 14 or the second notification shown in FIG. 15.
• When the first feature area 80 is associated with the non-extraction area 76 of the endoscopic image 37 in the determination step S20 shown in FIG. 16, the determination is Yes. In the case of a Yes determination, the process proceeds to the first notification step S22; otherwise, the process proceeds to the second notification step S24.
• In the determination step S20, it may be determined whether a lesion such as a polyp extracted from the CTC image 19 as the first feature region 80 is located at a blind spot in the observation range of the endoscope 10 or within the observation range of the endoscope 10.
• The notification unit 59 can execute the first notification when a lesion such as a polyp extracted from the CTC image 19 as the first feature region 80 is located at a blind spot in the observation range of the endoscope 10, and can execute the second notification when the lesion is located within the observation range of the endoscope 10.
• In the first notification step S22, the first notification described using FIG. 14 is executed using the notification unit 59 shown in FIG. 3. After the first notification step S22 shown in FIG. 16, the process proceeds to the examination end determination step S26.
• In the second notification step S24, the second notification described using FIG. 15 is executed using the notification unit 59 shown in FIG. 3. After the second notification step S24 shown in FIG. 16, the process proceeds to the examination end determination step S26.
• In the examination end determination step S26, it is determined using the medical image processing apparatus 14 shown in FIG. 3 whether or not the endoscopy is completed. If the medical image processing apparatus 14 determines in the examination end determination step S26 that the endoscopy is completed, the determination is Yes. If the determination is Yes, the medical image processing apparatus 14 ends the notification method.
• In the examination end determination step S26, if the medical image processing apparatus 14 determines that the endoscopic examination is to be continued, the determination is No. If the determination is No, the medical image processing apparatus 14 continues the notification method. That is, in the case of a No determination in the examination end determination step S26, the process proceeds to the endoscopic image input step S14. Thereafter, the steps from the endoscopic image input step S14 to the examination end determination step S26 are executed until the determination in the examination end determination step S26 becomes Yes.
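• The flow of FIG. 16 can be summarized as a processing loop. In the following Python sketch, every callable passed in (extraction, association, the field-of-view test, and the two notification actions) is an assumed interface standing in for the units described above, not a disclosed API.

```python
def notification_method(first_features, frames, extract_second, associate,
                        in_field_of_view, notify_first, notify_second,
                        tolerance=5.0):
    """Steps S14-S26 of FIG. 16 as a loop (S10/S12 produce first_features)."""
    for frame in frames:                                       # S14; loop ends at S26
        second_features = extract_second(frame)                # S16
        pairs, unmatched = associate(first_features,
                                     second_features, tolerance)  # S18
        for feature in unmatched:                              # S20: Yes branch
            if in_field_of_view(feature, frame):
                notify_first(feature)                          # S22: first notification
        for feature, _ in pairs:                               # S20: No branch
            notify_second(feature)                             # S24: second notification
```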
  • a lesion such as a polyp is extracted as the first feature region 80 from the CTC image 19.
  • the CTC image 19 and the endoscopic image 37 are associated with each other.
• When the first feature region 80 is associated with the non-extraction region 76 of the endoscopic image 37, the first notification is performed. Due to the first notification, the user can recognize the presence of a lesion such as a polyp that is not extracted from the endoscopic image 37 because it is located, for example, at a blind spot in the observation range of the endoscope 10. Thereby, in endoscopy using the endoscope 10, it is possible to suppress the oversight of a lesion such as a polyp at a position that is a blind spot in the observation range of the endoscope 10.
• By pushing aside a fold or the like that obstructs the observation range of the endoscope 10, it becomes possible to observe the blind spot of the observation range of the endoscope 10.
• When the first feature region 80 is associated with the second feature region 70 located in the observation range of the endoscope 10, the second notification is performed.
  • the first notification changes the notification level with respect to the second notification and raises the notification level.
  • the first notification image and the second notification image are overlaid on the endoscopic image 37.
  • the first notification image 140 or the second notification image 142 can be superimposed and displayed on the endoscopic image 37 without processing the endoscopic image 37 itself.
  • FIG. 17 is an explanatory diagram of notification according to the first modification.
  • the density of the first notification image 144 shown in FIG. 17 is changed with respect to the second notification image 146.
• Specifically, a darker density is applied to the first notification image 144 than to the second notification image 146.
  • the colors of the first notification image 144 and the second notification image 146 may be changed. For example, black is used for the first notification image 144, and yellow is used for the second notification image 146. That is, a color with high visibility in the endoscopic image 37 is applied to the first notification image 144 as compared to the second notification image 146.
  • At least one of the density and the color of the first notification image 144 and the second notification image 146 is changed. Thereby, the visibility of the first notification image 144 with respect to the second notification image 146 can be enhanced.
  • FIG. 18 is an explanatory diagram of notification according to the second modified example.
  • the first notification image 147 shown in FIG. 18 is displayed blinking.
  • the second notification image 148 is lit and displayed.
  • the lighting display can be grasped as a normal display.
  • the first notification image 147 is blinked and the second notification image 148 is lit and displayed. Thereby, the visibility of the first notification image 147 with respect to the second notification image 148 can be increased.
  • FIG. 19 is an explanatory diagram of notification according to the third modification.
  • the first notification image 147A shown in FIG. 19 is displayed blinking.
  • the second notification image 147B is also blinked and displayed.
  • the blinking cycle of the first notification image 147A is shorter than that of the second notification image 147B.
• In FIG. 19, the first notification image 147A and the second notification image 147B are both displayed blinking, and the blinking cycle of the first notification image 147A is made shorter than that of the second notification image 147B. Thereby, the visibility of the first notification image 147A with respect to the second notification image 147B can be increased.
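• The first to third modifications all reduce to giving the two notification images different display attributes. A hypothetical attribute table, with concrete values chosen only for illustration, could look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NotificationStyle:
    scale: float                     # relative marker size
    color: str                       # display color
    density: float                   # 0.0 (faint) .. 1.0 (dark)
    blink_period_s: Optional[float]  # None means continuously lit

# Second notification: the baseline style (smaller, yellow, continuously lit).
SECOND_STYLE = NotificationStyle(scale=1.0, color="yellow", density=0.5,
                                 blink_period_s=None)
# First notification: raised notification level -- enlarged, darker, a more
# visible color, and (third modification) a shorter blinking cycle.
FIRST_STYLE = NotificationStyle(scale=1.5, color="black", density=0.9,
                                blink_period_s=0.5)
```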
  • the first modification described above can be combined with the second modification or the third modification as appropriate.
• The first notification image 140 may be emphasized, for example by being enlarged continuously or in stages, as the non-extraction area 76 of the endoscopic image 37 approaches the observation range of the endoscope 10.
  • the first notification image 140 and the second notification image 142 may be displayed on the CTC image 19.
• The first notification image 140 and the second notification image 142 displayed on the CTC image 19 can be displayed in the same manner as the first notification image 140 and the second notification image 142 displayed on the endoscopic image 37.
  • FIG. 20 is an explanatory diagram of another display example of the first feature area.
  • FIG. 20 shows an example of displaying the first feature area 80 in the CTC image 19.
  • the CTC image 19 shown in FIG. 20 corresponds to the entire image 19a shown in FIG.
• The path 19c illustrated using a thin line represents the path 19c in the area where the endoscope 10 has already finished observation. Further, the path 19c illustrated using a thick line represents the path 19c of the area to be observed by the endoscope 10 from now on.
  • the first feature area 80a represents the first feature area 80 that the endoscope 10 has already observed.
  • the first feature area 80 b represents a first feature area 80 to be observed next by the endoscope 10.
  • the first feature area 80b that the endoscope 10 observes next is highlighted.
• Enlargement, color change, blinking, and the like can be applied as the highlighting.
  • the first feature area 80c represents a first feature area 80 that the endoscope 10 observes next to the first feature area 80b. After the endoscope 10 observes the first feature area 80b, the first feature area 80c is highlighted.
• The monitor device 16 may display the CTC image 19 shown in FIG. 20 instead of the CTC image 19 shown in FIG. 13. Thereby, the position of the first feature area 80 existing near the position of the endoscope 10 can be grasped.
• Combination with the first notification image 140 or the like shown in FIG. 14 contributes to the detection of a lesion such as a polyp that is not detected as the second feature region 70 from the endoscopic image 37.
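• Selecting which first feature area 80 to highlight next can be sketched as picking the nearest unobserved feature ahead of the endoscope along the path 19c. Representing positions as arc lengths along the path is an assumption of this sketch.

```python
def next_feature_to_highlight(feature_positions, endoscope_position):
    """Return the position of the first feature area 80 to be observed next.

    feature_positions are arc-length positions of first feature areas 80
    along the path 19c. Positions behind the endoscope correspond to
    already-observed areas (80a) and are skipped; the nearest position
    ahead corresponds to the area to highlight (80b).
    """
    ahead = [p for p in feature_positions if p > endoscope_position]
    return min(ahead, default=None)
```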
  • FIG. 21 is a functional block diagram showing functions of a medical image processing apparatus for realizing notification according to another embodiment.
• In the embodiment shown in FIG. 21, notification using a notification sound is performed.
• A notification sound control unit 200 and a sound source 202 are added to the medical image processing apparatus 14 shown in FIG. 2. Further, a speaker 204 is added to the endoscope system 9.
  • the notification sound control unit 200 outputs a notification sound generated using the sound source 202 via the speaker 204.
  • the notification sound may be voice.
• A warning sound may be applied as the notification sound.
• When the first feature region 80 of the CTC image 19 is associated with the second feature region 70 of the endoscopic image 37, for example, when the region is located in the observation range of the endoscope 10, the notification sound control unit 200 outputs the notification sound. When the first feature region 80 of the CTC image 19 is associated with the non-extraction region 76 of the endoscopic image 37, for example, when the region is located at a blind spot in the observation range of the endoscope 10, the notification sound may be emphasized. As an example of emphasizing the notification sound, the volume may be raised.
  • the notification sound control unit 200, the sound source 202, and the speaker 204 are examples of components of the notification sound output unit.
• When the first feature region 80 of the CTC image 19 is associated with the non-extraction region 76 of the endoscopic image 37, notification using sound is performed. Thereby, notification can be performed without processing the endoscopic image 37 itself.
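• A sketch of the notification sound control is shown below. The `play` callable stands in for the sound source 202 and speaker 204 pipeline, which is hardware-dependent and not specified; the volume values are illustrative.

```python
class NotificationSoundControl:
    """Minimal sketch of the notification sound control unit 200."""

    def __init__(self, play):
        # `play` is an assumed interface to the sound source 202 / speaker 204.
        self.play = play

    def notify(self, in_blind_spot: bool) -> None:
        # Emphasize the sound (here: raise the volume) when the first feature
        # region 80 is associated with a non-extraction region 76, i.e. the
        # lesion is at a blind spot in the observation range of the endoscope.
        volume = 1.0 if in_blind_spot else 0.5
        self.play(frequency_hz=880, duration_s=0.3, volume=volume)
```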
  • the medical image processing apparatus 14 illustrated in FIG. 2 may include a CTC image generation unit that generates a CTC image 19 from a three-dimensional inspection image such as a CT image.
  • the medical image processing apparatus 14 may acquire a three-dimensional inspection image via the CTC image acquisition unit 41a, and generate the CTC image 19 using the CTC image generation unit.
• The viewpoint P shown in FIG. 5 is not limited to a position on the path 19c.
  • the viewpoint P can be set at an arbitrary position.
  • the viewing direction of the viewpoint image 19 b can be arbitrarily set corresponding to the imaging direction of the endoscope 10.
  • the viewpoint image 19 b may be a two-dimensional inspection image obtained by converting a three-dimensional inspection image of an arbitrary cross section of the entire image 19 a into a two-dimensional image.
• Extraction of the first feature region 80 may use the three-dimensional inspection image used to generate the CTC image 19.
  • the first feature region 80 may be extracted and stored in advance.
  • the pre-extracted first feature area 80 may be searchably stored using information on the position of the first feature area 80 as an index.
  • the extraction of the second feature area may be performed by reproducing the moving image 38.
  • a first example of a particular wavelength band is the blue or green band in the visible range.
• The wavelength band of the first example includes a wavelength band of 390 nanometers or more and 450 nanometers or less, or of 530 nanometers or more and 550 nanometers or less, and the light of the first example has a peak wavelength within the wavelength band of 390 nanometers or more and 450 nanometers or less, or of 530 nanometers or more and 550 nanometers or less.
  • a second example of a particular wavelength band is the red band in the visible range.
• The wavelength band of the second example includes a wavelength band of 585 nanometers or more and 615 nanometers or less, or of 610 nanometers or more and 730 nanometers or less, and the light of the second example has a peak wavelength within the wavelength band of 585 nanometers or more and 615 nanometers or less, or of 610 nanometers or more and 730 nanometers or less.
  • the third example of the specific wavelength band includes wavelength bands in which the absorption coefficient is different between oxygenated hemoglobin and reduced hemoglobin, and the light of the third example has peak wavelengths in wavelength bands where the absorption coefficient is different between oxygenated hemoglobin and reduced hemoglobin.
• The wavelength band of the third example includes wavelength bands of 400 ± 10 nanometers, 440 ± 10 nanometers, 470 ± 10 nanometers, or of 600 nanometers or more and 750 nanometers or less, and the light of the third example has a peak wavelength in a wavelength band of 400 ± 10 nanometers, 440 ± 10 nanometers, 470 ± 10 nanometers, or of 600 nanometers or more and 750 nanometers or less.
  • a fourth example of the specific wavelength band is a wavelength band of excitation light which is used to observe fluorescence emitted from a fluorescent substance in the living body and which excites the fluorescent substance.
  • it is a wavelength band of 390 nanometers or more and 470 nanometers or less.
  • observation of fluorescence may be called fluorescence observation.
  • the fifth example of the specific wavelength band is a wavelength band of infrared light.
• The wavelength band of the fifth example includes a wavelength band of 790 nanometers or more and 820 nanometers or less, or of 905 nanometers or more and 970 nanometers or less, and the light of the fifth example has a peak wavelength in a wavelength band of 790 nanometers or more and 820 nanometers or less, or of 905 nanometers or more and 970 nanometers or less.
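• The five specific wavelength bands listed above can be collected into a single configuration table. The numeric intervals below are taken directly from the text; the dictionary layout itself is only an illustrative encoding.

```python
# (low_nm, high_nm) intervals for each example of the specific wavelength band.
SPECIFIC_WAVELENGTH_BANDS = {
    "blue_or_green":            [(390, 450), (530, 550)],                # first example
    "red":                      [(585, 615), (610, 730)],                # second example
    "hemoglobin_absorption":    [(390, 410), (430, 450), (460, 480),
                                 (600, 750)],                            # third example
    "fluorescence_excitation":  [(390, 470)],                            # fourth example
    "infrared":                 [(790, 820), (905, 970)],                # fifth example
}
```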
  • the processor 12 may generate a special light image having information of a specific wavelength band based on a normal light image obtained by imaging using white light. Note that the generation referred to here includes acquisition. In this case, the processor 12 functions as a special light image acquisition unit. Then, the processor 12 obtains a signal of a specific wavelength band by performing an operation based on the color information of red, green and blue or cyan, magenta and yellow contained in the normal light image.
  • red, green and blue may be represented as RGB (Red, Green, Blue).
  • cyan, magenta and yellow may be expressed as CMY (Cyan, Magenta, Yellow).
  • the processor 12 may generate a feature image such as a known oxygen saturation image based on at least one of the normal light image and the special light image.
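• The operation by which the processor 12 derives a specific-wavelength-band signal from the color information of a normal light image is not detailed; the linear weighting below is a placeholder for that unspecified computation, with weights chosen arbitrarily.

```python
import numpy as np

def special_light_signal(normal_light_rgb, weights=(0.1, 0.7, 0.2)):
    """Estimate a specific-wavelength-band signal from a white-light image.

    normal_light_rgb is an (H, W, 3) array. The per-channel weights are a
    stand-in for the operation on R, G, B (or C, M, Y) color information
    described in the text.
    """
    rgb = normal_light_rgb.astype(np.float32)
    w = np.asarray(weights, dtype=np.float32)
    return (rgb * w).sum(axis=-1)  # one scalar signal per pixel
```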
• The second feature region extraction unit 54 illustrated in FIG. 3 can perform machine learning using, as learning data, the correspondence between the first feature region 80 of the CTC image 19 and the non-extraction region 76 of the endoscopic image 37, and can thereby update the extraction rule for the second feature region. For the machine learning, the deep learning algorithm 65 shown in FIG. 2 is applied.
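• A rough sketch of such an update step, assuming a PyTorch-style model and optimizer (an assumption; the deep learning algorithm 65 is not specified), is shown below. Each training pair couples an endoscopic image patch from a non-extraction region 76 with the label derived from the associated first feature region 80.

```python
import torch
import torch.nn.functional as F

def update_extraction_rule(model, missed_examples, optimizer):
    """Fine-tune the second-feature extraction model on missed lesions.

    missed_examples yields (patch, label) tensor pairs built from the
    correspondence between first feature regions 80 and non-extraction
    regions 76. The mean-squared-error loss is a placeholder.
    """
    model.train()
    for patch, label in missed_examples:
        prediction = model(patch)
        loss = F.mse_loss(prediction, label)  # placeholder supervised loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```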
• The image processing method described above can be configured as a program that causes a computer to implement functions corresponding to the respective steps of the image processing method.
  • a program that causes a computer to realize a CTC image input function, a first feature area extraction function, an endoscope image input function, a second feature area extraction function, an association function, and a storage function can be configured.
  • the CTC image input function corresponds to the first image input function.
  • the endoscope image input function corresponds to the second image input function.

Abstract

Provided are an endoscope system, a reporting method, and a program capable of preventing, in endoscopy using an endoscope, oversight of lesions that may be difficult to detect. An endoscope system according to the present invention comprises: a first image input unit (41a) for inputting a virtual endoscopic image; a second image input unit (41b) for inputting a real endoscopic image; a correlating unit (58) for correlating the virtual endoscopic image to the real endoscopic image; a first characteristic area-extracting unit (50) for extracting a first characteristic area that satisfies a prescribed first condition from the virtual endoscopic image; a second characteristic area-extracting unit (54) for extracting a second characteristic area that satisfies a second condition corresponding to the first condition from the real endoscopic image; and a reporting unit (59) for reporting that the first characteristic area is not correlated to the second characteristic area.

Description

Endoscope system, notification method, and program
The present invention relates to an endoscope system, a notification method, and a program, and more particularly to display of a virtual endoscopic image.
In recent years, a technique for observing or treating a tubular structure such as a large intestine of a patient using an endoscope has attracted attention. The endoscopic image is an image captured using an imaging device such as a CCD (Charge Coupled Device). The endoscopic image is an image in which the color and texture of the inside of the tubular structure are clearly expressed. On the other hand, an endoscopic image is a two-dimensional image representing the inside of a tubular structure. For this reason, it is difficult to grasp which position in the tubular structure the endoscopic image represents.
Therefore, methods have been proposed for generating, from a three-dimensional inspection image acquired by tomographic imaging using a modality such as a CT apparatus or an MRI apparatus, a virtual endoscopic image similar to an image actually captured using an endoscope.
The virtual endoscopic image may be used as a navigation image to guide the endoscope to a target position in the tubular structure. CT is an abbreviation of Computed Tomography. MRI is an abbreviation of Magnetic Resonance Imaging.
Therefore, a method has been proposed in which the image of the tubular structure is extracted from the three-dimensional inspection image, the image of the tubular structure is associated with a real endoscopic image, which is an actual endoscopic image acquired by imaging using the endoscope, and a virtual endoscopic image at the current position of the endoscope is generated from the three-dimensional inspection image of the tubular structure and displayed.
Patent Document 1 describes an endoscopic observation support device that detects a lesion such as a polyp from three-dimensional image data and reports that the observation position of the endoscope has reached the vicinity of the lesion.
Patent Document 2 describes a medical image processing apparatus that extracts volume data representing a region to be imaged, identifies the position and direction of the tip of an endoscope probe in the volume data, and generates and displays a virtual endoscopic image in an arbitrary field of view.
The medical image processing apparatus described in Patent Document 2 specifies the shape and position of a tumor candidate based on the volume data, and displays a marker superimposed on the position of the tumor candidate. This enables the operator to recognize the presence or absence of a tumor candidate using the marker.
Patent Document 3 describes a medical image display apparatus that detects a blind spot area in a developed image of a luminal organ in a subject and notifies the operator of the presence or absence of the blind spot area. The medical image display apparatus described in Patent Document 3 displays, when a blind spot area is present, character information indicating that the blind spot area is present. Patent Document 3 also describes another display mode in which, when a blind spot area is present, the position of the blind spot area is colored and displayed using a marker.
Patent Document 4 describes an endoscope system that generates a virtual endoscopic image based on volume data and information on the relative position and orientation of an endoscope in a three-dimensional body region, and matches the composition of a real color endoscopic image acquired using the endoscope with the composition of the virtual endoscopic image.
The endoscope system described in Patent Document 4 detects a characteristic shape from the virtual endoscopic image. Next, the pixel values of the area of the color endoscopic image corresponding to the characteristic shape of the virtual endoscopic image are changed. This realizes a display form distinguishable from other areas.
Patent Document 5 describes a medical image processing apparatus that acquires volume data from a CT apparatus and generates and displays a three-dimensional image from the acquired volume data. The medical image processing apparatus described in Patent Document 5 receives an input operation for marking a feature portion of the three-dimensional image displayed on a display unit, and the mark is displayed on the display unit. Patent Document 5 describes that a feature portion can be set automatically using image analysis.
Patent Document 6 describes an endoscope apparatus including an endoscope for observing the inside of a subject and a monitor for displaying an endoscopic image acquired using the endoscope. The endoscope apparatus described in Patent Document 6 acquires an image corresponding to a subject image captured using the endoscope, and executes processing for detecting a lesion site in the acquired image every time an image is acquired.
Patent Document 1: JP 2014-230612 A
Patent Document 2: JP 2011-139797 A
Patent Document 3: WO 2010/074058
Patent Document 4: JP 2006-61274 A
Patent Document 5: JP 2016-143194 A
Patent Document 6: JP 2008-301968 A
However, in the observation range of the endoscope, there are blind spots such as the back side of folds present on the inner wall of a body cavity. Endoscopic examination using an endoscope may overlook a lesion located at a blind spot in the observation range of the endoscope. Therefore, measures are required to prevent the oversight of lesions located at blind spots in the observation range of the endoscope.
Although the invention described in Patent Document 1 reports that the observation position of the endoscope has reached the vicinity of a lesion, no measure is taken for a lesion located at a blind spot in the observation range of the endoscope. Therefore, the invention described in Patent Document 1 may overlook a lesion located at a blind spot in the observation range of the endoscope.
Although the invention described in Patent Document 2 displays a marker superimposed on the position of a tumor candidate, no measure is taken for the case where the position of the tumor candidate is at a blind spot in the observation range of the endoscope. Therefore, the invention described in Patent Document 2 may overlook a lesion located at a blind spot in the observation range of the endoscope.
The invention described in Patent Document 3 notifies the operator of the presence or absence of a blind spot area in a developed image regardless of the presence or absence of a lesion, and does not notify of the presence or absence of a lesion in the blind spot area of the developed image. Therefore, the invention described in Patent Document 3 may overlook a lesion located at a blind spot in the observation range of the endoscope.
The invention described in Patent Document 4 changes the pixel values of a characteristic area in a virtual endoscopic image to enable distinction between the characteristic area and other areas, and is not applied to the discovery of a lesion or the like located at a blind spot in the observation range of an endoscope using an endoscopic image.
Although the invention described in Patent Document 5 can automatically set a feature portion, Patent Document 5 has no description of the case where the feature region is at a blind spot in the observation range of the endoscope. Therefore, the invention described in Patent Document 5 may overlook a lesion located at a blind spot in the observation range of the endoscope.
Although the invention described in Patent Document 6 can detect a lesion site in units of the frame images constituting an endoscopic image, Patent Document 6 has no description of the case where the lesion site is at a blind spot in the observation range of the endoscope. Therefore, the invention described in Patent Document 6 may overlook a lesion located at a blind spot in the observation range of the endoscope.
That is, the inventions described in Patent Documents 1 to 6 have the problem of overlooking lesions that are difficult to detect in endoscopy, such as overlooking a lesion located at a blind spot in the observation range of the endoscope, and this problem needs to be addressed.
The present invention has been made in view of such circumstances, and an object of the present invention is to provide an endoscope system, a notification method, and a program capable of suppressing the oversight of lesions and the like that are difficult to detect in endoscopy using an endoscope.
The following aspects of the invention are provided in order to achieve the above object.
An endoscope system according to a first aspect comprises: a first image input unit that inputs a virtual endoscopic image generated from a three-dimensional image of a subject; a second image input unit that inputs a real endoscopic image obtained by imaging an observation target of the subject using an endoscope; an association unit that associates the virtual endoscopic image with the real endoscopic image; a first feature region extraction unit that extracts, from the virtual endoscopic image, a first feature region matching a prescribed first condition; a second feature region extraction unit that extracts, from the real endoscopic image, a second feature region matching a second condition corresponding to the first condition; and a notification unit that performs notification when the first feature region is not associated with the second feature region.
According to the first aspect, the first feature region is extracted from the virtual endoscopic image. The virtual endoscopic image is associated with the real endoscopic image. Notification is performed when the first feature region is not associated with the second feature region. Due to the notification, the user can recognize that the first feature region is not associated with the second feature region. Thereby, it is possible to suppress the oversight of lesions and the like that are difficult to detect in endoscopy and that are grasped as regions that should be extracted as second feature regions in endoscopy using the endoscope.
The first image input unit may input a virtual endoscopic image generated in advance, or may acquire a three-dimensional inspection image, generate a virtual endoscopic image from the acquired three-dimensional inspection image, and input the generated virtual endoscopic image.
As an example of the three-dimensional inspection image, a three-dimensional inspection image obtained by tomographic imaging of a subject using a CT apparatus can be mentioned. As an example of the virtual endoscope, a virtual colonoscope in which the large intestine is the subject can be mentioned.
An aspect including a first condition setting unit that sets the first condition applied to the extraction of the first feature region is preferable. Moreover, an aspect including a second condition setting unit that sets the second condition applied to the extraction of the second feature region is preferable.
 第2態様は、第1態様の内視鏡システムにおいて、報知部は、第1特徴領域が第2特徴領域と対応付けされた場合に、第1特徴領域が内視鏡の観察範囲に位置する第2特徴領域に対応付けされたことを報知し、第1特徴領域が第2特徴領域と対応付けされていない場合は、第1特徴領域が第2特徴領域と対応付けされた場合における報知方法、及び報知レベルと比較して、報知方法、及び報知レベルの少なくともいずれかを変更する構成としてもよい。 A 2nd aspect is an endoscope system of a 1st aspect, Comprising: A 1st characteristic area is located in the observation range of an endoscope, when a 1st characteristic area is matched with a 2nd characteristic area. Informing that the first feature area is associated with the second feature area, and notifying that the first feature area is associated with the second feature area when the first feature area is not associated with the second feature area And at least one of the notification method and the notification level may be changed in comparison with the notification level.
 第2態様によれば、第1特徴領域が第2特徴領域と対応付けされた場合と、第1特徴領域が第2特徴領域と対応付けされていない場合とを区別して報知し得る。 According to the second aspect, notification can be made by distinguishing the case where the first feature area is associated with the second feature area and the case where the first feature area is not associated with the second feature area.
 第3態様は、第2態様の内視鏡システムにおいて、実内視鏡画像を表示する表示部を備え、報知部は、第1特徴領域が第2特徴領域と対応付けされていないことを報知する第1報知画像、及び第1特徴領域が内視鏡の観察範囲に位置する第2特徴領域に対応付けされたことを報知する第2報知画像を表示部に表示し、且つ第2報知画像より第1報知画像を拡大して表示する構成としてもよい。 A 3rd aspect is provided with the display part which displays a real endoscopic image in the endoscope system of a 2nd aspect, and the alerting | reporting part alert | reports that the 1st feature area is not matched with the 2nd feature area. The first notification image to be displayed and a second notification image notifying that the first feature region is associated with the second feature region located in the observation range of the endoscope are displayed on the display unit, and the second notification image Further, the first notification image may be enlarged and displayed.
 第3態様によれば、第1特徴領域が第2特徴領域に対応付けされている場合に対して、第1特徴領域が第2特徴領域に対応付けされていない場合が強調される。 According to the third aspect, in contrast to the case where the first feature region is associated with the second feature region, the case where the first feature region is not associated with the second feature region is emphasized.
 第4態様は、第2態様の内視鏡システムにおいて、実内視鏡画像を表示する表示部を備え、報知部は、第1特徴領域が第2特徴領域と対応付けされていないことを報知する第1報知画像、及び第1特徴領域が第2特徴領域と対応付けされたことを報知する第2報知画像を表示部に表示し、且つ第1報知画像は第2報知画像と色を変更する構成としてもよい。 A 4th aspect is the endoscope system of a 2nd aspect. WHEREIN: The display part which displays an actual endoscopic image is provided, and the alerting | reporting part alert | reports that the 1st feature area is not matched with the 2nd feature area. The first notification image to be displayed and the second notification image notifying that the first feature region is associated with the second feature region are displayed on the display unit, and the first notification image changes color from the second notification image It may be configured to
 第4態様によれば、第1特徴領域が第2特徴領域に対応付けされている場合に対して、第1特徴領域が第2特徴領域に対応付けされていない場合が強調される。 According to the fourth aspect, in contrast to the case where the first feature region is associated with the second feature region, the case where the first feature region is not associated with the second feature region is emphasized.
According to a fifth aspect, the endoscope system of the second aspect may include a display unit that displays the real endoscopic image, and the notification unit may display on the display unit a first notification image indicating that the first feature region is not associated with a second feature region and a second notification image indicating that the first feature region has been associated with a second feature region, with the first notification image blinking while the second notification image is lit steadily.

According to the fifth aspect, the case where the first feature region is not associated with a second feature region is emphasized relative to the case where it is.
According to a sixth aspect, the endoscope system of the second aspect may include a display unit that displays the real endoscopic image, and the notification unit may blink on the display unit both a first notification image indicating that the first feature region is not associated with a second feature region and a second notification image indicating that the first feature region has been associated with a second feature region, with the blinking period of the first notification image shorter than that of the second notification image.

According to the sixth aspect, the case where the first feature region is not associated with a second feature region is emphasized relative to the case where it is.
According to a seventh aspect, in the endoscope system of any one of the third to sixth aspects, the display unit may superimpose the first notification image and the second notification image, which are generated separately from the real endoscopic image, on the real endoscopic image.

According to the seventh aspect, highlighting can be applied to the real endoscopic image without processing the real endoscopic image itself.
According to an eighth aspect, in the endoscope system of any one of the third to seventh aspects, the display unit may display the virtual endoscopic image and indicate the position of the endoscope within the virtual endoscopic image.

According to the eighth aspect, the operator of the endoscope can recognize the position in the virtual endoscopic image that corresponds to the observation position in the real endoscopic image.
According to a ninth aspect, in the endoscope system of any one of the third to seventh aspects, the display unit may display the virtual endoscopic image and display information on the first feature region.

According to the ninth aspect, the operator of the endoscope can recognize the first feature region in the virtual endoscopic image.
According to a tenth aspect, in the endoscope system of the ninth aspect, the display unit may display the first feature region in an enlarged manner.

According to the tenth aspect, the first feature region in the virtual endoscopic image becomes easier to see.
According to an eleventh aspect, in the endoscope system of the ninth aspect, the display unit may display the first feature region in a blinking manner.

According to the eleventh aspect, the first feature region in the virtual endoscopic image becomes easier to see.
According to a twelfth aspect, the endoscope system of any one of the first to eleventh aspects may include a notification sound output unit that outputs a notification sound, and the notification unit may use the notification sound output unit to output a first notification sound indicating that the first feature region is not associated with a second feature region.

According to the twelfth aspect, the first notification sound is output when the first feature region is not associated with a second feature region. This helps prevent a region that should be extracted as a second feature region from being overlooked.

Moreover, since no processing of the real endoscopic image is required, the notification is unlikely to affect the real endoscopic image.
According to a thirteenth aspect, in the endoscope system of the twelfth aspect, the notification unit may use the notification sound output unit to output a second notification sound that indicates that the first feature region has been associated with a second feature region and that differs from the first notification sound.

According to the thirteenth aspect, the case where the first feature region is not associated with a second feature region and the case where it is can be reported so that the two are distinguishable.
According to a fourteenth aspect, in the endoscope system of the thirteenth aspect, the notification unit may make the volume of the first notification sound larger than that of the second notification sound.

According to the fourteenth aspect, the case where the first feature region is not associated with a second feature region is emphasized relative to the case where it is.
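As a minimal illustration of the twelfth to fourteenth aspects, the following Python sketch describes each notification sound by a tone and a volume and makes the first notification sound louder than the second; the function name and the numeric values are assumptions of this sketch only.

    # Hypothetical sketch: two distinguishable notification sounds, with the
    # first (unmatched case) louder than the second (matched case).
    def notification_sound(is_associated: bool) -> dict:
        if is_associated:
            return {"tone_hz": 440, "volume": 0.3}  # second notification sound
        return {"tone_hz": 880, "volume": 0.9}      # first notification sound, louder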
According to a fifteenth aspect, in the endoscope system of any one of the second to fourteenth aspects, the notification unit may change the notification level as the distance between the region of the real endoscopic image associated with the first feature region and the observation position of the real endoscopic image becomes shorter.

According to the fifteenth aspect, it can be recognized that the region of the real endoscopic image associated with the first feature region is approaching the observation region of the endoscope.

In the fifteenth aspect, the notification level is preferably raised when it is changed. The notification level may be raised either continuously or stepwise.
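The two variants can be pictured as follows in Python; the function names, the millimeter unit, and the thresholds are assumptions made for this sketch, not values taken from the disclosure.

    # Hypothetical sketch: raising the notification level as the observation
    # position approaches the region associated with the first feature region.
    def continuous_level(distance_mm: float, max_level: float = 5.0) -> float:
        # The level grows smoothly as the distance shrinks.
        return max_level / (1.0 + distance_mm)

    def stepwise_level(distance_mm: float) -> int:
        # The level grows in discrete steps.
        if distance_mm < 10.0:
            return 3
        if distance_mm < 50.0:
            return 2
        return 1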
In the fifteenth aspect, the region of the real endoscopic image associated with the first feature region includes at least one of a second feature region associated with the first feature region and a non-extracted region of the real endoscopic image associated with the first feature region. A non-extracted region of the real endoscopic image is a region that has not been extracted from the real endoscopic image as a second feature region.
According to a sixteenth aspect, in the endoscope system of any one of the first to fifteenth aspects, the first feature region extraction unit may extract the first feature regions from the virtual endoscopic image in advance of the observation of the real endoscopic image.

According to the sixteenth aspect, extraction of first feature regions from the virtual endoscopic image can be omitted during observation, which reduces the image processing load.
According to a seventeenth aspect, in the endoscope system of any one of the first to fifteenth aspects, the first feature region extraction unit may extract first feature regions from the virtual endoscopic image successively, in step with the observation of the real endoscopic image.

According to the seventeenth aspect, a virtual endoscopic image from which first feature regions have not yet been extracted can be acquired. This reduces the processing load on the first image input unit.
According to an eighteenth aspect, in the endoscope system of any one of the first to seventeenth aspects, when the first feature region extraction unit has extracted a plurality of first feature regions using the same first condition, it may manage the plurality of first feature regions collectively.

According to the eighteenth aspect, a plurality of first feature regions extracted using the same first condition can be managed as one group.
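Such collective management can be pictured as grouping the extracted regions by the first condition that produced them. In the following Python sketch, the FeatureRegion type and the condition strings are hypothetical.

    # Hypothetical sketch: group first feature regions by extraction condition.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class FeatureRegion:
        condition: str   # the first condition used for extraction
        position: tuple  # position in the virtual endoscopic image

    extracted = [FeatureRegion("fold_backside", (12, 30)),
                 FeatureRegion("fold_backside", (48, 7)),
                 FeatureRegion("blind_spot", (3, 91))]

    groups = defaultdict(list)
    for region in extracted:
        groups[region.condition].append(region)

    # All regions extracted under the same first condition can now be handled
    # as one group, e.g. groups["fold_backside"].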
According to a nineteenth aspect, in the endoscope system of any one of the first to eighteenth aspects, the first feature region extraction unit may apply, as the first condition, information on a position in the virtual endoscopic image.

According to the nineteenth aspect, the first feature region extraction unit can extract first feature regions on the basis of position information in the virtual endoscopic image.
According to a twentieth aspect, in the endoscope system of the nineteenth aspect, the first feature region extraction unit may apply, as the position information, the position of a blind spot in the observation range of the endoscope.

According to the twentieth aspect, the first feature region extraction unit can extract a first feature region at the position of a blind spot in the observation range of the endoscope. This helps prevent a region that should be extracted as a second feature region from being overlooked at such a blind spot.

In the twentieth aspect, when the position of a blind spot in the observation range of the endoscope is set as the first condition, the first feature region extraction unit can extract the position of the blind spot as a first feature region.
According to a twenty-first aspect, in the endoscope system of the nineteenth or twentieth aspect, the first feature region extraction unit may apply the back side of a fold as the position information.

According to the twenty-first aspect, the first feature region extraction unit can extract a first feature region at a position on the back side of a fold.
According to a twenty-second aspect, in the endoscope system of any one of the first to twenty-first aspects, the second feature region extraction unit may extract a lesion as the second feature region.

According to the twenty-second aspect, in endoscopy using the endoscope, overlooking a lesion in a region that should be extracted as a second feature region can be suppressed.
According to a twenty-third aspect, in the endoscope system of any one of the first to twenty-second aspects, the second feature region extraction unit may apply an extraction rule generated using machine learning to extract second feature regions from the real endoscopic image.

According to the twenty-third aspect, the accuracy of second feature region extraction from the real endoscopic image can be improved, which can in turn improve the accuracy of the endoscopy.
A notification method according to a twenty-fourth aspect includes: a first image input step of inputting a virtual endoscopic image generated from a three-dimensional image of a subject; a second image input step of inputting a real endoscopic image obtained by imaging an observation target of the subject using an endoscope; an association step of associating the virtual endoscopic image with the real endoscopic image; a first feature region extraction step of extracting, from the virtual endoscopic image, a first feature region that meets a prescribed first condition; a second feature region extraction step of extracting, from the real endoscopic image, a second feature region that meets a second condition corresponding to the first condition; and a notification step of issuing a notification when the first feature region is not associated with the second feature region.

According to the twenty-fourth aspect, the same effects as those of the first aspect can be obtained.

In the twenty-fourth aspect, matters similar to those specified in the second to twenty-third aspects can be combined as appropriate. In that case, a component that carries out a process or function specified in the endoscope system can be understood as a component of the notification method that carries out the corresponding process or function.
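To make the order of these steps concrete, the following heavily simplified Python sketch models each region as a position along the tract and models association as a distance test; every function and data layout here is a hypothetical stand-in for the image processing actually performed.

    # Hypothetical, heavily simplified sketch of the notification method.
    def extract_first_regions(virtual_image):
        return virtual_image["regions"]   # first feature region extraction step

    def extract_second_regions(real_image):
        return real_image["regions"]      # second feature region extraction step

    def associated(first, seconds, tol=5.0):
        # Association step, reduced to a positional distance test.
        return any(abs(first - s) <= tol for s in seconds)

    def notification_method(virtual_image, real_image):
        firsts = extract_first_regions(virtual_image)
        seconds = extract_second_regions(real_image)
        for f in firsts:                  # notification step
            if not associated(f, seconds):
                print(f"notify: region near {f} has no match in the real image")

    notification_method({"regions": [10.0, 42.0]}, {"regions": [10.5]})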
A program according to a twenty-fifth aspect causes a computer to realize: a first image input function of inputting a virtual endoscopic image generated from a three-dimensional image of a subject; a second image input function of inputting a real endoscopic image obtained by imaging an observation target of the subject using an endoscope; an association function of associating the virtual endoscopic image with the real endoscopic image; a first feature region extraction function of extracting, from the virtual endoscopic image, a first feature region that meets a prescribed first condition; a second feature region extraction function of extracting, from the real endoscopic image, a second feature region that meets a second condition corresponding to the first condition; and a notification function of issuing a notification when the first feature region is not associated with the second feature region.

According to the twenty-fifth aspect, the same effects as those of the first aspect can be obtained.

In the twenty-fifth aspect, matters similar to those specified in the second to twenty-third aspects can be combined as appropriate. In that case, a component that carries out a process or function specified in the endoscope system can be understood as a component of the program that carries out the corresponding process or function.

The twenty-fifth aspect may also be configured as a system that has at least one processor and at least one memory and that realizes the first image input function, the second image input function, the association function, the first feature region extraction function, the second feature region extraction function, and the notification function described above.
According to the present invention, a first feature region is extracted from the virtual endoscopic image, the virtual endoscopic image is associated with the real endoscopic image, and a notification is issued when the first feature region is not associated with a second feature region. The notification lets the user recognize that the first feature region has no counterpart second feature region. This suppresses the overlooking of lesions and the like that are difficult to detect in endoscopy and that should be extracted as second feature regions during endoscopy using the endoscope.
FIG. 1 is a schematic view showing the overall configuration of an endoscope system.
FIG. 2 is a functional block diagram showing the functions of a medical image processing apparatus.
FIG. 3 is a functional block diagram showing the functions of a medical image analysis processing unit.
FIG. 4 is a functional block diagram showing the functions of an image storage unit.
FIG. 5 is a schematic view of a CTC image.
FIG. 6 is a schematic view of an endoscopic image.
FIG. 7 is a schematic view showing a blind spot in the observation range of the endoscope.
FIG. 8 is an explanatory diagram of first feature region extraction.
FIG. 9 is an explanatory diagram of second feature region extraction.
FIG. 10 is a schematic view showing an example of association of lesions.
FIG. 11 is a schematic view showing an example of association of folds.
FIG. 12 is a schematic view showing an example of association of folds using fold numbers.
FIG. 13 is a schematic view of an endoscopic image and a virtual endoscopic image in the case of no notification.
FIG. 14 is a schematic view of an endoscopic image and a virtual endoscopic image in the case of a first notification.
FIG. 15 is a schematic view of an endoscopic image and a virtual endoscopic image in the case of a second notification.
FIG. 16 is a flowchart showing the procedure of the notification method.
FIG. 17 is an explanatory diagram of notification according to a first modification.
FIG. 18 is an explanatory diagram of notification according to a second modification.
FIG. 19 is an explanatory diagram of notification according to a third modification.
FIG. 20 is an explanatory diagram of another display example of the first feature region.
FIG. 21 is a functional block diagram showing the functions of a medical image processing apparatus that realizes notification according to another embodiment.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. In this specification, identical components are given identical reference numerals, and duplicate descriptions are omitted.
[Overall Configuration of the Endoscope System]

FIG. 1 is a schematic view showing the overall configuration of the endoscope system. The endoscope system 9 shown in FIG. 1 includes an endoscope 10, a light source device 11, a processor 12, a display device 13, a medical image processing apparatus 14, an operation device 15, and a monitor device 16. The endoscope system 9 is communicably connected to an image storage device 18 via a network 17.
The endoscope 10 is an electronic endoscope and, more specifically, a flexible endoscope. The endoscope 10 includes an insertion unit 20, an operation unit 21, and a universal cord 22. The insertion unit 20 has a distal end and a proximal end and is inserted into the subject. The operation unit 21, which the operator grips to perform various operations, is connected to the proximal end side of the insertion unit 20. The insertion unit 20 as a whole is formed as a long member of small diameter.
The insertion unit 20 includes a flexible portion 25, a bending portion 26, and a distal end portion 27, which are connected in this order from the proximal end side. The flexible portion 25 is flexible along its length from the proximal end side toward the distal end side of the insertion unit 20. The bending portion 26 has a structure that bends when the operation unit 21 is operated. The distal end portion 27 incorporates an imaging optical system (not shown), an imaging element 28, and the like.
The imaging element 28 is a CMOS imaging element or a CCD imaging element. CMOS is an abbreviation for Complementary Metal Oxide Semiconductor. CCD is an abbreviation for Charge Coupled Device.
An observation window (not shown) is disposed in the distal end surface 27a of the distal end portion 27. The observation window is an opening formed in the distal end surface 27a, behind which the imaging optical system (not shown) is disposed. Image light from the observed region enters the imaging surface of the imaging element 28 through the observation window, the imaging optical system, and the like. The imaging element 28 captures the image light of the observed region incident on its imaging surface and outputs an imaging signal. Capturing here includes the meaning of converting image light into an electric signal.
The operation unit 21 includes various operation members that are operated by the operator. Specifically, the operation unit 21 includes two types of bending operation knobs 29, which are used to bend the bending portion 26.

The operation unit 21 also includes an air/water supply button 30 used for air/water supply operations and a suction button 31 used for suction operations.
The operation unit 21 further includes a still image shooting instruction unit 32 and a treatment tool introduction port 33. The still image shooting instruction unit 32 is used to instruct the shooting of a still image 39 of the observed region. The treatment tool introduction port 33 is an opening through which a treatment tool is inserted into a treatment tool insertion channel running through the inside of the insertion unit 20. The treatment tool insertion channel and the treatment tool are not shown.
The universal cord 22 is a connection cord that connects the endoscope 10 to the light source device 11. The universal cord 22 encloses a light guide 35 and a signal cable 36, both of which run through the inside of the insertion unit 20, as well as a fluid tube (not shown).

The end of the universal cord 22 includes a connector 37a connected to the light source device 11 and a connector 37b that branches from the connector 37a and is connected to the processor 12.
When the connector 37a is connected to the light source device 11, the light guide 35 and the fluid tube (not shown) are inserted into the light source device 11. The light source device 11 then supplies the necessary illumination light, water, and gas to the endoscope 10 through the light guide 35 and the fluid tube.

As a result, illumination light is emitted from an illumination window (not shown) in the distal end surface 27a of the distal end portion 27 toward the observed region. When the air/water supply button 30 is pressed, gas or water is ejected from an air/water supply nozzle (not shown) in the distal end surface 27a toward the observation window (not shown) in the distal end surface 27a.
When the connector 37b is connected to the processor 12, the signal cable 36 and the processor 12 are electrically connected. The imaging signal of the observed region is then output from the imaging element 28 of the endoscope 10 to the processor 12 through the signal cable 36, and control signals are output from the processor 12 to the endoscope 10.
In the present embodiment, a flexible endoscope is described as an example of the endoscope 10, but various electronic endoscopes capable of capturing moving images of the observed region, such as a rigid endoscope, may be used as the endoscope 10.
The light source device 11 supplies illumination light to the light guide 35 of the endoscope 10 via the connector 37a. White light or light in a specific wavelength band can be applied as the illumination light, and the two may be combined. The light source device 11 is configured so that light in a wavelength band suited to the purpose of observation can be selected as appropriate as the illumination light.

The white light may be light in the white wavelength band or light in a plurality of wavelength bands. A specific wavelength band is a band narrower than the white wavelength band. Light in a single wavelength band or light in a plurality of wavelength bands may be applied as the light in a specific wavelength band. A specific wavelength band is sometimes called special light.
The processor 12 controls the operation of the endoscope 10 via the connector 37b and the signal cable 36. The processor 12 also acquires the imaging signal from the imaging element 28 of the endoscope 10 via the connector 37b and the signal cable 36, at a prescribed frame rate.
The processor 12 generates a moving image 38 of the observed region on the basis of the imaging signal acquired from the endoscope 10. When the still image shooting instruction unit 32 on the operation unit 21 of the endoscope 10 is operated, the processor 12 further generates, in parallel with the generation of the moving image 38, a still image 39 of the observed region on the basis of the imaging signal acquired from the imaging element 28. The still image 39 may be generated at a higher resolution than the moving image 38.
When generating the moving image 38 and the still image 39, the processor 12 corrects image quality by applying digital signal processing such as white balance adjustment and shading correction. The processor 12 may add incidental information defined by the DICOM (Digital Imaging and Communications in Medicine) standard to the moving image 38 and the still image 39.
The moving image 38 and the still image 39 are in-vivo images, that is, images captured inside the subject. When the moving image 38 and the still image 39 are obtained by imaging with light in a specific wavelength band, both are special light images. The processor 12 outputs the generated moving image 38 and still image 39 to the display device 13 and to the medical image processing apparatus 14. The processor 12 may also output the moving image 38 and the still image 39 to the image storage device 18 via the network 17 in accordance with a communication protocol conforming to the DICOM standard.
The display device 13 is connected to the processor 12 and displays the moving image 38 and the still image 39 input from the processor 12. A user such as a physician advances and retracts the insertion unit 20 while checking the moving image 38 displayed on the display device 13, and can operate the still image shooting instruction unit 32 to shoot a still image of the observed region when a lesion or the like is found there.
A computer is used as the medical image processing apparatus 14. A keyboard, a mouse, and the like connectable to the computer are used as the operation device 15; the connection between the operation device 15 and the computer may be either wired or wireless. Various monitors connectable to the computer are used as the monitor device 16.

A diagnosis support apparatus such as a workstation or a server apparatus may be used as the medical image processing apparatus 14. In that case, the operation device 15 and the monitor device 16 are provided for each of a plurality of terminals connected to the workstation or the like. Further, a medical service support apparatus that supports the creation of medical reports and the like may be used as the medical image processing apparatus 14.
The medical image processing apparatus 14 acquires and stores the moving image 38 and the still image 39, and controls their reproduction.

The operation device 15 is used to input operation instructions to the medical image processing apparatus 14. The monitor device 16 displays the moving image 38 and the still image 39 under the control of the medical image processing apparatus 14, and functions as a display unit for various kinds of information in the medical image processing apparatus 14.
The image storage device 18, which is connected to the medical image processing apparatus 14 via the network 17, stores CTC images 19. A CTC image 19 is generated using a CTC image generation device (not shown). CTC is an abbreviation for CT colonography, a three-dimensional CT examination of the large intestine.

The CTC image generation device (not shown) generates the CTC image 19 from a three-dimensional examination image. The three-dimensional examination image is generated from imaging signals obtained by imaging the examination target region with a three-dimensional imaging apparatus. Examples of three-dimensional imaging apparatuses include CT apparatuses, MRI apparatuses, PET (Positron Emission Tomography) apparatuses, and ultrasonic diagnostic apparatuses. The present embodiment shows an example in which the CTC image 19 is generated from a three-dimensional examination image obtained by imaging the large intestine.
The endoscope system 9 may be communicably connected to a server apparatus via the network 17. A computer that stores and manages various kinds of data can be applied as the server apparatus, and the information stored in the image storage device 18 shown in FIG. 1 may be managed using such a server apparatus. The DICOM standard, protocols conforming to the DICOM standard, and the like can be applied to the storage format of the image data and to the communication between the devices via the network 17.
[Functions of the Medical Image Processing Apparatus]

FIG. 2 is a functional block diagram showing the functions of the medical image processing apparatus. The medical image processing apparatus 14 shown in FIG. 2 includes a computer (not shown). On the basis of the execution of a program, the computer functions as an image acquisition unit 41, an information acquisition unit 42, a medical image analysis processing unit 43, and a display control unit 44. The medical image processing apparatus 14 also includes a storage unit 47 that stores information used for the various controls of the medical image processing apparatus 14.
[Image Acquisition Unit]

The image acquisition unit 41 includes a CTC image acquisition unit 41a and an endoscopic image acquisition unit 41b. The CTC image acquisition unit 41a acquires the CTC image 19 via an image input/output interface (not shown). The endoscopic image acquisition unit 41b acquires the endoscopic image 37 via an image input/output interface (not shown). The image input/output interface may be connected in a wired or wireless manner. The CTC image acquisition unit 41a and the endoscopic image acquisition unit 41b are described in detail below.
<<CTC Image Acquisition Unit>>

The CTC image acquisition unit 41a acquires the CTC image 19 stored in the image storage device 18 shown in FIG. 1. The CTC image 19 acquired using the CTC image acquisition unit 41a shown in FIG. 2 is stored in the image storage unit 48. A configuration similar to that of the endoscopic image acquisition unit 41b described later can be applied to the CTC image acquisition unit 41a. Reference numeral 19b denotes a viewpoint image. The viewpoint image 19b is an image of the field of view at a viewpoint set in the CTC image 19. The viewpoint is denoted by P in FIG. 5. Details of the viewpoint image and the viewpoint are described later.
Here, the term image in the present embodiment includes the concept of data representing an image and the concept of a signal representing an image. The CTC image 19 is an example of a virtual endoscopic image and corresponds to a virtual colonoscopy image. The CTC image acquisition unit 41a is an example of a first image input unit that inputs a virtual endoscopic image.
<<Endoscopic Image Acquisition Unit>>

The endoscopic image acquisition unit 41b acquires the endoscopic image 37 generated using the processor 12 shown in FIG. 1. The endoscopic image 37 includes the moving image 38 and the still image 39 shown in FIG. 2. In the present embodiment, the endoscopic image 37 generated using the processor 12 shown in FIG. 1 is acquired, but an endoscopic image 37 stored in an external storage device may be acquired instead. The endoscopic image acquisition unit 41b shown in FIG. 2 may also acquire the endoscopic image 37 via various information storage media such as a memory card.
When the still image 39 is shot while the moving image 38 is being captured, the endoscopic image acquisition unit 41b acquires the moving image 38 and the still image 39 from the processor 12 shown in FIG. 1. The medical image processing apparatus 14 stores the acquired moving image 38 and still image 39 in the image storage unit 48. Reference numeral 38a denotes the frame images constituting the moving image 38.

The medical image processing apparatus 14 need not store the entire moving image 38 of the endoscopic image 37 input from the processor 12 or the like in the image storage unit 48. When a still image of the observed region is shot in response to an operation of the still image shooting instruction unit 32 shown in FIG. 1, the medical image processing apparatus 14 may store in the image storage unit 48 shown in FIG. 2 only the one minute of the moving image 38 before and after the shot, that is, the period from one minute before the shooting to one minute after it.
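One way to realize such selective storage is a rolling buffer that always holds the most recent minute of frames. The following Python sketch illustrates this under an assumed frame rate; the class PeriRecordBuffer and its interface are hypothetical and not part of the disclosure.

    # Hypothetical sketch: keep one minute of frames before and after a
    # still-image capture, discarding everything else.
    from collections import deque

    class PeriRecordBuffer:
        def __init__(self, fps=30, seconds=60):
            self.window = fps * seconds            # frames in one minute
            self.ring = deque(maxlen=self.window)  # most recent minute
            self.saved = []                        # frames kept for storage
            self.after_count = 0                   # post-capture frames left

        def on_frame(self, frame):
            self.ring.append(frame)
            if self.after_count > 0:               # minute after the capture
                self.saved.append(frame)
                self.after_count -= 1

        def on_still_capture(self):
            self.saved.extend(self.ring)           # minute before the capture
            self.after_count = self.window         # keep the next minute too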
The endoscopic image acquisition unit 41b is an example of a second image input unit that inputs a real endoscopic image. The endoscopic image 37 corresponds to a real endoscopic image.
[Information Acquisition Unit]

The information acquisition unit 42 acquires information input from the outside via the operation device 15 and the like. For example, when a judgment result or an extraction result produced by the user is input using the operation device 15, the information acquisition unit 42 acquires the user's judgment information, extraction information, and the like.
[Medical Image Analysis Processing Unit]

The medical image analysis processing unit 43 analyzes the CTC image 19 and the endoscopic image 37. Details of the analysis of the CTC image 19 and the endoscopic image 37 using the medical image analysis processing unit 43 are described later.
[Deep Learning Algorithm]

The medical image analysis processing unit 43 performs image analysis processing using deep learning on the basis of a deep learning algorithm 65. The deep learning algorithm 65 is an algorithm that includes a known convolutional neural network method, fully connected layers, and an output layer.
A convolutional neural network repeats convolution layers and pooling layers. Since image analysis processing using deep learning is a known technique, a detailed description is omitted. Deep learning is an example of machine learning.
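The publication does not disclose the architecture of the deep learning algorithm 65. As a minimal sketch of only the named building blocks (repeated convolution and pooling layers, a fully connected layer, and an output layer), assuming the PyTorch library and a 3 x 224 x 224 input image:

    # Minimal illustrative network; the actual architecture is not disclosed.
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                          # convolution + pooling
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                          # repeated
        nn.Flatten(),
        nn.Linear(32 * 56 * 56, 128), nn.ReLU(),  # fully connected layer
        nn.Linear(128, 2),                        # output layer (e.g. lesion / no lesion)
    )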
[Display Control Unit]

The display control unit 44 controls the image display of the monitor device 16. The display control unit 44 functions as a reproduction control unit 44a and an information display control unit 44b.
<<Reproduction Control Unit>>

The reproduction control unit 44a controls the reproduction of the CTC image 19 acquired using the CTC image acquisition unit 41a and of the endoscopic image 37 acquired using the endoscopic image acquisition unit 41b. When an operation to reproduce an image is performed using the operation device 15, the reproduction control unit 44a executes a display control program to control the monitor device 16. The display control program is included in the programs stored in the program storage unit 49.
One example of display control of the CTC image 19 and the endoscopic image 37 is to display either the CTC image 19 or the endoscopic image 37 on the full screen; another is to display the CTC image 19 and the endoscopic image 37 side by side within one screen. The reproduction control unit 44a may switch between these two displays.

Likewise, for the endoscopic image 37, either the moving image 38 or the still image 39 may be displayed on the full screen, or the moving image 38 and the still image 39 may be displayed side by side within one screen, and the reproduction control unit 44a may switch between these two displays.
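A hypothetical sketch of such display switching, in which each screen region is described by a normalized rectangle (x, y, width, height) and the mode strings are placeholders of this sketch:

    # Hypothetical sketch: full-screen display of one image, or side-by-side
    # display of the CTC image and the endoscopic image within one screen.
    def layout(mode: str):
        if mode == "full_ctc":
            return [("ctc", (0.0, 0.0, 1.0, 1.0))]
        if mode == "full_endoscope":
            return [("endoscope", (0.0, 0.0, 1.0, 1.0))]
        # Parallel display: left half and right half of one screen.
        return [("ctc", (0.0, 0.0, 0.5, 1.0)),
                ("endoscope", (0.5, 0.0, 0.5, 1.0))]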
FIGS. 14 to 16 and other figures show examples in which the CTC image 19 and the endoscopic image 37 are displayed side by side within one screen.
<<Information Display Control Unit>>

The information display control unit 44b controls the display of the incidental information of the CTC image 19 and of the incidental information of the endoscopic image 37. An example of the incidental information of the CTC image 19 is information representing a first feature region; an example of the incidental information of the endoscopic image 37 is information representing a second feature region.
The information display control unit 44b also controls the display of information needed for the various processes performed in the medical image analysis processing unit 43, examples of which include the process of associating the CTC image 19 with the endoscopic image 37, the feature region extraction process for the CTC image 19, and the feature region extraction process for the endoscopic image 37. Details of these processes are described later.
[Storage Unit]

The storage unit 47 includes an image storage unit 48. The image storage unit 48 stores the CTC image 19 and the endoscopic image 37 acquired by the medical image processing apparatus 14. Although FIG. 2 illustrates a mode in which the medical image processing apparatus 14 includes the storage unit 47, a storage device or the like communicably connected to the medical image processing apparatus 14 via a network may include the storage unit 47. An example of such a storage device is the image storage device 18 communicably connected via the network 17 shown in FIG. 1.
The storage unit 47 also includes a program storage unit 49. The programs stored in the program storage unit 49 include an application program for causing the medical image processing apparatus 14 to control the reproduction of the moving image 38, and a program for causing the medical image processing apparatus 14 to execute the processing of the medical image analysis processing unit 43.
[Hardware Configuration of the Medical Image Processing Apparatus]

The medical image processing apparatus 14 may be configured using a plurality of computers or the like, which may be communicably connected via a network. The plurality of computers referred to here may be separate pieces of hardware, or may be integrated as hardware and separated functionally.
[Hardware Configuration of the Various Control Units]

The hardware structure of the control units that execute the various controls of the medical image processing apparatus 14 shown in FIG. 2 is one or more of the various processors described below. The same applies to the medical image analysis processing unit 43 shown in FIG. 3.
The various processors include a CPU (Central Processing Unit), which is a general-purpose processor that executes software to function as the various control units; a PLD (Programmable Logic Device) such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture; and a dedicated electric circuit such as an ASIC (Application Specific Integrated Circuit), which is a processor having a circuit configuration designed exclusively for executing specific processing. Software here is synonymous with a program.
One processing unit may be configured by one of these various processors, or by two or more processors of the same or different types. Examples of two or more processors include a plurality of FPGAs and a combination of a CPU and an FPGA. A plurality of control units may also be configured by one processor. As a first example of configuring a plurality of control units with one processor, one processor is configured by a combination of one or more CPUs and software, as typified by computers such as client devices and server devices, and this processor functions as the plurality of control units. As a second example, as typified by an SoC (System On Chip), a processor that realizes the functions of the entire system including the plurality of control units on a single IC chip is used. In this way, the various control units are configured, as a hardware structure, using one or more of the various processors described above. IC is an abbreviation for Integrated Circuit.
[Detailed Description of the Medical Image Analysis Processing Unit]

FIG. 3 is a functional block diagram showing the functions of the medical image analysis processing unit. The endoscope 10 in the following description is illustrated in FIG. 1. The CTC image 19, the viewpoint image 19b, the endoscopic image 37, and the frame images 38a are illustrated in FIG. 2.
The medical image analysis processing unit 43 shown in FIG. 3 includes a first feature region extraction unit 50, a first condition setting unit 52, a second feature region extraction unit 54, a second condition setting unit 56, an association unit 58, a notification unit 59, and a notification image generation unit 60.
<<First Feature Region Extraction Unit>>

The first feature region extraction unit 50 extracts, from the CTC image 19, a first feature region, which is a feature region that meets the prescribed first condition. Examples of first feature regions of the CTC image 19 include lesions, folds, transition points between the segments of the colon, and blood vessels, where blood vessels include blood vessel running patterns. The function performed by the first feature region extraction unit 50 corresponds to the first feature region extraction function.
<<First Condition Setting Unit>>

The first condition setting unit 52 sets the first condition, which is the extraction condition applied to the extraction processing by the first feature region extraction unit 50. The first condition setting unit 52 can set information input using the operation device 15 shown in FIG. 2 as the first condition. The examples of first feature regions given above can be understood as examples of first conditions.
<< Second Feature Region Extraction Unit >>
The second feature region extraction unit 54 extracts, from the endoscopic image 37 shown in FIG. 2, a second feature region that is a feature region meeting a prescribed second condition. As with the first feature region of the CTC image 19, examples of the second feature region of the endoscopic image 37 include a lesion, a fold, a change point between colon segments, and a blood vessel.
The second feature region extraction unit 54 may automatically extract the second feature region meeting the second condition from the endoscopic image 37. Alternatively, the second feature region extraction unit 54 may acquire a result of the user manually extracting the second feature region meeting the second condition from the endoscopic image 37. The user may input the manually extracted result using the information acquisition unit 42 shown in FIG. 2. The function performed by the second feature region extraction unit 54 corresponds to a second feature region extraction function.
<< Second Condition Setting Unit >>
The second condition setting unit 56 sets, as the extraction condition for the second feature region of the endoscopic image 37, a second condition corresponding to the first condition. The second condition corresponding to the first condition includes a second condition identical to the first condition. For example, when a lesion is set as the first condition, a lesion may be set as the second condition.
For the first condition and the second condition, specific lesions such as polyps and inflammation may be set instead of the generic concept of a lesion. The first condition and the second condition may each be a combination of a plurality of conditions.
When a lesion is set as the second condition and a lesion is extracted from the endoscopic image 37 as the second feature region, this corresponds to the detection of a lesion in an endoscopic examination. The examples of the second feature region described above can be regarded as examples of the second condition.
<< Association Unit >>
The association unit 58 associates the CTC image 19 and the endoscopic image 37 shown in FIG. 2 with each other. An example of the association between the CTC image 19 and the endoscopic image 37 is the association between the first feature region of the CTC image 19 and the second feature region of the endoscopic image 37. For example, when a lesion is detected as the second feature region of the endoscopic image 37, the first feature region of the CTC image 19 corresponding to the detected lesion is associated with the second feature region of the endoscopic image 37.
Position information can be used for the association between the CTC image 19 and the endoscopic image 37. For example, the reference position of the CTC image 19 and the reference position of the endoscopic image 37 can be aligned, and the association can be performed using the distance of each image from its reference position.
Alternatively, the coordinate values of the CTC image 19 may be associated with the frame numbers of the frame images 38a of the endoscopic image 37. The association between the CTC image 19 and the endoscopic image 37 includes the association between the first feature region of the CTC image 19 and a non-extraction region of the endoscopic image 37. A specific example of the association between the first feature region of the CTC image 19 and a non-extraction region of the endoscopic image 37 will be described later. The function performed by the association unit 58 corresponds to an association function.
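As an illustration of the distance-based association described above, the following is a minimal sketch in Python. It assumes hypothetical inputs that the disclosure does not specify: a sorted list of cumulative distances of the path viewpoints from the reference position of the CTC image 19, and the insertion distance of the endoscope reported by its sensor from the corresponding reference position.

```python
import bisect

def associate_by_distance(viewpoint_distances, endoscope_distance):
    """Return the index of the CTC path viewpoint whose cumulative
    distance from the reference position is closest to the endoscope's
    sensed insertion distance (both distances share one reference).

    viewpoint_distances: sorted list of floats (mm along the path 19c)
    endoscope_distance: float (mm from the shared reference position)
    """
    i = bisect.bisect_left(viewpoint_distances, endoscope_distance)
    if i == 0:
        return 0
    if i == len(viewpoint_distances):
        return len(viewpoint_distances) - 1
    # pick the nearer of the two neighboring viewpoints
    before, after = viewpoint_distances[i - 1], viewpoint_distances[i]
    return i if after - endoscope_distance < endoscope_distance - before else i - 1

# Usage: viewpoints every 5 mm along the path, endoscope at 123.4 mm
idx = associate_by_distance([5.0 * k for k in range(200)], 123.4)
```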
<< Notification Unit >>
When a region of the endoscopic image 37 that is associated with the first feature region 80 extracted from the CTC image 19 is a non-extraction region that has not been extracted from the endoscopic image 37, the notification unit 59 reports that fact. An example of a non-extraction region is a position in a blind spot of the observation range of the endoscope 10. The notification unit 59 displays notification information on the monitor device 16 via the display control unit 44. An example of the notification information is a notification image described later. The function performed by the notification unit 59 corresponds to a notification function.
<< Notification Image Generation Unit >>
The notification image generation unit 60 generates a notification image that reports the presence of the second feature region of the endoscopic image 37. Examples of the notification image include a symbol placed at an arbitrary position in the second feature region and a closed curve tracing the edge of the second feature region.
As a display example of the notification image, the endoscopic image 37 and the notification image may be generated as separate layers, and the notification image may be overlaid on the endoscopic image 37. That is, the notification image generation unit 60 generates a notification image that can be superimposed on the endoscopic image 37 without applying any processing to the endoscopic image 37 itself.
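A minimal sketch of such layered overlay display follows, assuming the frames are handled as NumPy/OpenCV BGR arrays; the blending weights and the circular marker are illustrative assumptions, and this is one common way to superimpose a marker without modifying the underlying frame, not the specific implementation of this disclosure.

```python
import numpy as np
import cv2

def render_with_overlay(frame_bgr, center, radius, color=(0, 255, 255)):
    """Draw a notification marker on a separate layer and blend it over
    the endoscopic frame; frame_bgr itself is never modified."""
    overlay = np.zeros_like(frame_bgr)              # separate layer
    cv2.circle(overlay, center, radius, color, thickness=3)
    mask = overlay.any(axis=2)                      # pixels the marker covers
    out = frame_bgr.copy()
    out[mask] = cv2.addWeighted(frame_bgr, 0.4, overlay, 0.6, 0)[mask]
    return out  # displayed frame; the source frame stays untouched
```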
The first notification image 140 is illustrated in FIG. 14 as an example of the notification image. The second notification image 142 is illustrated in FIG. 15 as another example. Details of the notification images will be described later.
[Detailed Description of Image Storage Unit]
FIG. 4 is a functional block diagram showing the functions of the image storage unit. The image storage device 18 includes a first feature region storage unit 64, a second feature region storage unit 66, and an association result storage unit 68.
<< First Feature Region Storage Unit >>
The first feature region storage unit 64 stores the information of the first feature region extracted from the CTC image 19 using the first feature region extraction unit 50 shown in FIG. 3. An example of the information of the first feature region is information representing the position of the first feature region in the CTC image 19. The position of the first feature region in the CTC image 19 can be specified using the coordinate values in the coordinate system set for the CTC image 19, the viewpoints set for the CTC image 19, and the like.
<< Second Feature Region Storage Unit >>
The second feature region storage unit 66 stores the information of the second feature region extracted from the endoscopic image 37 using the second feature region extraction unit 54 shown in FIG. 3. An example of the information of the second feature region is information representing the position of the second feature region in the endoscopic image 37.
The position of the second feature region in the endoscopic image 37 can be specified as the distance from a reference position of the observed object, using detection information from a sensor provided in the endoscope 10.
<< Association Result Storage Unit >>
The association result storage unit 68 stores the result of the association between the CTC image 19 and the endoscopic image 37 executed using the association unit 58 shown in FIG. 3. For example, it can store the result of associating the position information of the first feature region of the CTC image 19 with the position information of the second feature region of the endoscopic image 37.
[Description of Notification Method]
Next, a notification method in an endoscopic examination will be described. The present embodiment exemplifies an endoscopic examination of the large intestine performed in combination with a virtual colonoscopy. Note that the endoscopic examination of the large intestine is an example, and the notification method according to the present embodiment is applicable to endoscopic examinations of other sites such as the bronchi.
[CTC Image]
FIG. 5 is a schematic view of a CTC image. The whole image 19a shown in FIG. 5 is one form of the CTC image 19 representing the entire large intestine, which is the observed site. The observed site is synonymous with the subject and the observation target of the subject.
The whole image 19a is an image that assumes one or more viewpoints P placed on a set path 19c and views the inside of the lumen from each viewpoint P while moving the viewpoint P sequentially from a start position P_S to a goal position P_G. The path 19c can be generated by applying thinning to the whole image 19a. A known thinning method is applicable to the thinning processing. Although a plurality of viewpoints P are illustrated in FIG. 5, the arrangement and number of viewpoints P can be determined as appropriate according to the examination conditions and the like.
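As one hedged illustration of generating such a path by thinning, the sketch below applies the skeletonization routine of scikit-image to a binary segmentation of the colon lumen. The disclosure only states that a known thinning method is applicable; the choice of library (recent scikit-image versions accept 3-D inputs), the presence of a precomputed lumen mask, and the function name are assumptions.

```python
import numpy as np
from skimage.morphology import skeletonize

# lumen_mask: 3-D boolean array, True inside the segmented colon lumen
# (obtained beforehand from the CT volume; segmentation is out of scope here)
def extract_path_voxels(lumen_mask: np.ndarray) -> np.ndarray:
    """Thin the lumen mask to a one-voxel-wide centerline and return
    the voxel coordinates of the centerline, i.e. candidates for the
    path 19c on which the viewpoints P are placed."""
    skeleton = skeletonize(lumen_mask)      # 3-D thinning
    return np.argwhere(skeleton)            # (N, 3) array of z, y, x indices
```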
When a viewpoint P of the whole image 19a is designated, a viewpoint image representing the field of view at the designated viewpoint P can be displayed. The viewpoint images at the viewpoints P are illustrated in FIG. 8 with the reference numerals 19b1 and 19b2.
At each viewpoint P, a viewpoint image reflecting the imaging direction of the endoscope 10 can be generated. Viewpoint images reflecting the imaging direction of the endoscope 10 can be generated for each of a plurality of imaging directions. The whole image 19a shown in FIG. 5 and the viewpoint images not shown in FIG. 5 are included in the concept of the CTC image 19 shown in FIG. 2.
A three-dimensional coordinate system, not shown, is set for the CTC image 19 whose whole image 19a is shown in FIG. 5. The three-dimensional coordinate system set for the CTC image 19 can be any three-dimensional coordinate system whose origin is an arbitrary reference position of the CTC image 19, such as rectangular coordinates, polar coordinates, or cylindrical coordinates. Illustration of the three-dimensional coordinate system is omitted.
[Virtual Colonoscopy]
In a virtual colonoscopy, the large intestine is imaged using a CT apparatus to acquire a CT image of the large intestine, and lesions and the like are detected using the CTC image 19 generated by applying image processing to the CT image of the large intestine. In the virtual colonoscopy, a pointer 19d representing the endoscope 10 is moved along the path 19c from the start position P_S to the goal position P_G in conjunction with the movement of the endoscope 10. The arrow shown in FIG. 5 indicates the moving direction of the pointer 19d.
FIG. 5 shows an example in which the cecum is used as the start position P_S and the anus is used as the goal position P_G. That is, FIG. 5 schematically illustrates a virtual colonoscopy in which the endoscope 10 is inserted to the start position P_S and then moved to the goal position P_G while being withdrawn.
The position of the pointer 19d is derived from the movement conditions of the endoscope 10. Examples of the movement conditions of the endoscope 10 include the movement speed of the endoscope 10 and a movement vector representing the movement direction of the endoscope 10.
The endoscope 10 can grasp its position inside the observed site using a sensor, not shown. The endoscope 10 can also derive its movement speed and the movement vector representing its movement direction using the sensor. Furthermore, the endoscope 10 can derive its orientation using the sensor.
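The derivation of the pointer position from the movement conditions can be pictured as simple dead reckoning along the path; the sketch below makes that concrete under assumed inputs (a sampling interval and a signed speed from the sensor), which are not specified in this disclosure.

```python
def update_pointer_distance(distance, speed, dt, path_length):
    """Advance the pointer 19d along the path 19c by integrating the
    endoscope's sensed movement speed over one sampling interval.

    distance: current position of the pointer, in mm along the path
    speed: signed speed from the endoscope's sensor (mm/s);
           negative while the endoscope is being withdrawn
    dt: sampling interval in seconds
    path_length: total length of the path 19c in mm
    """
    distance += speed * dt
    return min(max(distance, 0.0), path_length)  # clamp to the path
```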
In the virtual colonoscopy, three-dimensional information that is difficult to obtain from the endoscopic image 37 can be obtained from the CTC image 19.
[Endoscopic Image]
In the endoscopic examination, lesions such as polyps are detected from the endoscopic image 37. That is, in the endoscopic examination, the endoscope 10 is used, a moving image 38 generated in real time is viewed, and the position, shape, and the like of a lesion are specified. The endoscopic examination may use a reproduced image of the endoscopic image 37.
FIG. 6 is a schematic view of an endoscopic image. As an example of the endoscopic image 37, FIG. 6 shows an arbitrary frame image 38a constituting the moving image 38. The frame image 38a shown in FIG. 6 is a two-dimensional image. The frame image 38a has color information and texture information.
Since the endoscopic image 37 has color information and texture information, the endoscopic examination is strong at detecting flat lesions, differences in surface condition, and the like. On the other hand, the endoscopic examination is poor at finding lesions on the back side of protruding structures such as folds.
FIG. 7 is a schematic view showing blind spots in the observation range of the endoscope. FIG. 7 illustrates a schematic cross section 100 along the path 19c of the CTC image 19 and a schematic cross section 120 of the endoscopic image 37 corresponding to the cross section 100 of the CTC image 19.
The endoscope 10A and the endoscope 10B, illustrated with two-dot chain lines, represent the endoscope 10 at observation positions where observation has already been completed. The endoscope 10 illustrated with a solid line represents the endoscope 10 at the current observation position. The arrow lines indicate the moving direction of the endoscope 10.
Note that, in FIG. 7, for convenience of illustration, the endoscope 10A, the endoscope 10B, and the endoscope 10 are drawn with their positions shifted in the vertical direction of FIG. 7.
Since the CTC image 19 has three-dimensional information, the virtual colonoscopy is strong at detecting convex shapes such as polyps. It is also strong at detecting polyps and the like hidden behind folds. For example, from the CTC image 19, both the polyp 104 located on the back side of the fold 102 and the polyp 106 located on the front side of the fold 102 can be detected. However, the polyp 104 located on the back side of the fold 102 may not be displayed in a viewpoint image.
On the other hand, since the endoscopic image 37 does not have three-dimensional information, the endoscopic examination can detect the polyp 126 located on the front side of the fold 122 but is poor at detecting the polyp 124 located on the back side of the fold 122.
For example, the polyp 126 on the front side of the fold 122 is located in the observation range of the endoscope 10B or the endoscope 10, so the endoscope 10B or the endoscope 10 can detect the polyp 126. The polyp 124 on the back side of the fold 122 is located in a blind spot of the observation ranges of the endoscope 10A, the endoscope 10B, and the endoscope 10.
Thus, the endoscope 10A, the endoscope 10B, and the endoscope 10 all have difficulty detecting the polyp 124 on the back side of the fold 122.
[Feature Region Extraction Processing]
FIG. 8 is an explanatory diagram of first feature region extraction. FIG. 8 illustrates a viewpoint image 19b1 and a viewpoint image 19b2 at arbitrary viewpoints P of the CTC image 19. The viewpoint image 19b is the concept encompassing the viewpoint image 19b1 and the viewpoint image 19b2 shown in FIG. 8.
In the first feature region extraction processing, the first feature region 80 is extracted from the CTC image 19 shown in FIG. 8 using the first feature region extraction unit 50 shown in FIG. 3. The first feature region extraction processing can also detect the polyp 104 located on the back side of the fold 102 shown in FIG. 7 as a first feature region 80. A known feature region extraction technique is applicable to the processing of extracting the first feature region 80 from the CTC image 19. The same applies to the second feature region extraction described later.
An example of a known feature region extraction technique is to calculate a feature value for each of a plurality of regions and, according to the feature value of each region, specify a region that meets the first condition as the region to be extracted. The feature value of each region can be calculated using the pixel values of the pixels included in the region.
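The per-region feature computation described above might look like the following sketch, which labels candidate regions in a binary detection mask and keeps those whose mean intensity satisfies a threshold condition; the mask source, the choice of mean intensity as the feature value, and the threshold are all assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def extract_feature_regions(image, candidate_mask, threshold):
    """Split candidate_mask into connected regions, compute one feature
    value (here: mean pixel value) per region, and return the labels of
    the regions meeting the condition 'feature value >= threshold'."""
    labels, n = ndimage.label(candidate_mask)          # region decomposition
    selected = []
    for region_id in range(1, n + 1):
        feature = image[labels == region_id].mean()    # per-region feature
        if feature >= threshold:                       # the set condition
            selected.append(region_id)
    return labels, selected
```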
In the viewpoint image 19b1 shown in FIG. 8, a convex polyp has been extracted as the first feature region 80. For the first feature region 80 of the CTC image 19, coordinate values in the three-dimensional coordinate system set for the CTC image 19 can be specified.
Although FIG. 8 shows an example in which a convex polyp is extracted from the CTC image 19 as the first feature region 80, folds, change points between colon segments, and blood vessels may also be extracted. A blood vessel running pattern may be extracted as a blood vessel. The same applies to the second feature region extraction described later.
When a plurality of first feature regions 80 are extracted, the plurality of first feature regions 80 are associated with the first condition and managed collectively.
The first feature regions 80 may be classified into a plurality of attributes. For example, with information on lesion position as the classification condition, lesions extracted as first feature regions 80 may be classified according to their positions. An example of information on lesion position is whether the lesion is on the front side or the back side of a fold. That is, lesions extracted as first feature regions 80 may be classified into lesions on the front of folds and lesions on the back of folds. As the first condition, lesions on the front of folds and lesions on the back of folds may be applied to extract two types of first feature regions.
FIG. 9 is an explanatory diagram of second feature region extraction. FIG. 9 illustrates an arbitrary frame image 38a1 of the endoscopic image 37. The extraction result of the second feature region can be handled as the result of the endoscopic examination. Note that a still image 39 may be used as the frame image 38a1 shown in FIG. 9.
In the second feature region extraction processing, the second feature region 70 is extracted from the endoscopic image 37 using the second feature region extraction unit 54 shown in FIG. 3. In the frame image 38a1 shown in FIG. 9, a convex polyp has been extracted as the second feature region 70. On the other hand, it is difficult for the second feature region extraction processing to extract the polyp 124 on the back side of the fold 122 shown in FIG. 7 as a second feature region 70.
The information of the first feature region 80 shown in FIG. 8 is stored in the first feature region storage unit 64 shown in FIG. 4 as the extraction result of the first feature region. The information of the second feature region 70 shown in FIG. 9 is stored in the second feature region storage unit 66 shown in FIG. 4 as the extraction result of the second feature region.
[Association Processing]
The association processing will be described with reference to FIGS. 10 to 12. FIG. 10 is a schematic view showing an example of lesion association. FIG. 10 shows an example in which the second feature region 70, which is a convex polyp, has been detected in the frame image 38a1 of the endoscopic image 37.
Note that the viewpoint image 19b1 shown in FIG. 10 is the viewpoint image 19b1 shown in FIG. 8. The viewpoint image 19b2 shown in FIG. 10 is the viewpoint image 19b2 shown in FIG. 8. The first feature region 80 shown in FIG. 10 is the first feature region 80 shown in FIG. 8.
The frame image 38a1 shown in FIG. 10 is the frame image 38a1 shown in FIG. 9. The second feature region 70 shown in FIG. 10 is the second feature region 70 shown in FIG. 9.
The following describes a case in which the endoscopic examination is performed while the pointer 19d representing the position of the endoscope 10 is moved along the path 19c of the whole image 19a in accordance with the movement of the endoscope 10. The position of the endoscope 10 is known using the sensor provided in the endoscope 10. The orientation of the endoscope 10 at each viewpoint P is taken to be the tangential direction of the path 19c shown in FIG. 5 at that viewpoint P.
When the second feature region 70 is extracted from the endoscopic image 37 shown in FIG. 10, the association unit 58 shown in FIG. 3 searches the CTC image 19 for the first feature region 80 corresponding to the second feature region 70. When the first feature region 80 of the CTC image 19 corresponding to the second feature region 70 of the endoscopic image 37 is found, the first feature region 80 of the CTC image 19 and the second feature region 70 of the endoscopic image 37 are associated with each other.
The association unit 58 shown in FIG. 3 can associate the first feature region 80 of the CTC image 19 with the second feature region 70 of the endoscopic image 37 using the position information of the CTC image 19 and the position information of the endoscopic image 37. Image information of the CTC image 19 may be applied instead of the position information of the CTC image 19, and image information of the endoscopic image 37 may be applied instead of the position information of the endoscopic image 37.
The association unit 58 shown in FIG. 3 stores the result of associating the first feature region 80 of the CTC image 19 shown in FIG. 10 with the second feature region 70 of the endoscopic image 37 in the association result storage unit 68 shown in FIG. 4.
In other words, the concept of associating the CTC image 19 with the endoscopic image 37 includes the concept of forming pairs of components of the CTC image 19 and components of the endoscopic image 37. The concept of associating the CTC image 19 with the endoscopic image 37 may also include the concept of searching for and identifying the component of the CTC image 19 that corresponds to a component of the endoscopic image 37.
FIG. 11 is a schematic view showing an example of fold association. In the frame image 38a11 shown in FIG. 11, a fold has been extracted as the second feature region 72. In the viewpoint image 19b11, a fold has been extracted as the first feature region 82. FIG. 11 also illustrates the viewpoint image 19b12 and the viewpoint image 19b13 at viewpoints P consecutive to the viewpoint P of the viewpoint image 19b11. The association unit 58 shown in FIG. 3 associates the first feature region 82 and the second feature region 72 shown in FIG. 11 with each other.
The association unit 58 shown in FIG. 3 stores the result of associating the first feature region 82 with the second feature region 72 shown in FIG. 11 in the association result storage unit 68 shown in FIG. 4.
FIG. 12 is a schematic view showing an example of fold association using fold numbers. The number of folds is the same in the CTC image 19 and the endoscopic image 37. Therefore, a reference fold can be defined, and the CTC image 19 and the endoscopic image 37 can be associated with each other using fold numbers.
In the frame image 38a21 shown in FIG. 12, a fold has been extracted as the second feature region 74. In the viewpoint image 19b21, a fold has been extracted as the first feature region 84. Folds have also been extracted as first feature regions in the viewpoint image 19b22 and the viewpoint image 19b23 shown in FIG. 12. Illustration of the first feature regions of the viewpoint image 19b22 and the viewpoint image 19b23 is omitted.
The n1 attached to the viewpoint image 19b21 is an integer representing a fold number. The same applies to the n2 attached to the viewpoint image 19b22, the n3 attached to the viewpoint image 19b23, and the n1 attached to the frame image 38a21.
When the fold number n1 in the frame image 38a21 matches the fold number n1 in the viewpoint image 19b21, the association unit 58 shown in FIG. 3 associates the second feature region 74 with the first feature region 84 shown in FIG. 12.
The association unit 58 shown in FIG. 3 stores the result of associating the second feature region 74 with the first feature region 84 shown in FIG. 12 in the association result storage unit 68 shown in FIG. 4.
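The fold-number matching described above can be pictured as a simple lookup from a counted fold index to the corresponding CTC viewpoint; the dictionary layout and the names in this sketch are illustrative assumptions, since the disclosure only requires that a reference fold be fixed and the numbers compared.

```python
def associate_by_fold_number(ctc_folds, endo_fold_number):
    """ctc_folds: dict mapping fold number n (counted from a shared
    reference fold) to the identifier of the CTC viewpoint image in
    which that fold was extracted as a first feature region.
    endo_fold_number: fold number counted in the endoscopic image.

    Returns the matching viewpoint identifier, or None when the fold
    has no counterpart (which should not occur if counting started
    from the same reference fold on both sides)."""
    return ctc_folds.get(endo_fold_number)

# Usage: fold n=7 seen in the endoscopic image maps to viewpoint "19b21"
viewpoint_id = associate_by_fold_number({6: "19b20", 7: "19b21"}, 7)
```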
Although illustration is omitted, change points between colon segments and blood vessels can also be associated in the same manner as in the examples described with reference to FIGS. 10 to 12.
[Notification Processing]
Next, the notification processing will be described with reference to FIGS. 13 to 15. FIG. 13 is a schematic view of an endoscopic image and a virtual endoscopic image in the non-notification case. The monitor device 16 shown in FIG. 13 displays the endoscopic image 37 and also displays the CTC image 19 corresponding to the endoscopic image 37. The viewpoint image 19b31 of the CTC image 19 corresponds to the frame image 38a31 of the endoscopic image 37.
The endoscopic image 37 displayed on the monitor device 16 is updated sequentially as the endoscopic examination proceeds. The CTC image 19 is updated sequentially in step with the updating of the endoscopic image 37. There may be a delay within an allowable range between the CTC image 19 and the endoscopic image 37.
In the CTC image 19 shown in FIG. 13, no lesion such as a polyp has been extracted as a first feature region 80. In the endoscopic image 37, there is no region associated with a first feature region 80 of the CTC image 19. Therefore, the monitor device 16 displays neither the first notification image 140 nor the second notification image 142, which are described later.
FIG. 14 is a schematic view of an endoscopic image and a virtual endoscopic image in the case of the first notification. In the frame image 38a32 of the endoscopic image 37 shown in FIG. 14, the first notification image 140 is displayed. The first notification image 140 is displayed when a polyp, not shown, exists on the back side of the fold 150 but the polyp does not appear in the frame image 38a32.
Even when the polyp on the back side of the fold 150 does not appear in the frame image 38a32, if the polyp exists on the back side of the fold 150, the polyp on the back side of the corresponding fold 160 has been extracted from the CTC image 19 as the first feature region 80d.
However, since the viewpoint image 19b32 displays the same field of view as the frame image 38a32, the polyp extracted as the first feature region 80d is not displayed. The broken line representing the first feature region 80d indicates that the first feature region 80d is not displayed in the viewpoint image 19b32.
The first feature region 80d, illustrated with a broken line in FIG. 14, is associated with the non-extraction region 76, which has not been extracted from the endoscopic image 37 as a second feature region 70. An example of the non-extraction region 76 is a region located in a blind spot of the observation range of the endoscope 10 in the endoscopic image 37.
That is, the non-extraction region 76 is a region from which a second feature region 70 should originally be extracted. At the same time, the non-extraction region 76 is a region from which no second feature region 70 is actually extracted because it is located in a blind spot of the observation range of the endoscope 10.
When the non-extraction region 76 exists in the endoscopic image 37, the notification unit 59 shown in FIG. 3 overlays the first notification image 140, as the first notification, at the position of the non-extraction region 76 in the endoscopic image 37 displayed on the monitor device 16. The first notification image 140 shown in FIG. 14 is an example, and its shape and the like can be defined arbitrarily.
The first notification image 140 may also be displayed in the frame images 38a before and after the frame image 38a32. That is, according to the progress of the endoscopic examination, the first notification image 140 can be displayed at any timing from the moment the non-extraction region 76 enters the field of view of the endoscope 10 until the moment the non-extraction region 76 leaves the field of view of the endoscope 10. The same applies to the second notification image 142 shown in FIG. 15.
The fold 150 of the endoscopic image 37 shown in FIG. 14 corresponds to the fold 122 of the cross section 120 shown in FIG. 7. The fold 160 of the CTC image 19 corresponds to the fold 102 of the cross section 100 shown in FIG. 7. The same applies to the fold 150 and the fold 160 shown in FIG. 15.
FIG. 15 is a schematic view of an endoscopic image and a virtual endoscopic image in the case of the second notification. The second notification is performed when a second feature region 70 has been extracted from the endoscopic image 37. In the endoscopic image 37 shown in FIG. 15, a polyp has been extracted as the second feature region 70, and the second notification image 142 is displayed as the second notification.
In other words, when a polyp on the front side of the fold 160 has been extracted as the first feature region 80e, the notification unit 59 shown in FIG. 3 overlays the second notification image 142 at the position of the second feature region 70 in the endoscopic image 37 displayed on the monitor device 16.
The second feature region 70 of the frame image 38a33 of the endoscopic image 37 shown in FIG. 15 is associated with the first feature region 80e of the viewpoint image 19b33 of the CTC image 19. The first feature region 80e is the polyp on the front side of the fold 160.
The notification level of the first notification image 140 shown in FIG. 14 is changed relative to the second notification image 142 shown in FIG. 15. Specifically, the notification level of the first notification image 140 shown in FIG. 14 is raised relative to the second notification image 142 shown in FIG. 15, and the first notification image 140 shown in FIG. 14 is made larger in size than the second notification image 142 shown in FIG. 15. Details of the difference in notification level between the first notification and the second notification will be described later.
The first feature region 80d extracted as the polyp on the back side of the fold 160 and the first feature region 80e extracted as the polyp on the front side of the fold 160 shown in the present embodiment may be extracted from the CTC image 19 in advance by applying a first condition that combines the condition of being a polyp with the condition of being on the front side or the back side of a fold.
[Procedure of Notification Method]
FIG. 16 is a flowchart showing the procedure of the notification method. First, a CTC image input step S10 is executed. In the CTC image input step S10, the CTC image 19 is input using the CTC image acquisition unit 41a shown in FIG. 2. The CTC image 19 is stored in the image storage unit 48. The CTC image input step S10 shown in FIG. 16 is an example of a first image input step.
After the CTC image input step S10, the process proceeds to a first feature region extraction step S12. In the first feature region extraction step S12, the first feature region is extracted from the CTC image 19 using the first feature region extraction unit 50 shown in FIG. 3. The information of the first feature region is stored in the first feature region storage unit 64 shown in FIG. 4.
After the first feature region extraction step S12 shown in FIG. 16, the process proceeds to an endoscopic image input step S14. In the endoscopic image input step S14, the endoscopic image 37 is input using the endoscopic image acquisition unit 41b shown in FIG. 2. The endoscopic image input step S14 shown in FIG. 16 is an example of a second image input step.
After the endoscopic image input step S14, the process proceeds to a second feature region extraction step S16. In the second feature region extraction step S16, the second feature region is extracted from the endoscopic image 37 using the second feature region extraction unit 54 shown in FIG. 3.
The endoscopic image input step S14 and the second feature region extraction step S16 shown in FIG. 16 can be regarded as the endoscopic examination. That is, the endoscopic image acquisition unit 41b shown in FIG. 2 sequentially inputs the moving image 38 captured using the endoscope 10 and displays it as the endoscopic image 37 on the monitor device 16 shown in FIG. 1.
The second feature region extraction unit 54 shown in FIG. 3 automatically extracts a lesion from the endoscopic image 37 as the second feature region 70. The second feature region extraction unit 54 may instead extract a lesion from the endoscopic image 37 as the second feature region 70 based on extraction information input by the user using the operation device 15.
The first feature region extraction unit 50 shown in FIG. 3 executes the extraction of the first feature region 80 from the CTC image 19 in parallel with the acquisition of the endoscopic image 37 and the extraction of the second feature region 70. The first feature region 80 may instead be extracted and stored in advance.
In the association step S18 shown in FIG. 16, the CTC image 19 and the endoscopic image 37 are associated with each other using the association unit 58 shown in FIG. 3. That is, the association unit 58 associates the first feature region 80 with the second feature region 70, or associates the first feature region 80 with the non-extraction region 76. The result of the association in the association step S18 shown in FIG. 16 is stored in the association result storage unit 68 shown in FIG. 4.
After the association step S18 shown in FIG. 16, a determination step S20 is executed. In the determination step S20, the notification unit 59 shown in FIG. 3 determines whether to execute the first notification described with reference to FIG. 14 or the second notification shown in FIG. 15.
In the determination step S20 shown in FIG. 16, when the first feature region 80 has been associated with the non-extraction region 76 of the endoscopic image 37, the determination is Yes. In the case of a Yes determination, the process proceeds to a first notification step S22. When the first feature region 80 has been associated with the second feature region 70 of the endoscopic image 37 in the determination step S20, the determination is No. In the case of a No determination, the process proceeds to a second notification step S24.
For example, the notification unit 59 shown in FIG. 3 may determine whether a lesion such as a polyp extracted from the CTC image 19 as the first feature region 80 is at a position in a blind spot of the observation range of the endoscope 10 or at a position within the observation range of the endoscope 10.
The notification unit 59 can execute the first notification when a lesion such as a polyp extracted from the CTC image 19 as the first feature region 80 is located in a blind spot of the observation range of the endoscope 10. On the other hand, when a lesion such as a polyp extracted from the CTC image 19 as the first feature region 80 is located within the observation range of the endoscope 10, the second notification can be executed.
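A minimal sketch of the determination in step S20, written as a plain function over an assumed association record; the enum names and the record layout are illustrative, and only the branching (a non-extraction region leading to the first notification, a matched second feature region leading to the second notification) follows the flowchart.

```python
from enum import Enum

class Notification(Enum):
    FIRST = 1   # first feature region matched to a non-extraction region
    SECOND = 2  # first feature region matched to a second feature region

def decide_notification(association):
    """association: mapping with key 'matched_second_feature_region',
    True when the first feature region 80 was associated with a second
    feature region 70, False when it was associated with the
    non-extraction region 76 (e.g. a blind-spot position)."""
    if not association["matched_second_feature_region"]:
        return Notification.FIRST   # step S22: emphasized notification
    return Notification.SECOND      # step S24: normal notification

# Usage: a polyp behind a fold is never extracted from the endoscopic image
assert decide_notification({"matched_second_feature_region": False}) is Notification.FIRST
```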
In the first notification step S22, the notification unit 59 shown in FIG. 3 executes the first notification described with reference to FIG. 14. After the first notification step S22 shown in FIG. 16, the process proceeds to an examination end determination step S26.
In the second notification step S24, the notification unit 59 shown in FIG. 3 executes the second notification described with reference to FIG. 15. After the second notification step S24 shown in FIG. 16, the process proceeds to the examination end determination step S26.
In the examination end determination step S26, the medical image processing apparatus 14 shown in FIG. 2 determines whether the endoscopic examination has ended. When the medical image processing apparatus 14 determines in the examination end determination step S26 that the endoscopic examination has ended, the determination is Yes. In the case of a Yes determination, the medical image processing apparatus 14 ends the notification method.
On the other hand, when the medical image processing apparatus 14 determines in the examination end determination step S26 that the endoscopic examination is continuing, the determination is No. In the case of a No determination, the medical image processing apparatus 14 continues the notification method. That is, in the case of a No determination in the examination end determination step S26, the process returns to the endoscopic image input step S14. Thereafter, the steps from the endoscopic image input step S14 to the examination end determination step S26 are executed until the determination in the examination end determination step S26 becomes Yes.
[Operational Effects]
[1]
According to the endoscope system and the notification method configured as described above, a lesion such as a polyp is extracted from the CTC image 19 as the first feature region 80. The CTC image 19 and the endoscopic image 37 are associated with each other. When the first feature region 80 of the CTC image 19 is associated with the non-extraction region 76 of the endoscopic image 37, the first notification is performed. Owing to the first notification, the user can recognize the presence of a lesion such as a polyp that is not extracted from the endoscopic image 37, for example, one located in a blind spot of the observation range of the endoscope 10. This makes it possible, in an endoscopic examination using the endoscope 10, to suppress the oversight of lesions such as polyps at positions that are blind spots of the observation range of the endoscope 10.
Furthermore, when the presence of a lesion such as a polyp located in a blind spot of the observation range of the endoscope 10 is recognized, the blind spot of the observation range of the endoscope 10 can be observed by pushing aside the fold or the like that obstructs the observation range of the endoscope 10.
[2]
When the first feature region 80 is associated with the second feature region 70, the second notification is performed. The notification level of the first notification is changed relative to the second notification, with the notification level raised. This makes the case in which the first feature region is associated with the non-extraction region 76 easier to recognize than the case in which the first feature region 80 is associated with the second feature region 70.
[3]
The first notification image and the second notification image are overlaid on the endoscopic image 37. This makes it possible to superimpose the first notification image 140 or the second notification image 142 on the endoscopic image 37 without applying any processing to the endoscopic image 37 itself.
[Modifications of Notification]
[First Modification]
FIG. 17 is an explanatory diagram of notification according to a first modification. The density of the first notification image 144 shown in FIG. 17 is changed relative to the second notification image 146. For example, a darker density is applied to the first notification image 144 than to the second notification image 146.
The colors of the first notification image 144 and the second notification image 146 may be changed. For example, black is used for the first notification image 144 and yellow is used for the second notification image 146. That is, a color with higher visibility in the endoscopic image 37 is applied to the first notification image 144 than to the second notification image 146.
According to the notification of the first modification, at least one of the density and the color differs between the first notification image 144 and the second notification image 146. This can make the visibility of the first notification image 144 higher than that of the second notification image 146.
[Second Modification]
FIG. 18 is an explanatory diagram of notification according to a second modification. The first notification image 147 shown in FIG. 18 is displayed blinking. On the other hand, the second notification image 148 is displayed continuously lit. The lit display can be regarded as a normal display.
According to the notification of the second modification, the first notification image 147 is displayed blinking and the second notification image 148 is displayed lit. This can make the visibility of the first notification image 147 higher than that of the second notification image 148.
[Third Modification]
FIG. 19 is an explanatory diagram of notification according to a third modification. The first notification image 147A shown in FIG. 19 is displayed blinking. The second notification image 147B is also displayed blinking. The blinking period of the first notification image 147A is made shorter than that of the second notification image 147B.
According to the notification of the third modification, the first notification image 147A is displayed blinking and the second notification image 147B is displayed blinking, with the blinking period of the first notification image 147A made shorter than that of the second notification image 147B. This can make the visibility of the first notification image 147A higher than that of the second notification image 147B.
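The blinking-period difference can be expressed as a simple time-based on/off test; the period values and the 50 % duty cycle in this sketch are assumptions for illustration.

```python
def is_marker_visible(t_seconds, blink_period):
    """Return True during the 'on' half of each blinking cycle."""
    return (t_seconds % blink_period) < blink_period / 2

FIRST_PERIOD = 0.4   # shorter period: first notification image 147A
SECOND_PERIOD = 1.0  # longer period: second notification image 147B

# At t = 0.3 s the faster marker is already off, the slower one still on
assert not is_marker_visible(0.3, FIRST_PERIOD)
assert is_marker_visible(0.3, SECOND_PERIOD)
```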
The first modification described above can be combined with the second modification or the third modification as appropriate.
[Fourth Modification]
The first notification image 140 may be emphasized, for example by increasing its size continuously or in steps, as the non-extraction region 76 of the endoscopic image 37 approaches the observation range of the endoscope 10. The same applies to the second notification image 142, and also to the case where the notification sound described later is used.
[Fifth Modification]
The first notification image 140 and the second notification image 142 may be displayed on the CTC image 19. The display mode of the first notification image 140 and the second notification image 142 displayed on the CTC image 19 can be changed in the same manner as that of the first notification image 140 and the second notification image 142 displayed on the endoscopic image 37.
[Another Display Example of the First Feature Region]
FIG. 20 is an explanatory diagram of another display example of the first feature region. FIG. 20 shows an example of displaying the first feature regions 80 on the CTC image 19. The CTC image 19 shown in FIG. 20 corresponds to the whole image 19a shown in FIG. 5.
The portion of the path 19c illustrated with a thin line represents the part of the path 19c in the region the endoscope 10 has already finished observing. The portion of the path 19c illustrated with a thick line represents the part of the path 19c in the region the endoscope 10 is about to observe.
The first feature area 80a represents a first feature area 80 that the endoscope 10 has already observed. The first feature area 80b represents the first feature area 80 that the endoscope 10 will observe next, and is therefore highlighted. As with the first notification image 140, the highlighting may use enlargement, color change, blinking, and the like.
The first feature area 80c represents the first feature area 80 that the endoscope 10 will observe after the first feature area 80b. Once the endoscope 10 has observed the first feature area 80b, the first feature area 80c is highlighted.
The monitor device 16 may display the CTC image 19 shown in FIG. 20 instead of the CTC image 19 shown in FIG. 14 and elsewhere. This makes it possible to grasp the positions of first feature areas 80 existing near the position of the endoscope 10. Used together with the first notification image 140 of FIG. 14 and the like, this display contributes to the detection of lesions, such as polyps, that are not detected as second feature regions 70 from the endoscopic image 37.
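The selection of which first feature area 80 to highlight could be sketched as follows, assuming each region carries a position along the path 19c and an observed flag; both attributes are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class FeatureRegion:
    path_position: float   # assumed arc-length coordinate along the path 19c
    observed: bool = False

def next_region_to_highlight(regions: list, endoscope_position: float):
    """Return the first unobserved feature region ahead of the endoscope.

    Mirrors FIG. 20: regions already observed (80a) are drawn normally,
    the nearest unobserved region ahead (80b) is highlighted, and the
    following one (80c) is highlighted only after 80b has been observed.
    """
    ahead = [r for r in regions
             if not r.observed and r.path_position >= endoscope_position]
    return min(ahead, key=lambda r: r.path_position) if ahead else None
```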
In the present embodiment, the CTC image 19 and the endoscopic image 37 are displayed on the monitor device 16 as two screens, but the endoscopic image 37 may instead be displayed on the monitor device 16 in full screen.
[Another embodiment of notification]
[Description of Function]
FIG. 21 is a functional block diagram showing the functions of a medical image processing apparatus that realizes the notification according to another embodiment, in which notification is performed using a notification sound. The medical image processing apparatus 14A shown in FIG. 21 adds a notification sound control unit 200 and a sound source 202 to the medical image processing apparatus 14 shown in FIG. 2. Likewise, the endoscope system 9A shown in FIG. 21 adds a speaker 204 to the endoscope system 9 shown in FIG. 2.
The notification sound control unit 200 outputs, via the speaker 204, a notification sound generated using the sound source 202. The notification sound may be a voice or a warning sound. The notification sound control unit 200 may emphasize the notification sound when the first feature region 80 of the CTC image 19 is associated with the non-extraction region 76 of the endoscopic image 37, for example a region located in a blind spot of the observation range of the endoscope 10, compared with when the first feature region 80 is associated with the second feature region 70 of the endoscopic image 37, for example a region located in the observation range of the endoscope 10. An example of emphasizing the notification sound is raising its volume.
The notification sound output when the first feature region 80 of the CTC image 19 is associated with the non-extraction region 76 of the endoscopic image 37, such as a region located in a blind spot of the observation range of the endoscope 10, is an example of the first notification sound. The notification sound output when the first feature region 80 of the CTC image 19 is associated with the second feature region 70 of the endoscopic image 37, such as a region located in the observation range of the endoscope 10, is an example of the second notification sound.
The notification sound control unit 200, the sound source 202, and the speaker 204 are each an example of a component of the notification sound output unit.
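A hedged sketch of the volume emphasis described above; the class name, gain value, and volume scale are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class NotificationSoundControl:
    """Illustrative stand-in for the notification sound control unit 200."""
    base_volume: float = 0.5    # volume of the second notification sound
    emphasis_gain: float = 2.0  # assumed gain applied to the first sound

    def volume_for(self, matched_to_blind_spot: bool) -> float:
        # The first notification sound (first feature region associated
        # with the non-extraction region 76, e.g. a blind spot) is
        # emphasized by raising its volume over the second sound.
        if matched_to_blind_spot:
            return min(1.0, self.base_volume * self.emphasis_gain)
        return self.base_volume
```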
[Operation and effect of the other notification embodiment]
According to the notification of this embodiment, notification using sound is performed when the first feature region 80 of the CTC image 19 is associated with the non-extraction region 76 of the endoscopic image 37. This enables notification without processing the endoscopic image 37 itself.
[Modifications of other constituent elements]
[Modification of CTC image]
First example
The medical image processing apparatus 14 illustrated in FIG. 2 may include a CTC image generation unit that generates a CTC image 19 from a three-dimensional inspection image such as a CT image. The medical image processing apparatus 14 may acquire a three-dimensional inspection image via the CTC image acquisition unit 41a, and generate the CTC image 19 using the CTC image generation unit.
Second example
The viewpoint P shown in FIG. 5 is not limited to positions on the path 19c; it can be set at an arbitrary position. The viewing direction of the viewpoint image 19b can be set arbitrarily to correspond to the imaging direction of the endoscope 10.
Third example
The viewpoint image 19b may be a two-dimensional inspection image obtained by converting the three-dimensional inspection image at an arbitrary cross section of the entire image 19a into a two-dimensional image.
[Modified Example of First Feature Region Extraction]
First example
The first feature region 80 may be extracted from the three-dimensional inspection image used to generate the CTC image 19.
Second example
The first feature region 80 may be extracted and stored in advance. The pre-extracted first feature region 80 may be stored so as to be searchable, using the position information of the first feature region 80 as an index.
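One possible shape for such a position-indexed store, sketched with a one-dimensional path coordinate as the index; class and method names are illustrative:

```python
import bisect

class FeatureRegionIndex:
    """Pre-extracted first feature regions, searchable by position.

    Positions are simplified to a one-dimensional path coordinate here;
    an actual store might index three-dimensional coordinates instead.
    """

    def __init__(self):
        self._positions = []  # kept sorted for range queries
        self._regions = {}

    def add(self, position: float, region) -> None:
        bisect.insort(self._positions, position)
        self._regions[position] = region

    def nearby(self, position: float, radius: float) -> list:
        # Return all regions stored within radius of the given position.
        lo = bisect.bisect_left(self._positions, position - radius)
        hi = bisect.bisect_right(self._positions, position + radius)
        return [self._regions[p] for p in self._positions[lo:hi]]
```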
[Modified Example of Second Feature Region Extraction]
The extraction of the second feature region may be performed by playing back the moving image 38.
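A minimal sketch of such frame-by-frame extraction from the recorded moving image 38, assuming OpenCV is available for playback and `detect` stands in for the second feature region extraction rule:

```python
import cv2  # OpenCV, assumed available for video playback

def extract_from_recording(video_path: str, detect) -> list:
    """Replay a recorded moving image and run a detector on every frame.

    `detect` is any callable mapping a frame to candidate second feature
    regions (for example lesion bounding boxes); it stands in for the
    extraction rule of the embodiment.
    """
    results = []
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:  # end of the recording
            break
        results.append(detect(frame))
    capture.release()
    return results
```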
[Modification of illumination light]
The following modifications can be applied to the specific wavelength band.
First example
A first example of the specific wavelength band is the blue band or the green band of the visible range. The wavelength band of the first example includes a wavelength band of 390 nanometers or more and 450 nanometers or less, or 530 nanometers or more and 550 nanometers or less, and the light of the first example has a peak wavelength within the wavelength band of 390 nanometers or more and 450 nanometers or less, or 530 nanometers or more and 550 nanometers or less.
Second example
A second example of the specific wavelength band is the red band of the visible range. The wavelength band of the second example includes a wavelength band of 585 nanometers or more and 615 nanometers or less, or 610 nanometers or more and 730 nanometers or less, and the light of the second example has a peak wavelength within the wavelength band of 585 nanometers or more and 615 nanometers or less, or 610 nanometers or more and 730 nanometers or less.
Third example
A third example of the specific wavelength band includes a wavelength band in which the absorption coefficient differs between oxygenated hemoglobin and reduced hemoglobin, and the light of the third example has a peak wavelength in a wavelength band in which the absorption coefficient differs between oxygenated hemoglobin and reduced hemoglobin. The wavelength band of the third example includes wavelength bands of 400±10 nanometers, 440±10 nanometers, 470±10 nanometers, or 600 nanometers or more and 750 nanometers or less, and the light of the third example has a peak wavelength in a wavelength band of 400±10 nanometers, 440±10 nanometers, 470±10 nanometers, or 600 nanometers or more and 750 nanometers or less.
Fourth Example
A fourth example of the specific wavelength band is the wavelength band of excitation light that is used for observing fluorescence emitted by a fluorescent substance in the living body and that excites the fluorescent substance, for example a wavelength band of 390 nanometers or more and 470 nanometers or less. Observation of such fluorescence may be called fluorescence observation.
Fifth example
A fifth example of the specific wavelength band is a wavelength band of infrared light. The wavelength band of the fifth example includes a wavelength band of 790 nanometers or more and 820 nanometers or less, or 905 nanometers or more and 970 nanometers or less, and the light of the fifth example has a peak wavelength in a wavelength band of 790 nanometers or more and 820 nanometers or less, or 905 nanometers or more and 970 nanometers or less.
[Example of special light image generation]
The processor 12 may generate a special light image having information of a specific wavelength band based on a normal light image obtained by imaging with white light. Generation here includes acquisition; in this case, the processor 12 functions as a special light image acquisition unit. The processor 12 obtains the signal of the specific wavelength band by performing an operation based on the red, green, and blue, or cyan, magenta, and yellow, color information contained in the normal light image.
Note that red, green, and blue may be represented as RGB (Red, Green, Blue), and cyan, magenta, and yellow as CMY (Cyan, Magenta, Yellow).
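As a rough sketch of such an operation, the signal of a specific wavelength band might be approximated as a weighted combination of the RGB channels; the weights below are purely illustrative assumptions:

```python
import numpy as np

def special_light_signal(rgb: np.ndarray,
                         weights=(0.1, 0.2, 0.7)) -> np.ndarray:
    """Approximate a specific-wavelength-band signal from an RGB image.

    `rgb` is an H x W x 3 array from the normal light image. The weights
    are purely illustrative; real coefficients depend on the sensor's
    spectral response and the target wavelength band.
    """
    w = np.asarray(weights, dtype=np.float32)
    return (rgb.astype(np.float32) * w).sum(axis=-1)
```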
[Generation example of feature quantity image]
The processor 12 may generate a feature quantity image, such as a known oxygen saturation image, based on at least one of the normal light image and the special light image.
[Updating extraction rules using machine learning]
The second feature region extraction unit 54 illustrated in FIG. 3 can perform machine learning using, as training data, the correspondence between the first feature region 80 of the CTC image 19 and the non-extraction region 76 of the endoscopic image 37, and can thereby update the extraction rule for the second feature region. The deep learning algorithm 65 shown in FIG. 2 is applied for the machine learning.
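A hedged sketch of how the stored correspondences might be turned into training data; the inputs and the attribute name `endoscope_patch` are assumptions, and the model update itself is not shown:

```python
def build_training_data(correspondences, background_patches) -> list:
    """Assemble (patch, label) pairs for updating the extraction rule.

    Each correspondence pairs a first feature region 80 of the CTC image
    with the non-extraction region 76 of the endoscopic image where the
    detector missed it; the corresponding endoscopic patch becomes a
    positive example. `endoscope_patch` is an assumed attribute name,
    and retraining the deep learning model itself is left to the
    surrounding training code.
    """
    positives = [(c.endoscope_patch, 1) for c in correspondences]
    negatives = [(p, 0) for p in background_patches]
    return positives + negatives
```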
[Example of application to a program that causes a computer to function as an image processing apparatus]
The image processing method described above can be configured as a program that uses a computer to realize functions corresponding to the respective steps of the image processing method. For example, a program may be configured that causes a computer to realize a CTC image input function, a first feature region extraction function, an endoscopic image input function, a second feature region extraction function, an association function, and a storage function.
The CTC image input function corresponds to the first image input function, and the endoscopic image input function corresponds to the second image input function.
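A minimal sketch of how these functions might be wired together in such a program; every callable here is an assumed stand-in rather than an interface from the embodiment:

```python
def run_notification_pipeline(ctc_image, endoscope_frames,
                              extract_first, extract_second,
                              associate, notify, store):
    """Wire the program's functions together; all callables are assumed.

    extract_first / extract_second realize the two feature region
    extraction functions, associate returns the set of first feature
    regions matched to second feature regions in the current frame,
    notify realizes the notification function, and store the storage
    function.
    """
    first_regions = extract_first(ctc_image)   # first feature regions
    for frame in endoscope_frames:             # real endoscopic images
        second_regions = extract_second(frame)
        matched = associate(first_regions, second_regions)
        for region in first_regions:
            if region not in matched:          # not associated: notify
                notify(region)
        store(frame, second_regions, matched)
```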
A program that causes a computer to realize the image processing functions described above can be stored in a computer-readable information storage medium, which is a tangible, non-transitory information storage medium, and the program can be provided through the information storage medium.
Instead of storing and providing the program in a non-transitory information storage medium, a program signal may be provided via a network.
[Combinations of the embodiments and modifications]
The components described in the above-described embodiment and the components described in the modification can be used in appropriate combination, and some components can be replaced.
In the embodiments of the present invention described above, constituent requirements can be changed, added, or deleted as appropriate without departing from the spirit of the present invention. The present invention is not limited to the embodiments described above, and many modifications are possible by a person having ordinary knowledge in the field within the technical concept of the present invention.
9, 9A endoscope system
10, 10A, 10B endoscope
11 light source device
12 processor
13 display device
14, 14A medical image processing apparatus
15 operation device
16 monitor device
17 network
18 image storage device
19 CTC image
19a entire image
19b, 19b1, 19b2, 19b3, 19b11, 19b12, 19b13, 19b21, 19b22, 19b23, 19b31, 19b32, 19b33 viewpoint image
19c path
19d pointer
20 insertion section
21 operation section
22 universal cord
25 soft part
26 curved part
27 tip part
27a tip surface
28 image pickup element
29 bending operation knob
30 air/water supply button
31 suction button
32 still image capturing instruction unit
33 treatment tool introduction port
35 light guide
36 signal cable
37 endoscopic image
37a, 37b connector
38 moving image
38a, 38a1, 38a11, 38a21, 38a31, 38a32, 38a33 frame image
39 still image
41 image acquisition unit
41a CTC image acquisition unit
41b endoscopic image acquisition unit
42 information acquisition unit
43 medical image analysis processing unit
44 display control unit
44a reproduction control unit
44b information display control unit
47 storage unit
48 image storage unit
49 program storage unit
50 first feature region extraction unit
52 first condition setting unit
54 second feature region extraction unit
56 second condition setting unit
58 association unit
59 notification unit
60 notification image generation unit
64 first feature region storage unit
65 deep learning algorithm
66 second feature region storage unit
68 association result storage unit
70, 72, 74 second feature region
76 non-extraction region
80, 80a, 80b, 80c, 80d, 80e, 82, 84 first feature region
100, 120 cross section
102, 122, 150, 160 fold
104, 106, 124, 126 polyp
140, 144, 147, 147A first notification image
142, 146, 147B, 148 second notification image
200 notification sound control unit
202 sound source
204 speaker
P viewpoint
PS start position
PG goal position
n1, n2, n3 fold number
S10 to S26 steps of the notification method

Claims (25)

  1. An endoscope system comprising:
     a first image input unit that inputs a virtual endoscopic image generated from a three-dimensional image of a subject;
     a second image input unit that inputs a real endoscopic image obtained by imaging an observation target of the subject using an endoscope;
     an association unit that associates the virtual endoscopic image with the real endoscopic image;
     a first feature region extraction unit that extracts, from the virtual endoscopic image, a first feature region that meets a prescribed first condition;
     a second feature region extraction unit that extracts, from the real endoscopic image, a second feature region that meets a second condition corresponding to the first condition; and
     a notification unit that performs notification when the first feature region is not associated with the second feature region.
  2. The endoscope system according to claim 1, wherein, when the first feature region is associated with the second feature region, the notification unit notifies that the first feature region has been associated with the second feature region located in the observation range of the endoscope, and
     when the first feature region is not associated with the second feature region, the notification unit changes at least one of the notification method and the notification level compared with the notification method and the notification level used when the first feature region is associated with the second feature region.
  3. The endoscope system according to claim 2, further comprising a display unit that displays the real endoscopic image,
     wherein the notification unit displays, on the display unit, a first notification image notifying that the first feature region is not associated with the second feature region and a second notification image notifying that the first feature region is associated with the second feature region, and displays the first notification image enlarged relative to the second notification image.
  4. The endoscope system according to claim 2, further comprising a display unit that displays the real endoscopic image,
     wherein the notification unit displays, on the display unit, a first notification image notifying that the first feature region is not associated with the second feature region and a second notification image notifying that the first feature region is associated with the second feature region, and the first notification image differs in color from the second notification image.
  5. The endoscope system according to claim 2, further comprising a display unit that displays the real endoscopic image,
     wherein the notification unit displays, on the display unit, a first notification image notifying that the first feature region is not associated with the second feature region and a second notification image notifying that the first feature region is associated with the second feature region, and blinks the first notification image while displaying the second notification image lit.
  6. The endoscope system according to claim 2, further comprising a display unit that displays the real endoscopic image,
     wherein the notification unit blinks, on the display unit, a first notification image notifying that the first feature region is not associated with the second feature region and a second notification image notifying that the first feature region is associated with the second feature region, and makes the blinking cycle of the first notification image shorter than that of the second notification image.
  7. The endoscope system according to any one of claims 3 to 6, wherein the display unit superimposes the first notification image and the second notification image, which are generated separately from the real endoscopic image, on the real endoscopic image.
  8. The endoscope system according to any one of claims 3 to 7, wherein the display unit displays the virtual endoscopic image and displays the position of the endoscope in the virtual endoscopic image.
  9. The endoscope system according to any one of claims 3 to 7, wherein the display unit displays the virtual endoscopic image and displays information of the first feature region.
  10. The endoscope system according to claim 9, wherein the display unit displays the first feature region in an enlarged manner.
  11. The endoscope system according to claim 9, wherein the display unit blinks the first feature region.
  12. The endoscope system according to any one of claims 1 to 11, further comprising a notification sound output unit that outputs a notification sound,
     wherein the notification unit uses the notification sound output unit to output a first notification sound indicating that the first feature region is not associated with the second feature region.
  13. The endoscope system according to claim 12, wherein the notification unit uses the notification sound output unit to output a second notification sound that indicates that the first feature region is associated with the second feature region and that differs from the first notification sound.
  14. The endoscope system according to claim 13, wherein the notification unit makes the volume of the first notification sound larger than that of the second notification sound.
  15. The endoscope system according to any one of claims 2 to 14, wherein the notification unit changes the notification level as the distance between the region of the real endoscopic image associated with the first feature region and the observation position of the real endoscopic image becomes shorter.
  16. The endoscope system according to any one of claims 1 to 15, wherein the first feature region extraction unit extracts the first feature region from the virtual endoscopic image in advance of observation of the real endoscopic image.
  17. The endoscope system according to any one of claims 1 to 15, wherein the first feature region extraction unit sequentially extracts the first feature region from the virtual endoscopic image during observation of the real endoscopic image, in correspondence with the observation of the real endoscopic image.
  18. The endoscope system according to any one of claims 1 to 17, wherein, when a plurality of first feature regions are extracted using the same first condition, the first feature region extraction unit manages the plurality of first feature regions collectively.
  19. The endoscope system according to any one of claims 1 to 18, wherein the first feature region extraction unit applies information of a position in the virtual endoscopic image as the first condition.
  20. The endoscope system according to claim 19, wherein the first feature region extraction unit applies, as the information of the position, the position of a blind spot in the observation range of the endoscope.
  21. The endoscope system according to claim 19 or 20, wherein the first feature region extraction unit applies the back side of a fold as the information of the position.
  22. The endoscope system according to any one of claims 1 to 21, wherein the second feature region extraction unit extracts a lesion as the second feature region.
  23. The endoscope system according to any one of claims 1 to 22, wherein the second feature region extraction unit applies an extraction rule generated using machine learning to extract the second feature region from the real endoscopic image.
  24. A notification method comprising:
     a first image input step of inputting a virtual endoscopic image generated from a three-dimensional image of a subject;
     a second image input step of inputting a real endoscopic image obtained by imaging an observation target of the subject using an endoscope;
     an association step of associating the virtual endoscopic image with the real endoscopic image;
     a first feature region extraction step of extracting, from the virtual endoscopic image, a first feature region that meets a prescribed first condition;
     a second feature region extraction step of extracting, from the real endoscopic image, a second feature region that meets a second condition corresponding to the first condition; and
     a notification step of performing notification when the first feature region is not associated with the second feature region.
  25. A program causing a computer to realize:
     a first image input function of inputting a virtual endoscopic image generated from a three-dimensional image of a subject;
     a second image input function of inputting a real endoscopic image obtained by imaging an observation target of the subject using an endoscope;
     an association function of associating the virtual endoscopic image with the real endoscopic image;
     a first feature region extraction function of extracting, from the virtual endoscopic image, a first feature region that meets a prescribed first condition;
     a second feature region extraction function of extracting, from the real endoscopic image, a second feature region that meets a second condition corresponding to the first condition; and
     a notification function of performing notification when the first feature region is not associated with the second feature region.
PCT/JP2018/039901 2017-10-31 2018-10-26 Endoscope system, reporting method, and program WO2019087969A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019550323A JP6840263B2 (en) 2017-10-31 2018-10-26 Endoscope system and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017210248 2017-10-31
JP2017-210248 2017-10-31

Publications (1)

Publication Number Publication Date
WO2019087969A1 true WO2019087969A1 (en) 2019-05-09

Family

ID=66331877

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/039901 WO2019087969A1 (en) 2017-10-31 2018-10-26 Endoscope system, reporting method, and program

Country Status (2)

Country Link
JP (1) JP6840263B2 (en)
WO (1) WO2019087969A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021171464A1 (en) * 2020-02-27 2021-09-02 オリンパス株式会社 Processing device, endoscope system, and captured image processing method
JP2023513646A (en) * 2021-01-14 2023-04-03 コ,ジファン Colon examination guide device and method using endoscope

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013009956A (en) * 2011-06-01 2013-01-17 Toshiba Corp Medical image display apparatus and medical image diagnostic apparatus
JP2013150650A (en) * 2012-01-24 2013-08-08 Fujifilm Corp Endoscope image diagnosis support device and method as well as program
JP2014230612A (en) * 2013-05-28 2014-12-11 国立大学法人名古屋大学 Endoscopic observation support device
JP2016143194A (en) * 2015-01-30 2016-08-08 ザイオソフト株式会社 Medical image processing device, medical image processing method, and medical image processing program

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013009956A (en) * 2011-06-01 2013-01-17 Toshiba Corp Medical image display apparatus and medical image diagnostic apparatus
JP2013150650A (en) * 2012-01-24 2013-08-08 Fujifilm Corp Endoscope image diagnosis support device and method as well as program
JP2014230612A (en) * 2013-05-28 2014-12-11 国立大学法人名古屋大学 Endoscopic observation support device
JP2016143194A (en) * 2015-01-30 2016-08-08 ザイオソフト株式会社 Medical image processing device, medical image processing method, and medical image processing program

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021171464A1 (en) * 2020-02-27 2021-09-02 オリンパス株式会社 Processing device, endoscope system, and captured image processing method
JP2023513646A (en) * 2021-01-14 2023-04-03 コ,ジファン Colon examination guide device and method using endoscope
JP7374224B2 (en) 2021-01-14 2023-11-06 コ,ジファン Colon examination guide device using an endoscope

Also Published As

Publication number Publication date
JPWO2019087969A1 (en) 2020-11-12
JP6840263B2 (en) 2021-03-10

Similar Documents

Publication Publication Date Title
JP6890184B2 (en) Medical image processing equipment and medical image processing program
JP5675227B2 (en) Endoscopic image processing apparatus, operation method, and program
JP7270626B2 (en) Medical image processing apparatus, medical image processing system, operating method of medical image processing apparatus, program, and storage medium
US11607109B2 (en) Endoscopic image processing device, endoscopic image processing method, endoscopic image processing program, and endoscope system
JP7346693B2 (en) Medical image processing device, processor device, endoscope system, operating method of medical image processing device, and program
JP6405138B2 (en) Image processing apparatus, image processing method, and image processing program
JP7308258B2 (en) Medical imaging device and method of operating medical imaging device
JP7125479B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, METHOD OF OPERATION OF MEDICAL IMAGE PROCESSING APPARATUS, AND ENDOSCOPE SYSTEM
JP7143504B2 (en) Medical image processing device, processor device, endoscope system, operating method and program for medical image processing device
JP2012024509A (en) Image processor, method, and program
US20210366593A1 (en) Medical image processing apparatus and medical image processing method
JPWO2019130868A1 (en) Image processing equipment, processor equipment, endoscopic systems, image processing methods, and programs
WO2019087969A1 (en) Endoscope system, reporting method, and program
US11481944B2 (en) Medical image processing apparatus, medical image processing method, program, and diagnosis support apparatus
JP7148534B2 (en) Image processing device, program, and endoscope system
JP7122328B2 (en) Image processing device, processor device, image processing method, and program
JP7335157B2 (en) LEARNING DATA GENERATION DEVICE, OPERATION METHOD OF LEARNING DATA GENERATION DEVICE, LEARNING DATA GENERATION PROGRAM, AND MEDICAL IMAGE RECOGNITION DEVICE
CN116724334A (en) Computer program, learning model generation method, and operation support device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18873161

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019550323

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18873161

Country of ref document: EP

Kind code of ref document: A1