WO2022059233A1 - Image processing device, endoscope system, operation method for image processing device, and program for image processing device


Info

Publication number
WO2022059233A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
recognition
recognition target
processing
recording
Application number
PCT/JP2021/010864
Other languages
French (fr)
Japanese (ja)
Inventor
裕哉 木村
Original Assignee
富士フイルム株式会社
Application filed by 富士フイルム株式会社 (FUJIFILM Corporation)
Publication of WO2022059233A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; illuminating arrangements therefor
    • A61B 1/04: Instruments combined with photographic or television appliances
    • A61B 1/045: Control thereof

Definitions

  • The present invention relates to an image processing device that performs image recognition processing, an endoscope system, an operation method for the image processing device, and a program for the image processing device.
  • Recording is performed to record, as a moving image, an image obtained by photographing an observation target with an endoscope (hereinafter, an endoscopic image), for use as evidence in preparation for a medical accident or for presentation at a conference.
  • Control such as starting or stopping this recording is usually performed by manual operation of a recording device or the like.
  • For example, the operation to start recording is performed manually before an examination starts, and the operation to stop recording is performed manually after the examination ends.
  • Alternatively, the user may manually start and stop recording only for scenes of which he or she wants to keep a moving image.
  • An endoscope image processing device that controls such recording is known (Patent Document 1).
  • Also known is an endoscopic image recording device that, when recording an image signal obtained by photographing the inside of a body cavity of a subject with an endoscope, controls the start and stop of recording by determining whether or not the image signal contains red in an amount equal to or greater than a threshold value (Patent Document 2).
  • A device is also known in which observation images are sequentially recorded and displayed as a plurality of recorded images during the period from when detection of a lesion candidate region starts until it is interrupted (Patent Document 3).
  • An object of the present invention is to provide an image processing device, an endoscope system, an operation method for the image processing device, and a program for the image processing device that efficiently record endoscopic images when image recognition processing is performed in parallel based on a plurality of types of endoscopic images.
  • The present invention is an image processing device that performs image recognition processing based on an image obtained by photographing an observation target with an endoscope, and the image processing device includes an image processor.
  • The image processor acquires a plurality of types of recognition target images based on the image, controls a display so that at least one of the plurality of types of recognition target images is continuously displayed, performs image recognition processing on the plurality of types of recognition target images in parallel for each type of recognition target image, acquires the recognition processing results obtained by the image recognition processing, and, based on the recognition processing results of all the acquired types of recognition target images, controls the recording operation when recording a moving image of at least one of the plurality of types of recognition target images.
  • The recognition processing result includes information on whether or not the recognition target image satisfies a preset condition, and it is preferable that the image processor starts or continues recording when one or more of the recognition processing results of all types include information that the condition is satisfied.
  • It is preferable that the image processor stops recording, if recording is in progress, when the recognition processing results of all types include only information that the condition is not satisfied.
  • When recording a moving image, the image processor preferably attaches to the moving image information on the condition that caused the recording to start, continue, or stop.
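  • As a rough illustration of this recording control logic (a minimal sketch under assumed names, not the patent's implementation; RecognitionResult and RecordingController are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    image_type: str      # which type of recognition target image produced this result
    condition_met: bool  # whether the preset condition is satisfied

class RecordingController:
    """Start, continue, or stop moving-image recording from the per-type results."""

    def __init__(self) -> None:
        self.recording = False

    def update(self, results: list[RecognitionResult]) -> None:
        met = [r.image_type for r in results if r.condition_met]
        if met:
            # One or more results satisfy the condition: start or continue recording.
            if not self.recording:
                self.recording = True
                print(f"recording started (condition met by: {met})")
        elif self.recording:
            # No result of any type satisfies the condition: stop recording.
            self.recording = False
            print("recording stopped")
```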
  • It is preferable that the image processor acquires in advance correspondence information in which an observation target satisfying the condition is associated with a recognition target image obtained by photographing that observation target, and performs image recognition processing on a newly acquired recognition target image based on the correspondence information.
  • It is preferable that the image processor acquires the correspondence information for each type of recognition target image and performs image recognition processing on a newly acquired recognition target image based on the correspondence information of the corresponding type.
  • The condition is preferably that a specific site or an object other than a living body is detected by the image processor, or that the image is determined to include a region in a specific state.
  • It is preferable that the plurality of types of recognition target images include at least a first recognition target image and a second recognition target image of mutually different types, and that the image processor performs a first image recognition process on the first recognition target image and a second image recognition process, different from the first, on the second recognition target image.
  • It is preferable that the first image recognition process concerns detection of a specific site or an object other than a living body in the first recognition target image, and that the second image recognition process concerns determination that the second recognition target image includes a region in a specific state.
  • A recognition target image may be generated by performing enhancement processing on the image, and it is preferable that the image processor distinguishes the types of recognition target images according to the presence or absence and the type of enhancement processing and acquires each of the distinguished recognition target images as one type of recognition target image.
  • The types of enhancement processing are color expansion processing and/or structure enhancement processing, and it is preferable that the image processor acquires each recognition target image generated by a different type of enhancement processing as one type of recognition target image.
  • The endoscope system of the present invention includes the image processing device and a light source unit that emits illumination light for irradiating the observation target.
  • It is preferable that the image processor acquires, as one type of recognition target image, an image obtained by photographing the observation target illuminated by each of a plurality of illumination lights emitted by the light source unit and having mutually different spectra.
  • It is preferable that the image processor acquires, as one type of recognition target image, an image obtained by photographing the observation target illuminated by white illumination light emitted by the light source unit.
  • It is preferable that the image processor acquires, as one type of recognition target image, an image obtained by photographing the observation target illuminated by illumination light that includes narrow-band light in a preset wavelength band emitted by the light source unit.
  • It is preferable that the light source unit repeatedly emits each of the plurality of illumination lights having different spectra in a preset order.
  • It is preferable that the light source unit includes a light source processor that emits a first illumination light and a second illumination light having mutually different spectra, emitting the first illumination light in a first emission pattern during a first illumination period and the second illumination light in a second emission pattern during a second illumination period, and switching between the first illumination light and the second illumination light; that the endoscope includes an image sensor that outputs a first image signal obtained by photographing the observation target illuminated by the first illumination light and a second image signal obtained by photographing the observation target illuminated by the second illumination light; and that the image processor performs the first image recognition process on a first recognition target image based on the first image signal, performs the second image recognition process on a second recognition target image based on the second image signal, and controls the recording operation based on the first image recognition result from the first image recognition process and the second image recognition result from the second image recognition process.
  • The present invention is also an operation method for an image processing device that performs image recognition processing based on an image obtained by photographing an observation target with an endoscope. The method comprises: an image acquisition step of acquiring a plurality of types of recognition target images based on the image; a display control step of controlling a display so that at least one of the plurality of types of recognition target images is continuously displayed; an image recognition processing step of performing image recognition processing on the plurality of types of recognition target images in parallel for each type of recognition target image; a recognition processing result acquisition step of acquiring, for each type of recognition target image, the recognition processing result obtained by the image recognition processing; and a recording control step of controlling, when recording a moving image of at least one of the plurality of types of recognition target images, the recording operation based on the recognition processing results of all the acquired types of recognition target images.
  • The present invention is also a program for an image processing device, installed in an image processing device that performs image recognition processing based on an image obtained by photographing an observation target with an endoscope. The program causes a computer to realize: an image acquisition function of acquiring a plurality of types of recognition target images based on the image; a display control function of controlling a display so that at least one of the plurality of types of recognition target images is continuously displayed; an image recognition processing function of performing image recognition processing on the plurality of types of recognition target images in parallel for each type of recognition target image; a recognition processing result acquisition function of acquiring, for each type of recognition target image, the recognition processing result obtained by the image recognition processing; and a recording control function of controlling, when recording a moving image of at least one of the plurality of types of recognition target images, the recording operation based on the recognition processing results of all the acquired types of recognition target images.
  • According to the present invention, when image recognition processing is performed in parallel based on a plurality of types of endoscopic images, the endoscopic images can be recorded efficiently.
  • The endoscope system 10 includes an endoscope 12, a light source device 14, a processor device 16, a display 18, and a keyboard 19.
  • The endoscope 12 photographs the observation target.
  • The light source device 14 emits illumination light for irradiating the observation target.
  • The processor device 16 performs system control of the endoscope system 10.
  • The display 18 is a display unit that displays an observation image or the like based on an endoscopic image.
  • The keyboard 19 is an input device for entering settings into the processor device 16 and the like.
  • The endoscope system 10 has three observation modes: a normal observation mode, a special observation mode, and a diagnosis support mode.
  • In the normal observation mode, a normal observation image with natural colors, obtained by irradiating the observation target with normal light such as white light and photographing it, is displayed on the display 18 as the observation image.
  • In the special observation mode, a special image in which a specific structure or the like is emphasized, obtained by illuminating the observation target with special light having a wavelength band or spectrum different from that of normal light, is displayed on the display 18 as the observation image.
  • In the diagnosis support mode, image recognition processing is performed for each type of recognition target image on a plurality of types of recognition target images based on the endoscopic image.
  • The recognition target image is an image based on the endoscopic image and is the image subjected to image recognition processing.
  • The display 18 continuously displays at least one of the plurality of types of recognition target images as the observation image.
  • The plurality of recognition processing results obtained by the image recognition processing performed for each type of recognition target image are used to control the recording operation.
  • The type of recognition target image is distinguished by the spectrum of the illumination light used when the observation target is photographed and/or by the method of image processing used to generate the recognition target image (hereinafter, recognition-target-image generation processing). Details of the types of recognition target images will be described later.
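  • As a rough illustration only (the names below are hypothetical, not from the patent), the two distinguishing attributes can be thought of as a composite key:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RecognitionImageType:
    """A recognition target image type is identified by the illumination
    spectrum at capture and the generation processing applied afterwards."""
    illumination: str  # e.g. "white" or "violet_narrowband"
    processing: str    # e.g. "none", "pseudo_color", "color_expansion"

# Images differing in either attribute are different types.
assert RecognitionImageType("white", "none") != RecognitionImageType("white", "pseudo_color")
```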
  • The endoscope 12 has an insertion portion 12a to be inserted into the body of a subject having an observation target, an operation portion 12b provided at the proximal end of the insertion portion 12a, and a bending portion 12c and a tip portion 12d provided on the distal end side of the insertion portion 12a.
  • The operation portion 12b is provided with a treatment tool insertion port (not shown), a first scope button 12f, a second scope button 12g, and a zoom operation unit 12h.
  • The treatment tool insertion port is an opening for inserting a treatment tool such as biopsy forceps, a snare, or an electric scalpel.
  • A treatment tool inserted into the treatment tool insertion port protrudes from the tip portion 12d.
  • The first scope button 12f is a freeze button used for acquiring a still image.
  • The second scope button 12g is used for switching the observation mode; various operations can be assigned to the scope buttons.
  • Operating the zoom operation unit 12h enlarges or reduces the observation target for photographing.
  • The light source device 14 includes a light source unit 20 having light sources that emit illumination light, and a light source processor 22 that controls operation of the light source unit 20.
  • The light source unit 20 emits the illumination light that illuminates the observation target.
  • Here, emitting illumination light includes emitting light, such as excitation light, used to produce the illumination light.
  • The light source unit 20 includes, for example, a laser diode, an LED (Light Emitting Diode), a xenon lamp, or a halogen lamp as a light source, and emits at least white illumination light (hereinafter, white light) or excitation light used to produce white light.
  • White here includes so-called pseudo-white, which is substantially equivalent to white when imaging the observation target with the endoscope 12.
  • As necessary, the light source unit 20 includes a phosphor that emits light when irradiated with excitation light, and an optical filter that adjusts the wavelength band, spectrum, light amount, or the like of the illumination light or excitation light.
  • The light source unit 20 can emit illumination light composed of at least narrow-band light (hereinafter, narrow-band light).
  • Narrow-band light means light whose wavelength band is substantially single in relation to the characteristics of the observation target and/or the spectral characteristics of the color filters of the image sensor 45; for example, light whose wavelength band is about ±20 nm or less (preferably about ±10 nm or less) is narrow-band.
  • The light source unit 20 can emit a plurality of illumination lights having mutually different spectra.
  • The plurality of illumination lights may include narrow-band light.
  • The light source unit 20 can also emit light having a specific wavelength band or spectrum necessary for capturing an image used to calculate biological information, such as the oxygen saturation of hemoglobin contained in the observation target.
  • In the present embodiment, the light source unit 20 has LEDs of four colors: a V-LED 20a, a B-LED 20b, a G-LED 20c, and an R-LED 20d.
  • The V-LED 20a emits purple light V with a center wavelength of 405 nm and a wavelength band of 380 to 420 nm.
  • The B-LED 20b emits blue light B with a center wavelength of 460 nm and a wavelength band of 420 to 500 nm.
  • The G-LED 20c emits green light G with a wavelength band of 480 to 600 nm.
  • The R-LED 20d emits red light R with a center wavelength of 620 to 630 nm and a wavelength band of 600 to 650 nm.
  • The center wavelengths of the V-LED 20a and the B-LED 20b have a width of about ±20 nm, preferably about ±5 nm to about ±10 nm.
  • The purple light V is short-wavelength light used in the special observation mode or the diagnosis support mode to display with emphasis the density of superficial blood vessels, intramucosal hemorrhage, extramucosal hemorrhage, and the like, and preferably includes 410 nm in its center wavelength or peak wavelength. The purple light V and/or the blue light B is preferably narrow-band light.
  • The light source processor 22 controls the timing of turning on, turning off, and blocking each light source constituting the light source unit 20, as well as the light intensity, the emission amount, and the like. As a result, the light source unit 20 can emit a plurality of types of illumination light with different spectra, each for a preset period and with a preset emission amount.
  • The light source processor 22 controls the turning on and off of the V-LED 20a, B-LED 20b, G-LED 20c, and R-LED 20d, their light intensity or emission amount when lit, the insertion and removal of optical filters, and the like, by inputting an independent control signal to each.
  • By controlling each of the LEDs 20a to 20d independently, the light source processor 22 can emit purple light V, blue light B, green light G, or red light R while independently changing its light intensity or light amount per unit time. The light source processor 22 can thereby emit a plurality of illumination lights having different spectra, for example white illumination light, a plurality of types of illumination light with mutually different spectra, or illumination light composed of at least narrow-band light.
  • In the normal observation mode, the light source processor 22 controls the LEDs 20a to 20d so as to emit white light in which the light intensity ratio among the purple light V, blue light B, green light G, and red light R is Vc:Bc:Gc:Rc. Each of Vc, Bc, Gc, and Rc is larger than 0 (zero).
  • In the special observation mode, the light source processor 22 controls the LEDs 20a to 20d so as to emit special light, such as short-wavelength narrow-band light, in which the light intensity ratio among the purple light V, blue light B, green light G, and red light R is Vs:Bs:Gs:Rs.
  • The light intensity ratio Vs:Bs:Gs:Rs differs from the light intensity ratio Vc:Bc:Gc:Rc used in the normal observation mode and is determined appropriately according to the purpose of observation; under control of the light source processor 22, the light source unit 20 can therefore emit a plurality of special lights with mutually different spectra.
  • For example, when emphasizing superficial blood vessels, Vs is preferably larger than the other components Bs, Gs, and Rs, and when emphasizing medium-depth blood vessels, Gs is preferably larger than the other components Vs, Bs, and Rs.
  • Except for Vc:Bc:Gc:Rc, a light intensity ratio includes cases where the ratio of at least one semiconductor light source is 0 (zero), that is, cases where one or more of the semiconductor light sources are not lit. For example, even when the light intensity ratio among the purple light V, blue light B, green light G, and red light R is 1:0:0:0, so that only one semiconductor light source is lit and the other three are not, the light source is regarded as having a light intensity ratio.
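  • As an informal sketch of this intensity-ratio control (the mode names and ratio values below are illustrative assumptions, not values from the patent):

```python
# Hypothetical per-mode light intensity ratios (V:B:G:R); values are illustrative.
INTENSITY_RATIOS = {
    "normal": (1.0, 1.0, 1.0, 1.0),               # white light: all components > 0
    "special_superficial": (4.0, 1.0, 0.5, 0.2),  # Vs dominant to emphasize superficial vessels
    "violet_only": (1.0, 0.0, 0.0, 0.0),          # only the V-LED lit; still a valid ratio
}

def drive_leds(mode: str, total_power: float) -> dict[str, float]:
    """Split the requested emission amount across the four LEDs
    according to the mode's light intensity ratio."""
    v, b, g, r = INTENSITY_RATIOS[mode]
    s = v + b + g + r
    return {
        "V-LED": total_power * v / s,
        "B-LED": total_power * b / s,
        "G-LED": total_power * g / s,
        "R-LED": total_power * r / s,
    }

print(drive_leds("special_superficial", total_power=100.0))
```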
  • In the diagnosis support mode, in order to acquire a plurality of types of recognition target images, the light source processor 22 automatically switches among a plurality of illumination lights having different spectra and emits them in a specific pattern, repeatedly emitting each of the illumination lights in a preset order.
  • Specifically, the first illumination light is emitted in a first emission pattern during a first illumination period, and the second illumination light is emitted in a second emission pattern during a second illumination period.
  • The first illumination light and the second illumination light are illumination lights having mutually different spectra. In the present embodiment, the first illumination light is white light.
  • The second illumination light is illumination light with which an image suitable for a specific recognition process is obtained by illuminating the observation target; in the present embodiment, the second illumination light is the purple light V.
  • The first emission pattern is the emission order of the first illumination light, and the second emission pattern is the emission order of the second illumination light; the element constituting each pattern is the frame, the unit of photographing.
  • A frame means a period including at least the interval from a specific timing in the image sensor 45 until completion of signal readout.
  • One photographing operation and one image acquisition are performed per frame.
  • Either the first illumination light or the second illumination light is emitted at a time; the two are not emitted simultaneously.
  • One emission cycle is formed by combining at least one first emission pattern and at least one second emission pattern, and illumination proceeds by repeating the emission cycle.
  • The type of illumination light is distinguished by its spectrum; illumination lights having different spectra are different types of illumination light.
  • The first emission pattern is preferably a first A emission pattern or a first B emission pattern.
  • In the first A emission pattern, as in the emission cycle Q1, the number of frames FL in the first illumination period P1 is the same in each first illumination period P1.
  • In the first B emission pattern, as in the emission cycle Q2, the number of frames FL in the first illumination period P1 differs among the first illumination periods P1.
  • In either case, the first illumination light L1 has the same spectrum and is white light.
  • The second emission pattern is preferably a second A emission pattern, a second B emission pattern, a second C emission pattern, or a second D emission pattern.
  • The second illumination light L2 may include illumination lights having different spectra; these are written as second illumination light L2a and second illumination light L2b to distinguish them, and "second illumination light L2" refers to them collectively.
  • In the second A emission pattern, the number of frames FL in the second illumination period P2 is the same in each second illumination period P2, and the spectrum of the second illumination light L2 is also the same (the second illumination light L2a) in each second illumination period P2. The second illumination light L2 is emitted in the second A emission pattern in the emission cycle Q2 as well.
  • In the second B emission pattern, the number of frames FL in the second illumination period P2 is the same in each second illumination period P2, while the spectrum of the second illumination light L2 differs among the second illumination periods P2, being the second illumination light L2a or the second illumination light L2b.
  • In the second C emission pattern, the number of frames FL in the second illumination period P2 differs among the second illumination periods P2, while the spectrum of the second illumination light L2 is the same (the second illumination light L2a) in each second illumination period P2.
  • In the second D emission pattern, the number of frames FL in the second illumination period P2 differs among the second illumination periods P2, and the spectrum of the second illumination light L2 also differs, being the second illumination light L2a or the second illumination light L2b in each second illumination period P2.
  • The light source processor 22 repeats an emission cycle configured by combining these first and second emission patterns.
  • The emission cycle Q1 combines the first A emission pattern and the second A emission pattern.
  • The emission cycle Q2 combines the first B emission pattern and the second A emission pattern.
  • The emission cycle Q3 combines the first A emission pattern and the second B emission pattern.
  • The emission cycle Q4 combines the first A emission pattern and the second C emission pattern.
  • The emission cycle Q5 combines the first A emission pattern and the second D emission pattern.
  • The spectrum of the first illumination light L1 may also differ in each first illumination period P1.
  • The light source processor 22 may change the first emission pattern or the second emission pattern based on the recognition processing result described later. Changing the emission pattern includes changing the type of illumination light; for example, the second emission pattern may be switched from the second A emission pattern to the second B emission pattern based on the recognition processing result, or from a second A emission pattern using the second illumination light L2a to a second A emission pattern using the second illumination light L2b.
  • The first illumination period P1 is preferably longer than the second illumination period P2, and is preferably two frames or more. In the present embodiment, the first illumination period P1 is set to two frames and the second illumination period P2 to one frame. Since the first illumination light L1 is used to generate the observation image displayed on the display 18, a bright observation image is preferably obtained by illuminating the observation target with the first illumination light L1.
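  • A toy sketch of such a cycle (assuming the two-frame first illumination period and one-frame second illumination period above; the labels L1 and L2a follow the text, everything else is illustrative):

```python
from itertools import cycle

# One emission cycle: two frames of first illumination (white light L1)
# followed by one frame of second illumination (e.g. purple light, L2a).
EMISSION_CYCLE = ["L1", "L1", "L2a"]

def frame_schedule(num_frames: int) -> list[str]:
    """Repeat the emission cycle frame by frame; exactly one
    illumination light is emitted per frame."""
    source = cycle(EMISSION_CYCLE)
    return [next(source) for _ in range(num_frames)]

print(frame_schedule(9))  # ['L1', 'L1', 'L2a', 'L1', 'L1', 'L2a', 'L1', 'L1', 'L2a']
```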
  • Light emitted by each of the LEDs 20a to 20d enters a light guide 41 via an optical path coupling portion (not shown) composed of mirrors, lenses, and the like.
  • The light guide 41 is built into the endoscope 12 and a universal cord (not shown).
  • The universal cord connects the endoscope 12, the light source device 14, and the processor device 16.
  • The light guide 41 propagates light from the optical path coupling portion to the tip portion 12d of the endoscope 12.
  • The tip portion 12d of the endoscope 12 is provided with an illumination optical system 30a and a photographing optical system 30b.
  • The illumination optical system 30a has an illumination lens 42, and the illumination light propagated through the light guide 41 is emitted toward the observation target through the illumination lens 42.
  • The photographing optical system 30b has an objective lens 43, a zoom lens 44, and an image sensor 45.
  • The image sensor 45 photographs the observation target using light returning from the observation target through the objective lens 43 and the zoom lens 44, such as reflected light of the illumination light (in addition to reflected light, this includes scattered light, fluorescence emitted by the observation target, and fluorescence caused by a drug administered to the observation target).
  • The zoom lens 44 moves in response to operation of the zoom operation unit 12h, enlarging or reducing the image of the observation target.
  • The image sensor 45 has, for each pixel, one color filter out of a plurality of color filters.
  • In the present embodiment, the image sensor 45 is a color sensor having primary color system color filters.
  • Specifically, the image sensor 45 has R pixels with a red color filter (R filter), G pixels with a green color filter (G filter), and B pixels with a blue color filter (B filter).
  • As the image sensor 45, a CCD (Charge Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor can be used.
  • Although the image sensor 45 of the present embodiment is a primary color system color sensor, a complementary color system color sensor can also be used.
  • A complementary color sensor has, for example, cyan pixels with cyan color filters, magenta pixels with magenta color filters, yellow pixels with yellow color filters, and green pixels with green color filters.
  • When a complementary color sensor is used, the images obtained from the pixels of these colors can be converted into the same images as those obtained with a primary color sensor by complementary-to-primary color conversion.
  • The same applies when a primary color or complementary color sensor has one or more types of pixels with characteristics other than the above, such as W pixels (white pixels that receive light in almost all wavelength bands).
  • Although the image sensor 45 of the present embodiment is a color sensor, a monochrome sensor without color filters may also be used.
  • The endoscope 12 includes a photographing processor 46 that controls the image sensor 45.
  • The control by the photographing processor 46 differs for each observation mode.
  • In the normal observation mode, the photographing processor 46 controls the image sensor 45 so as to photograph the observation target illuminated by normal light. Thereby, a Bc image signal is output from the B pixels of the image sensor 45, a Gc image signal from the G pixels, and an Rc image signal from the R pixels.
  • In the special observation mode, the photographing processor 46 controls the image sensor 45 so as to photograph the observation target illuminated by special light. Thereby, a Bs image signal is output from the B pixels of the image sensor 45, a Gs image signal from the G pixels, and an Rs image signal from the R pixels.
  • In the diagnosis support mode, the photographing processor 46 controls the image sensor 45 so as to photograph the observation target illuminated by the first illumination light L1 or the second illumination light L2. When the observation target is illuminated by the first illumination light L1, a B1 image signal is output from the B pixels of the image sensor 45, a G1 image signal from the G pixels, and an R1 image signal from the R pixels. When the observation target is illuminated by the second illumination light L2, a B2 image signal is output from the B pixels of the image sensor 45, a G2 image signal from the G pixels, and an R2 image signal from the R pixels.
  • The processor device 16 incorporates programs (not shown) related to the processing performed by a central control unit 51, an image acquisition unit 52, an image processing unit 56, a display control unit 57, and the like, described later.
  • The functions of the central control unit 51, the image acquisition unit 52, the image processing unit 56, and the display control unit 57 are realized by running these programs on the central control unit 51, which is composed of an image processor included in the processor device 16 functioning as an image processing device.
  • The central control unit 51 performs overall control of the endoscope system 10, such as synchronizing the irradiation timing of the illumination light with the photographing timing.
  • The central control unit 51 also inputs settings into each part of the endoscope system 10, such as the light source processor 22, the photographing processor 46, and the image processing unit 56.
  • The image acquisition unit 52 acquires from the image sensor 45 an image of the observation target captured using the pixels of each color, that is, a RAW image.
  • A RAW image is an image (endoscopic image) before demosaic processing. As long as it has not yet been demosaiced, an image obtained by applying arbitrary processing such as noise reduction to the image acquired from the image sensor 45 is also a RAW image.
  • The image acquisition unit 52 includes a DSP (Digital Signal Processor) 53, a noise reduction unit 54, and a conversion unit 55 in order to perform various kinds of processing on the acquired RAW image as needed.
  • The DSP 53 includes, for example, an offset processing unit, a defect correction processing unit, a demosaic processing unit, a linear matrix processing unit, and a YC conversion processing unit (none shown).
  • Using these, the DSP 53 performs various kinds of processing on the RAW image or on images generated from the RAW image.
  • The offset processing unit performs offset processing on the RAW image.
  • Offset processing reduces the dark current component of the RAW image and sets an accurate zero level.
  • Offset processing is sometimes called clamp processing.
  • The defect correction processing unit performs defect correction processing on the RAW image.
  • Defect correction processing corrects or generates the pixel value of a RAW image pixel corresponding to a defective pixel of the image sensor 45, used when the image sensor 45 includes pixels that are defective due to the manufacturing process or change over time.
  • The demosaic processing unit performs demosaic processing on the RAW image of each color corresponding to each color filter.
  • Demosaic processing generates, by interpolation, the pixel values missing from a RAW image due to the arrangement of the color filters.
  • The linear matrix processing unit performs linear matrix processing on an endoscopic image generated by assigning one or more RAW images to the R, G, and B channels.
  • Linear matrix processing enhances the color reproducibility of the endoscopic image.
  • The YC conversion processing unit converts an endoscopic image generated by assigning one or more RAW images to the R, G, and B channels into an endoscopic image having a luminance channel Y, a color difference channel Cb, and a color difference channel Cr.
  • The noise reduction unit 54 performs noise reduction processing on the endoscopic image having the luminance channel Y and the color difference channels Cb and Cr, using, for example, a moving average method or a median filter method.
  • The conversion unit 55 reconverts the luminance channel Y and the color difference channels Cb and Cr after noise reduction into an endoscopic image having B, G, and R color channels.
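  • For orientation, the processing order described above can be sketched as a simple pipeline (each stage below is an identity stub standing in for the corresponding unit; the function names are hypothetical):

```python
import numpy as np

# Stubs kept as identity so the sketch stays short and runnable.
def offset(img):          return img  # clamp: subtract dark current, set zero level
def correct_defects(img): return img  # repair values from defective sensor pixels
def demosaic(img):        return img  # interpolate pixel values missing per color filter
def linear_matrix(img):   return img  # enhance color reproducibility
def to_ycbcr(img):        return img  # convert to luminance Y and color differences Cb, Cr
def reduce_noise(img):    return img  # moving average or median filter (unit 54)
def to_bgr(img):          return img  # reconvert to B, G, R channels (unit 55)

def image_acquisition_pipeline(raw: np.ndarray) -> np.ndarray:
    """Processing order described above: DSP 53 stages first, then the
    noise reduction unit 54 and the conversion unit 55."""
    img = raw
    for stage in (offset, correct_defects, demosaic, linear_matrix,
                  to_ycbcr, reduce_noise, to_bgr):
        img = stage(img)
    return img
```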
  • The image processing unit 56 performs the necessary image processing and calculations on the endoscopic images output by the image acquisition unit 52.
  • The image processing unit 56 generates a plurality of types of recognition target images based on the endoscopic images output by the image acquisition unit 52. It controls the display so that at least one of the plurality of types of recognition target images is continuously displayed, performs image recognition processing on the plurality of types of recognition target images in parallel for each type of recognition target image, and acquires, for each type, the recognition processing result obtained by the image recognition processing. Based on the recognition processing results of all the acquired types of recognition target images, it controls the recording operation when recording a moving image of at least one of the plurality of types of recognition target images.
  • The image processing unit 56 includes a normal observation image processing unit 61, a special observation image processing unit 62, and a diagnosis support image processing unit 63.
  • The normal observation image processing unit 61 performs image processing for normal observation images on the input Rc, Gc, and Bc image signals of one frame.
  • Image processing for normal observation images includes 3×3 matrix processing, gradation conversion processing, color conversion processing such as three-dimensional LUT (Look Up Table) processing, color enhancement processing, and structure enhancement processing such as spatial frequency enhancement.
  • The Rc, Gc, and Bc image signals that have undergone image processing for normal observation images are input to the display control unit 57 as a normal observation image.
  • The special observation image processing unit 62 performs image processing for special observation images on the input Rs, Gs, and Bs image signals of one frame.
  • Image processing for special observation images includes 3×3 matrix processing, gradation conversion processing, color conversion processing such as three-dimensional LUT processing, color enhancement processing, and structure enhancement processing such as spatial frequency enhancement.
  • The Rs, Gs, and Bs image signals that have undergone image processing for special observation images are input to the display control unit 57 as a special observation image.
  • The diagnosis support image processing unit 63 performs image processing and the like in the diagnosis support mode. As shown in FIG. 10, the diagnosis support image processing unit 63 includes a recognition target image generation unit 71, an image recognition processing unit 72, a recognition result acquisition unit 73, a recording control unit 74, and a display image generation unit 75.
  • The recognition target image generation unit 71 generates and acquires a plurality of types of recognition target images based on the endoscopic images output by the image acquisition unit 52.
  • The type of recognition target image is distinguished by one or both of the following two points.
  • The first point is the spectrum of the illumination light used when the observation target is photographed. The recognition target image generation unit 71 therefore acquires, as one type of recognition target image, an image obtained by photographing the observation target illuminated by each of the plurality of illumination lights with mutually different spectra emitted by the light source unit.
  • The second point is the method of image processing used to generate the recognition target image.
  • Such recognition-target-image generation processing includes color expansion processing, structure enhancement processing, and the like.
  • When recognition target images are distinguished by the generation processing method, not performing generation processing also counts: an endoscopic image output by the image acquisition unit 52 to which no recognition-target-image generation processing is applied is likewise one type of recognition target image.
  • Recognition target images differing in either the illumination spectrum or the image processing are regarded as different types of recognition target image.
  • The recognition target image generation unit 71 thus functions as a recognition target image acquisition unit for images distinguished by the spectrum of the illumination light at photographing and/or by the generation processing method.
  • The recognition target image generation unit 71 includes a generation unit for each type of recognition target image: a first recognition target image generation unit 81, a second recognition target image generation unit 82, a third recognition target image generation unit 83, a fourth recognition target image generation unit 84, a fifth recognition target image generation unit 85, and an nth recognition target image generation unit 86, where n is an integer of 6 or more.
  • In each of these, the following illumination light and/or recognition-target-image generation processing is used.
  • The first recognition target image generation unit 81 performs first image processing for generating a first recognition target image.
  • The first image processing is applied to the B1, G1, and R1 image signals obtained by emitting the first illumination light, which is white light.
  • The first image processing is the same as the normal-observation-image processing in the normal observation image processing unit 61, so the first recognition target image is the same as the normal observation image.
  • The first recognition target image is one type of recognition target image. The recognition target image generation unit 71 thus acquires an image obtained by photographing the observation target illuminated by white illumination light as one type of recognition target image.
  • The second recognition target image generation unit 82 performs second image processing for generating a second recognition target image.
  • The second image processing is applied to the B2, G2, and R2 image signals obtained by emitting the second illumination light L2 with the second-illumination spectrum SP1. As shown in FIG. 12, the second illumination light L2 emitted with the spectrum SP1 is preferably light in which the peak intensity of the purple light V is larger than those of the other colors, the blue light B, green light G, and red light R.
  • Specifically, the second image processing is pseudo-color processing in which the B2 image signal is assigned to the B and G display channels and the G2 image signal to the R display channel. This pseudo-color processing yields a second recognition target image in which blood vessels or structures at a specific depth, such as superficial blood vessels, are emphasized.
  • The second recognition target image is one type of recognition target image.
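  • The channel assignment of this pseudo-color processing is simple enough to state directly (a minimal sketch; array names are hypothetical):

```python
import numpy as np

def pseudo_color(b2: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Pseudo-color processing as described above: the B2 image signal is
    assigned to the display B and G channels and the G2 image signal to the
    display R channel, emphasizing structures at a specific depth such as
    superficial blood vessels."""
    return np.stack([g2, b2, b2], axis=-1)  # channel order: R, G, B

frame = pseudo_color(np.zeros((480, 640)), np.zeros((480, 640)))
print(frame.shape)  # (480, 640, 3)
```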
  • The third recognition target image generation unit 83 performs third image processing for generating a third recognition target image.
  • The third image processing is applied to the B2, G2, and R2 image signals obtained by emitting the second illumination light with the second-illumination spectrum SP2.
  • The second illumination light emitted with the spectrum SP2 is preferably light consisting only of the purple light V (peak wavelength, for example, 400 to 420 nm).
  • The third image processing assigns the B2 image signal to the B, G, and R display channels and adjusts the color tone and gradation balance. The third image processing yields a third recognition target image in which extremely superficial blood vessels, shallower than the superficial blood vessels, are emphasized.
  • The third recognition target image is one type of recognition target image.
  • The fourth recognition target image generation unit 84 performs fourth image processing for generating a fourth recognition target image.
  • The fourth image processing is applied to the B2, G2, and R2 image signals obtained by emitting the second illumination light with the second-illumination spectrum SP3, in addition to the B1, G1, and R1 image signals obtained by emitting the first illumination light.
  • The second illumination light with the spectrum SP3 is preferably blue-violet light VB (peak wavelength, for example, 470 to 480 nm), light in a wavelength range where the extinction coefficients of oxyhemoglobin and deoxyhemoglobin differ.
  • The fourth recognition target image generation unit 84 includes: an oxygen saturation signal ratio calculation unit 84a that performs signal ratio calculation processing to compute a first signal ratio (B2/G1) representing the ratio between the B2 and G1 image signals and a second signal ratio (R1/G1) representing the ratio between the R1 and G1 image signals; an oxygen saturation calculation table 84b; an oxygen saturation calculation unit 84c that uses the table to calculate the oxygen saturation corresponding to the first and second signal ratios; and an oxygen saturation image generation unit 84d that generates an oxygen saturation image based on the oxygen saturation.
  • The oxygen saturation image is the fourth recognition target image obtained by the fourth image processing.
  • The fourth recognition target image is one type of recognition target image.
  • The oxygen saturation calculation table 84b stores the correlation between the oxygen saturation and the first and second signal ratios. Specifically, as shown in FIG. 16, it is a two-dimensional table defining isolines of oxygen saturation ELx, EL1, EL2, EL3, ELy, and so on in a two-dimensional space whose axes are the first signal ratio (B2/G1) and the second signal ratio (R1/G1). For example, the isoline ELx represents an oxygen saturation of 0%, EL1 of 30%, EL2 of 50%, and EL3 of 80%.
  • The positions and shapes of the isolines with respect to the first signal ratio (B2/G1) and the second signal ratio (R1/G1) are obtained in advance by physical simulation of light scattering.
  • The first signal ratio (B2/G1) and the second signal ratio (R1/G1) are preferably handled on a log scale.
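  • A compact sketch of this lookup (assuming the table and its axis grids come from the prior light-scattering simulation; all names are illustrative):

```python
import numpy as np

def oxygen_saturation(b2, g1, r1, lut, ratio1_axis, ratio2_axis):
    """Per-pixel oxygen saturation lookup following the description above:
    compute the two signal ratios on a log scale, then index a precomputed
    two-dimensional table (the analogue of table 84b)."""
    ratio1 = np.log(b2 / g1)  # first signal ratio (B2/G1), log scale
    ratio2 = np.log(r1 / g1)  # second signal ratio (R1/G1), log scale
    # Nearest-grid lookup into the two-dimensional table of isoline values.
    i = np.clip(np.searchsorted(ratio1_axis, ratio1), 0, len(ratio1_axis) - 1)
    j = np.clip(np.searchsorted(ratio2_axis, ratio2), 0, len(ratio2_axis) - 1)
    return lut[i, j]  # oxygen saturation in percent
```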
  • The fifth recognition target image generation unit 85 performs fifth image processing for generating a fifth recognition target image.
  • The fifth image processing is color expansion processing; specifically, it is applied to the B2, G2, and R2 image signals obtained by emitting the second illumination light with the second-illumination spectrum SP4.
  • The second illumination light with the spectrum SP4 is preferably light in which the peak intensities of the purple light V and blue light B are larger than those of the green light G and red light R, and in which the intensity of the red light R is higher than in the spectrum SP2.
  • The fifth recognition target image generation unit 85 includes: a color-difference-expansion signal ratio calculation unit 85a that performs signal ratio calculation processing to compute a first signal ratio (B2/G2) representing the ratio between the B2 and G2 image signals and a second signal ratio (G2/R2) representing the ratio between the G2 and R2 image signals; a color difference expansion processing unit 85b that performs color difference expansion processing to expand the color differences among a plurality of observation target ranges based on the first and second signal ratios; and a color difference expansion image generation unit 85d that generates a color-difference-expanded image based on the first and second signal ratios after the color difference expansion processing.
  • The color-difference-expanded image is the fifth recognition target image obtained by the fifth image processing.
  • The fifth recognition target image is one type of recognition target image.
  • The color difference expansion processing preferably expands the distances among a plurality of observation target ranges in the two-dimensional space formed by the first signal ratio (B2/G2) and the second signal ratio (G2/R2).
  • Specifically, it is preferable to expand, while keeping the position of a first range (denoted "1") among the plurality of observation target ranges unchanged before and after the color difference expansion processing, the distance between the first range and a second range (denoted "2"), the distance between the first range and a third range (denoted "3"), and the distance between the first range and a fourth range (denoted "4").
  • The color difference expansion processing is preferably performed by converting the first and second signal ratios into polar coordinates and then adjusting the radius and angle. The first range is preferably a normal region containing no lesion or the like, and the second to fourth ranges are preferably abnormal regions that may contain a lesion or the like.
  • Since the range A1 in the two-dimensional space before the color difference expansion processing expands to the range A2 after the processing, the color differences are emphasized in the resulting image.
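  • A minimal sketch of this polar-coordinate expansion (the anchor point and the gain are illustrative assumptions; the patent allows adjusting the angle as well, which this sketch leaves unchanged):

```python
import numpy as np

def expand_color_difference(ratio1, ratio2, anchor, gain=1.5):
    """Color difference expansion as described above: convert the two
    signal ratios to polar coordinates around the first (normal) range
    and scale the radius, so the anchor stays put while the other
    ranges move away. `anchor` is the (B2/G2, G2/R2) center of the
    first range; `gain` is an illustrative expansion factor."""
    dx, dy = ratio1 - anchor[0], ratio2 - anchor[1]
    radius = np.hypot(dx, dy) * gain  # expand the radial distance
    angle = np.arctan2(dy, dx)        # angle kept unchanged in this sketch
    return anchor[0] + radius * np.cos(angle), anchor[1] + radius * np.sin(angle)
```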
  • The nth recognition target image generation unit 86 generates an nth type of recognition target image.
  • The method and content of the image processing are not limited to the above.
  • For example, enhancement processing such as structure enhancement processing may be performed.
  • In this case, the types of recognition target images are distinguished according to the presence or absence and the type of enhancement processing applied to the endoscopic image, and each of the distinguished recognition target images is acquired as one type of recognition target image.
  • The endoscopic image to be enhanced may or may not have undergone any one of the first to nth image processing.
  • Structure enhancement processing is performed on the acquired endoscopic image so that blood vessels in the observation target are represented with emphasis.
  • Specifically, a density histogram, a graph with pixel value (luminance value) on the horizontal axis and frequency on the vertical axis, is obtained for the endoscopic image, and gradation correction is performed using a gradation correction table stored in advance in a memory (not shown) of the image processing unit 56.
  • The gradation correction table has a gradation correction curve showing the correspondence between input values (horizontal axis) and output values (vertical axis); gradation correction based on this curve widens the dynamic range of the acquired endoscopic image.
  • As a result, low-density portions become lower in density and high-density portions higher, so that, for example, the density difference between blood vessel regions and regions without blood vessels widens and the contrast of blood vessels improves.
  • In the image thus processed, the improved blood vessel contrast enhances the visibility of the blood vessel structure, giving an image that can be used more easily and accurately for determinations such as identifying specific regions of high blood vessel density.
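  • A brief sketch of such a gradation correction table applied as a lookup table (the S-curve below is an illustrative stand-in for the table stored in the image processing unit's memory):

```python
import numpy as np

def gradation_correct(img: np.ndarray, gamma_like: float = 2.0) -> np.ndarray:
    """Apply a gradation correction LUT that lowers low-density values and
    raises high-density values, widening the dynamic range as described."""
    x = np.linspace(0.0, 1.0, 256)
    # S-curve: output below input in the dark half, above it in the bright half.
    curve = np.where(x < 0.5,
                     0.5 * (2 * x) ** gamma_like,
                     1 - 0.5 * (2 * (1 - x)) ** gamma_like)
    lut = (curve * 255).astype(np.uint8)
    return lut[img]  # img is expected to be a uint8 grayscale image
```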
  • The recognition target image generation unit 71 preferably generates, as one type of recognition target image, an image obtained by photographing the observation target illuminated by illumination light including the purple light V and/or the blue light B, which is preferably narrow-band light.
  • The generated recognition target images are sent to the image recognition processing unit 72.
  • the image recognition processing unit 72 performs image recognition processing for a plurality of types of recognition target images in parallel for each type of recognition target image.
  • the image recognition process is performed in order to output a specific state reflected in the observation target image as a recognition process result.
  • the image recognition processing unit 72 includes a first image recognition unit 91, a second image recognition unit 92, a third image recognition unit 93, and a fourth image recognition unit, which are provided for each type of image to be recognized. It includes 94, a fifth image recognition unit 95, and an nth image recognition unit 96.
  • n is an integer of 6 or more, and includes a number of image recognition units corresponding to the number of types of images to be recognized.
  • the image recognition processing unit 72 includes an image recognition unit for each type of the recognition target image, and each image recognition unit performs image recognition processing for the corresponding recognition target image in parallel and independently. Therefore, the first image recognition unit 91 performs the image recognition processing of the first recognition target image, the second image recognition unit 92 performs the image recognition processing of the second recognition target image, and the third image recognition unit 93 performs the image recognition processing. 3 Image recognition processing of the recognition target image is performed, image recognition processing of the 4th recognition target image is carried out in the 4th image recognition unit 94, and image recognition processing of the 5th recognition target image is carried out in the 5th image recognition unit 95. Then, the nth image recognition unit 96 performs the image recognition processing of the nth recognition target image.
  • These image recognition processes are performed in parallel and independently, as sketched below. How many of the first to nth image recognition units are used is set according to how many types of recognition target images are acquired. When a plurality of recognition target images of one type are acquired, it is sufficient to perform image recognition processing on at least one of them; however, it is preferable to perform image recognition processing on all of the plurality of types of recognition target images, because the recognition accuracy improves and the control accuracy of the moving image recording operation also improves.
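A minimal sketch of this per-type parallel dispatch, assuming a thread pool and placeholder recognizers (the names and result strings are illustrative, not the patent's interfaces):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the first to nth image recognition units.
recognizers = {
    "first": lambda img: {"type": "first", "outcome": "detection"},
    "third": lambda img: {"type": "third", "outcome": "fail determination"},
    "fifth": lambda img: {"type": "fifth", "outcome": "pass determination"},
}

def recognize_in_parallel(images: dict) -> dict:
    """Run image recognition for each acquired image type in parallel and
    independently; one recognizer per recognition target image type."""
    with ThreadPoolExecutor(max_workers=max(len(images), 1)) as pool:
        futures = {t: pool.submit(recognizers[t], img)
                   for t, img in images.items() if t in recognizers}
        return {t: f.result() for t, f in futures.items()}

results = recognize_in_parallel({"first": object(), "fifth": object()})
```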
  • Each image recognition unit, from the first image recognition unit 91 to the nth image recognition unit 96, may perform image recognition processing by the same method, or each may use a different method, depending on the type of recognition target image.
  • Examples of image recognition processing methods include pattern recognition using image processing and methods using machine learning technology. Specifically, there are methods that use image-based values such as the pixel values and/or brightness values of the recognition target image, methods that use biological information values such as oxygen saturation calculated from the image, and methods that use correspondence information in which a specific state of the observation target is associated in advance with recognition target images obtained by photographing an observation target including that specific state. By these image recognition processing methods, a specific state of the observation target, obtained by detecting a site or the like appearing in the recognition target image or by determining a disease or the like, is output as the recognition processing result.
  • Specific states of the observation target include a site of the observation target, a specific structure, a non-living object such as a treatment tool, the presence or absence of a lesion, the name of a lesion or disease, the probability or degree of progression of a lesion or disease, and specific biological information values. Specific states of the observation target also include the accuracy of the image recognition result obtained by the image recognition processing, and the like.
  • The site is preferably a characteristic site that appears in the endoscopic image. In the case of the upper gastrointestinal tract, it is, for example, the esophagus, the cardia, the fundus, the gastric body, the pylorus, the angular incisure, or the duodenal bulb; in the case of the large intestine, it is, for example, the cecum, the ileocecal portion, or the ascending colon.
  • Specific structures include blood vessels, ducts, and ridges or depressions such as polyps or cancers. Non-living objects include treatment tools that can be attached to an endoscope, such as biopsy forceps, snares, or foreign-body removal devices, as well as treatment tools for the abdominal cavity used in laparoscopic surgery.
  • Names of lesions or diseases include findings on endoscopy of the upper gastrointestinal tract or large intestine, such as inflammation, redness, bleeding, ulcers, or polyps, as well as gastritis, Barrett's esophagus, cancer, or ulcerative colitis.
  • The biological information value is a value of biological information of the observation target, for example, oxygen saturation, blood vessel density, or a fluorescence value due to a dye.
  • The image recognition processing unit 72 preferably includes a correspondence information acquisition unit that acquires correspondence information in which a specific state of the observation target is associated in advance with recognition target images obtained by photographing an observation target in that specific state. The image recognition processing unit 72 then preferably performs image recognition processing on newly acquired recognition target images based on the correspondence information.
  • The correspondence information is information that associates a recognition target image obtained by photographing the observation target with information such as the specific state of the observation target or the region of that specific state.
  • Each correspondence information acquisition unit may perform learning or feedback so as to additionally acquire, as correspondence information, a newly acquired recognition target image and the specific state included in the recognition processing result output by estimation.
  • Each image recognition unit includes a correspondence information acquisition unit that holds correspondence information for the corresponding recognition target image.
  • The correspondence information acquisition unit performs image recognition processing of the recognition target image based on the correspondence information, and outputs details such as the region related to a specific state of the observation target included in the recognition target image. Note that the output of details regarding the specific state includes content such as "does not include the specific state".
  • It is preferable to provide, for each type of recognition target image, a correspondence information acquisition unit associated with a particular kind of specific state, because each type of recognition target image can then obtain good results from the image recognition processing.
  • For example, for a recognition target image in which blood vessels are emphasized, the corresponding image recognition unit holds correspondence information associated with specific states relating to the blood vessels of the observation target, and is used as an image recognition unit that outputs a specific state relating to those blood vessels.
  • The first image recognition unit 91 includes a first correspondence information acquisition unit 101 and performs first image recognition processing on the first recognition target image. The second image recognition unit 92 includes a second correspondence information acquisition unit 102 and performs second image recognition processing on the second recognition target image; the third image recognition unit 93 includes a third correspondence information acquisition unit 103 and performs third image recognition processing on the third recognition target image; the fourth image recognition unit 94 includes a fourth correspondence information acquisition unit 104 and performs fourth image recognition processing on the fourth recognition target image; the fifth image recognition unit 95 includes a fifth correspondence information acquisition unit 105 and performs fifth image recognition processing on the fifth recognition target image; and the nth image recognition unit 96 includes an nth correspondence information acquisition unit 106 and performs nth image recognition processing on the nth recognition target image.
  • Each correspondence information acquisition unit is, for example, a trained machine learning model. Image recognition processing that uses a trained machine learning model as the correspondence information acquisition unit is preferable, since the specific state of the observation target in a newly acquired recognition target image can then be obtained as the image recognition result more quickly or more accurately.
  • In the present embodiment, each correspondence information acquisition unit performs image recognition processing that outputs a specific state of the observation target using a trained machine learning model. In this case, to obtain good recognition processing results, it is preferable to use a model trained for each type of recognition target image; for example, the first correspondence information acquisition unit 101 and the second correspondence information acquisition unit 102 are preferably trained models different from each other, as sketched below.
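A minimal sketch of the one-trained-model-per-image-type idea, under the assumption of a simple registry keyed by type (the TrainedModel class and its outputs are placeholders, not the patent's models):

```python
# Placeholder for a trained model; a real correspondence information
# acquisition unit would wrap a network learned per image type.
class TrainedModel:
    def __init__(self, target_state: str):
        self.target_state = target_state  # e.g. "specific site", "lesion"

    def predict(self, image) -> str:
        # Dummy inference so the sketch runs end to end.
        return "pass determination" if image is not None else "fail determination"

# One separately trained model per recognition target image type.
models_by_type = {
    "first": TrainedModel("specific site"),
    "second": TrainedModel("lesion"),
}

def recognize(image_type: str, image) -> str:
    """Dispatch a newly acquired recognition target image to the trained
    model of the corresponding type."""
    return models_by_type[image_type].predict(image)

print(recognize("second", object()))  # "pass determination"
```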
  • The plurality of types of recognition target images include at least a first recognition target image and a second recognition target image of mutually different types. The image recognition processing unit 72 preferably performs first image recognition processing on the first recognition target image and, on the second recognition target image, second image recognition processing different from the first image recognition processing.
  • The first image recognition processing is preferably performed for the detection of a specific site or a non-living object in the first recognition target image, while the second image recognition processing is preferably performed for the determination of whether the second recognition target image includes a region in a specific state.
  • In the present embodiment, the first recognition target image, which is similar to a normal observation image and uses the first illumination light L1, the third recognition target image, in which extremely superficial blood vessels shallower than the surface blood vessels are emphasized using the second illumination light L2a, and the fifth recognition target image, which is a color-difference-expanded image using the second illumination light L2b, are handled by the first image recognition unit 91, the third image recognition unit 93, and the fifth image recognition unit 95, respectively, each performing image recognition processing on its recognition target image. Since the first recognition target image is a normal observation image, the first image recognition processing detects a specific site satisfactorily.
  • Since the third recognition target image is an image in which extremely superficial blood vessels and the like are emphasized, the third image recognition processing satisfactorily determines whether a lesion of the surface mucosa or the like is included.
  • Since the fifth recognition target image is a color-difference-expanded image, the fifth image recognition processing can satisfactorily determine, for example, a severe region of ulcerative colitis as a specific region of the observation target. The severity of ulcerative colitis is classified, for example, into mild, moderate, severe, or fulminant.
  • The first illumination light L1 is emitted in the first A emission pattern of five frames, the second illumination light L2 is emitted in the second B emission pattern of one frame, and endoscopic images are acquired. As the second illumination light, the second illumination light L2b and the second illumination light L2a are switched in order. The fifth recognition target image IM2b is acquired based on the endoscopic image obtained with the second illumination light L2b, and the third recognition target image IM2a is acquired based on the endoscopic image obtained with the second illumination light L2a.
  • In the figure, the recognition target images obtained with the second illumination light L2 are shown with diagonal hatching, and the third recognition target image IM2a obtained with the second illumination light L2a and the fifth recognition target image IM2b obtained with the second illumination light L2b are distinguished by different types of diagonal hatching. The first recognition target image IM1 is acquired based on the endoscopic image obtained with the first illumination light L1. A sketch of this emission cycle follows.
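Under the assumption that the single L2 frame alternates between L2b and L2a on successive cycles, the five-plus-one frame pattern can be sketched as a generator (illustrative only):

```python
from itertools import cycle

def emission_pattern():
    """Yield the illumination per frame: five frames of L1 (first A emission
    pattern), then one frame of L2 (second B emission pattern), with the L2
    frame alternating between L2b and L2a on successive cycles."""
    second_lights = cycle(["L2b", "L2a"])
    while True:
        for _ in range(5):
            yield "L1"
        yield next(second_lights)

gen = emission_pattern()
print([next(gen) for _ in range(12)])
# ['L1', 'L1', 'L1', 'L1', 'L1', 'L2b', 'L1', 'L1', 'L1', 'L1', 'L1', 'L2a']
```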
  • The first image recognition unit 91 detects a specific site, for example the rectal portion, in the first recognition target image IM1 and generates a first recognition processing result including information on the detection.
  • The third image recognition unit 93 determines whether the third recognition target image IM2a includes a region where the severity of ulcerative colitis is severe, and generates a third recognition processing result including information on the determination.
  • The fifth image recognition unit 95 determines whether the fifth recognition target image IM2b includes a bleeding spot, and generates a fifth recognition processing result including information on the determination.
  • Information on detection includes at least "detection", meaning the target was detected, or "non-detection", meaning it was not. Information on determination includes at least a "pass determination", meaning the determination condition is satisfied, or a "fail determination", meaning it is not. Therefore, the first recognition processing result includes the information "detection" when the first recognition target image is estimated to include the rectal portion of the observation target; the third recognition processing result includes the information "pass determination" when the third recognition target image is estimated to include a region of the observation target where the severity of ulcerative colitis is severe; and the fifth recognition processing result includes the information "pass determination" when the fifth recognition target image is estimated to include a bleeding spot in the observation target. A data-structure sketch of these results follows.
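A minimal sketch of a result record carrying this detection/determination vocabulary (the class and field names are assumptions for illustration):

```python
from dataclasses import dataclass

POSITIVE = {"detection", "pass determination"}
NEGATIVE = {"non-detection", "fail determination"}

@dataclass
class RecognitionResult:
    image_type: str  # e.g. "first", "third", "fifth"
    outcome: str     # one of POSITIVE or NEGATIVE

    def satisfies_condition(self) -> bool:
        """True when the outcome is 'detection' or 'pass determination'."""
        return self.outcome in POSITIVE

# The first result reports detection of, e.g., the rectal portion.
r1 = RecognitionResult("first", "detection")
assert r1.satisfies_condition()
```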
  • The recognition result acquisition unit 73 acquires the recognition processing results of the recognition target images obtained by the image recognition processing of each image recognition unit. The recognition result acquisition unit 73 thereby acquires recognition processing results based on all of the acquired recognition target images.
  • The recognition processing result preferably includes information on whether the corresponding recognition target image satisfies a preset condition. The preset condition is that the recognition processing result obtained by performing image recognition processing on the recognition target image is "detection" or "pass determination"; when the recognition processing result includes the information "detection" or "pass determination", it includes the information that the condition is satisfied.
  • In the present embodiment, the recognition result acquisition unit 73 acquires the first recognition processing result, which is the detection result for the specific site, and the third recognition processing result and the fifth recognition processing result, which are the determination results as to whether a region in a specific state is included. The third recognition processing result is a determination result of whether a lesion is included, and the fifth recognition processing result is a determination result of whether a severe region of ulcerative colitis is included.
  • In the figure, a first recognition target image IM1 in which a specific site is detected is shown with a dot pattern.
  • The "first recognition" row indicates what the first image recognition processing detects, and the "result" row immediately below it indicates the information on detection included in the first recognition processing result. Similarly, the "fifth recognition" row indicates what the fifth image recognition processing determines, with the "result" row below it showing the information on determination included in the fifth recognition processing result, and the "third recognition" row indicates what the third image recognition processing determines, with the "result" row below it showing the information on determination included in the third recognition processing result.
  • The notation "specific site detection" indicates that the information that the specific site was detected was obtained from the first recognition target image IM1x; the subsequent notation "specific site non-detection" at the first recognition target image IM1y indicates that the information that the specific site was not detected was obtained from the first recognition target image IM1y.
  • The suffix "x" attached to a recognition target image denotes the first image for which "detection" or "pass determination" is obtained in the image recognition processing of that type of recognition target image, and the suffix "y" denotes the first image for which "non-detection" or "fail determination" is obtained after "detection" or "pass determination". The "result" entries for images marked "x" or "y" are underlined. In the following figures, similar reference signs indicate similar contents.
  • The first recognition processing result includes the information that the specific site is detected in the first recognition target image IM1x. The specific site is first detected in the first recognition target image IM1x, continues to be detected in the first recognition target images IM1 obtained thereafter, and detection continues until the specific site is no longer detected in the first recognition target image IM1y. During this period, no "pass determination" information was included in the other recognition processing results.
  • The recognition result acquisition unit 73 acquires each recognition processing result as soon as the image recognition processing unit 72 generates it from the corresponding recognition target image. On acquiring a new result, the recognition result acquisition unit 73 discards the immediately preceding result of the same type, so that it always holds the latest recognition processing results based on all types of recognition target images, as sketched below.
  • Here, "all types of recognition target images" means all types within one emission cycle, as determined by the emission pattern of the illumination light or the like. The emission pattern of the illumination light can be switched arbitrarily or according to the recognition processing results or the like; in that case, it is preferable to use all types within the emission cycle after the switch.
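A sketch of this latest-result-per-type behavior, assuming a simple cache keyed by image type (names illustrative):

```python
class RecognitionResultAcquirer:
    """Keeps only the newest recognition processing result per image type,
    so results for all types in the emission cycle stay up to date."""

    def __init__(self, expected_types):
        self.expected = set(expected_types)
        self.latest = {}  # image type -> most recent outcome

    def update(self, image_type: str, outcome: str) -> None:
        # A new result of a type replaces the one acquired just before.
        self.latest[image_type] = outcome

    def complete(self) -> bool:
        """True once results for every expected type are held."""
        return self.expected.issubset(self.latest)

acq = RecognitionResultAcquirer({"first", "third", "fifth"})
acq.update("first", "detection")
acq.update("first", "non-detection")  # replaces the earlier first-type result
print(acq.latest, acq.complete())     # {'first': 'non-detection'} False
```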
  • The recognition processing results of all the acquired recognition target images are sent to the recording control unit 74.
  • The recording control unit 74 controls the recording operation for recording a moving image of at least one type of recognition target image among the plurality of types, based on the recognition processing results of all types of acquired recognition target images. That is, the recording operation is controlled after examining, for all types, at least one recognition processing result per type acquired by the recognition result acquisition unit 73.
  • The moving image to be recorded can be of one or more of the multiple types of recognition target images to be acquired; it is preferably the first recognition target image, because the first recognition target image is a normal observation image, that is, a moving image captured in natural colors, and a recording of it has a wide range of uses and is generally useful.
  • The recording operation is an instruction regarding the execution of recording and includes, for example, starting recording when recording is not in progress; when recording is already in progress, it includes continuing and stopping recording. It also includes operations normally performed in connection with recording, such as recording or stopping for a certain period, for example pausing the recording.
  • The recording control unit 74 controls operations such as the start and stop of recording from the recognition processing results of all types, based on the plurality of types of recognition target images acquired by the recognition result acquisition unit 73.
  • When at least one of the recognition processing results of all types of recognition target images acquired by the recognition result acquisition unit 73 includes information that the condition is satisfied, the recording control unit 74 preferably starts or continues recording. When all of the recognition processing results of all types include information that the condition is not satisfied and recording is in progress, it preferably stops recording. This rule is sketched below.
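A minimal sketch of that start/continue/stop rule, assuming the latest results are held as a mapping from image type to outcome string (illustrative names):

```python
def control_recording(latest_results: dict, recording: bool):
    """Start or continue recording if any latest result is positive;
    stop only when every result is negative while recording."""
    any_positive = any(
        outcome in ("detection", "pass determination")
        for outcome in latest_results.values()
    )
    if any_positive:
        return True, ("continue" if recording else "start")
    if recording:
        return False, "stop"
    return False, "idle"

# At time t1, a first-type "detection" alone is enough to start recording.
state, action = control_recording(
    {"first": "detection",
     "third": "fail determination",
     "fifth": "fail determination"},
    recording=False,
)
print(state, action)  # True start
```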
  • In the present embodiment, the recording control unit 74 controls the recording operation for recording a moving image of the first recognition target image IM1, based on the recognition processing results of three types of recognition target images: the first recognition target image IM1, the fifth recognition target image IM2b, and the third recognition target image IM2a.
  • When a first recognition processing result for the first recognition target image IM1 is newly acquired, it replaces the first recognition processing result acquired immediately before; the results for the fifth recognition target image IM2b and the third recognition target image IM2a likewise replace those acquired immediately before, so the recording control unit 74 controls the recording operation based on the latest results for the plurality of types of recognition target images.
  • At time t1, when a recognition processing result including "detection" is obtained by the first image recognition processing of the first recognition target image IM1x, the latest results for the fifth recognition target image IM2b and the third recognition target image IM2a are both "fail determination"; since recording has not yet started, the recording control unit 74 performs control to start recording. If recording had already started, control to continue recording would be performed, in effect by performing no recording operation. Recording is continued as long as the information "detection" or "pass determination" is included in the recognition processing result of the first recognition target image or the third recognition target image.
  • At time t2, when "non-detection" is obtained for the first recognition target image IM1y, the latest result for the third recognition target image IM2a is also "fail determination", so all recognition processing results are negative and control to stop recording is performed.
  • A negative recognition processing result is "non-detection" or "fail determination", and a positive recognition processing result is "detection" or "pass determination". In the present embodiment, a recording file containing the first recognition target images from time t1 to time t2 is obtained in this way.
  • The type of the second illumination light L2 may be switched from the second illumination light L2b to the second illumination light L2a, and the recognition target image to be acquired may accordingly be switched from the fifth recognition target image IM2b to the third recognition target image IM2a.
  • The display image generation unit 75 generates the display image to be shown on the display 18, using at least one type of recognition target image among the plurality of types. For example, when the first recognition target image, which is similar to a normal observation image, is to be displayed continuously on the display 18, the display image generation unit 75 applies display-image processing to the first recognition target image to generate the display image. The generated display image is sent to the display control unit 57.
  • The display control unit 57 displays the normal image on the display 18 in the normal observation mode and the special observation image on the display 18 in the special observation mode. In the diagnosis support mode, it performs control so that at least one type of recognition target image is continuously displayed on the display 18. For frames in which the type of recognition target image to be displayed is not acquired, the display control unit 57 performs control such as continuing to display the display image acquired immediately before. In the present embodiment, since the display image based on the first recognition target image, which is similar to the normal image, is sent to the display control unit 57, the normal image is continuously displayed on the display 18; the acquired fifth and/or third recognition target images are not displayed, but may be displayed on the display 18 as display images according to an instruction.
  • As described above, with the processor device 16 functioning as an image processing device, the endoscope system 10 including the image processing device, and the like, recording is controlled automatically based on the recognition processing results of all types of acquired recognition target images, which eliminates the trouble of controlling recording manually. Since the moving image recording operation can be controlled in finer detail than when it is controlled using the recognition processing result of a single type of recognition target image, recording that flexibly matches the purpose can be performed automatically. The processor device 16, the endoscope system 10, and the like can therefore efficiently record endoscopic images and the like when image recognition processes are performed in parallel based on a plurality of types of endoscopic images.
  • In the above example, the recording operation was controlled by whether the first recognition processing result detected the specific site, but the recording operation may also be controlled based on recognition processing results other than the first recognition processing result.
  • The first illumination light L1 is emitted in the first A emission pattern of five frames and the second illumination light L2 is emitted in the second B emission pattern of one frame to acquire endoscopic images. As the second illumination light, the second illumination light L2b and the second illumination light L2c are switched in order. The fifth recognition target image IM2b is acquired based on the endoscopic image obtained with the second illumination light L2b, and the second recognition target image IM2c is acquired based on the endoscopic image obtained with the second illumination light L2c.
  • In the figure, the recognition target images obtained with the second illumination light L2 are shown shaded, and the fifth recognition target image IM2b obtained with the second illumination light L2b and the second recognition target image IM2c obtained with the second illumination light L2c are distinguished by different patterns. The first recognition target image IM1 is acquired based on the endoscopic image obtained with the first illumination light L1.
  • When the fifth recognition processing result includes information that the condition is satisfied, the recognition target image shown with a dot pattern is the one determined to include a lesion region. The emission pattern of the second illumination light L2 is a pattern in which the second illumination light L2b for the fifth recognition target image and the second illumination light L2c for the second recognition target image are switched and used in order.
  • In the above example, the recording operation was controlled by a positive detection or determination in a single type of recognition processing result, the first or the fifth; however, the recording operation is also controlled appropriately when positive detections or determinations occur in two or more types of recognition processing results. As shown in FIG. 26, with an illumination light emission pattern similar to that in FIG. 25, it is determined at time t1 that the recognition target image IM2bx includes a lesion region, and recording is started. A treatment tool is then detected in the first recognition target image IM1x, but since recording has already started, no recording operation is performed and recording simply continues.
  • Next, the treatment tool is no longer detected in the first recognition target image IM1y, but since the positive lesion determination in the fifth recognition processing result continues, no stop operation is performed and recording continues. After that, the lesion determination becomes negative at the fifth recognition target image IM2by, and since none of the first, second, and fifth recognition processing results at that time contains a positive detection or determination, the operation to stop recording is performed. Thus, even when a plurality of recognition processing results satisfy the conditions, the recording operation can be controlled appropriately.
  • The recording operation is also controlled properly in the following example. The first illumination light L1 is emitted in the first A emission pattern of five frames, the second illumination light L2 is emitted in the second B emission pattern of one frame, and endoscopic images are acquired. As the second illumination light, the second illumination light L2b and the second illumination light L2c are switched in order. The fifth recognition target image IM2b is acquired based on the endoscopic image obtained with the second illumination light L2b, and the second recognition target image IM2c is acquired based on the endoscopic image obtained with the second illumination light L2c.
  • The second illumination light is then switched to the second illumination light L2d; after the switch, emission of the second illumination light L2d and acquisition of the recognition target image corresponding to the second illumination light L2d are continued until the corresponding recognition processing result includes a "fail determination" following a "pass determination".
  • In the figure, the recognition target images obtained with the second illumination light L2 are shown shaded, and the fifth recognition target image IM2b obtained with the second illumination light L2b, the second recognition target image IM2c obtained with the second illumination light L2c, and the fourth recognition target image IM2d obtained with the second illumination light L2d are distinguished by different patterns. The first recognition target image IM1 is acquired based on the endoscopic image obtained with the first illumination light L1.
  • The treatment tool is then no longer detected in the first recognition target image IM1y. At this point, two types of recognition target images, the first recognition target image and the fourth recognition target image, are being acquired, and since the positive determination from the fourth recognition target image continues, recording continues. When the fourth recognition target image IM2d yields the information that no bleeding point is determined, no positive detection or determination remains, so the operation to stop recording is performed. In this way, the recording operation can be controlled appropriately.
  • FIG. 27 shows a series of operations in an example of an endoscopic procedure: recording is started when a lesion region is determined based on the fifth recognition processing result, the second recognition processing is then performed while treatment is carried out using the treatment tool, and bleeding after the treatment is followed based on the fourth recognition processing result.
  • When recording a moving image, the recording control unit 74 preferably attaches to the moving image information on the conditions corresponding to the recording operation. This information is the content of the recognition processing result that triggered the operation, where recording operations include starting, continuing, and stopping recording. Since the recognition processing results are recorded, it is also preferable to attach information on positive recognition processing results even when they did not trigger an operation.
  • When the information is attached to a moving image, it may be recorded as a chapter, that is, a break in the moving image recording. The video is preferably saved as an individual recording file.
  • A plurality of recording means may be provided; with a plurality of recording means, a plurality of recordings can be performed at the same time. For example, the first recognition target image and the like, which are images by the first illumination light L1, and the second recognition target image and the like, which are images by the second illumination light L2, may be recorded independently. The storage for recording files may be built into the image processing device, may be an external storage device, or may be located on the network to which the image processing device is connected.
  • Each recording file is accompanied by information on the conditions corresponding to the recording operation, namely information on the recognition processing result that triggered the start, continuation, or stop of recording, and by time information on when recording was started, continued, or stopped. For example, File I, a recording file, is tagged with "site", relating to detection in the first recognition processing result, and carries the recording start time t1 and stop time t2 set according to the detection and non-detection of the site in the first recognition processing result. File II is tagged with "lesion / severe / treatment tool / bleeding point", relating to the detections and pass determinations of the first recognition processing result and the third to fifth recognition processing results; the recording start time t3 is attached according to the lesion or severity determination in the fifth or third recognition processing result, and the recording stop time t4 is attached according to the determination of whether a bleeding point is present in the fourth recognition processing result. A sketch of such file metadata follows.
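A minimal sketch of recording-file metadata following the File I / File II examples; the class and field names are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class RecordingFileInfo:
    """Metadata attached to one recording file."""
    tags: list = field(default_factory=list)  # triggering conditions
    start_time: str = ""                      # e.g. "t3"
    stop_time: str = ""                       # e.g. "t4"
    trigger: str = ""                         # result that started recording

file_ii = RecordingFileInfo(
    tags=["lesion", "severe", "treatment tool", "bleeding point"],
    start_time="t3",
    stop_time="t4",
    trigger="fifth recognition processing result: pass determination (lesion)",
)
print(file_ii.tags)
```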
  • The recording operation may also be controlled based on other information regarding the endoscopic examination. For example, when a means for displaying the shape of the endoscope is used, recording can be started when the tip portion 12d of the endoscope, as seen from the endoscope shape, is located at a preset position. The preset position is, for example, a position where a lesion existed in the past, based on patient data; this ensures that the course of the lesion is recorded.
  • The recording operation may also be controlled so as to record a specific recognition target image. For example, to record all oxygen saturation images, the system may be set to start recording when the fourth recognition target image is acquired and to stop recording when acquisition switches to other recognition target images.
  • The display control unit 57 may superimpose recognition processing result information on the recognition target image to form the display image and control its display. For example, an indication of the severe region, which is the third recognition processing result, may be superimposed on the first recognition target image as text, a figure, or other information and displayed on the display 18.
  • Information on the recording operation may likewise be superimposed on the recognition target image. For example, an indicator of recording execution may be provided on part of the display 18, blinking in red to draw attention while recording is executed and turning off while it is not.
  • Next, a plurality of types of recognition target images, namely the first recognition target image, the fifth recognition target image, and the third recognition target image, are acquired using illumination light of the first A emission pattern and the second B emission pattern (step ST110). The display control unit continuously displays the first recognition target image on the display 18.
  • Image recognition processing is performed on the plurality of types of recognition target images in parallel for each type (step ST120): the specific site is detected in the first recognition target image, a lesion is determined in the fifth recognition target image, and a severe condition is determined in the third recognition target image, each independently. The recognition result acquisition unit 73 then acquires the plurality of recognition processing results (step ST130).
  • The recording control unit 74 starts recording when, among the plurality of types of recognition processing results, there is at least one positive result of "detection" or "pass determination" (Y in step ST140) (step ST150). When there is no positive result of "detection" or "pass determination" (N in step ST140), the process returns to acquisition of the recognition target images.
  • After recording starts, the plurality of types of recognition target images continue to be acquired (step ST160), and image recognition processing is performed on them in parallel for each type (step ST170). The recognition result acquisition unit 73 acquires the plurality of recognition processing results (step ST180).
  • The recording control unit 74 stops recording when none of the plurality of types of recognition processing results is a positive "detection" or "pass determination" (Y in step ST190) (step ST200). When there is at least one positive "detection" or "pass determination" (N in step ST190), the process returns to acquisition of the recognition target images and recording continues. The whole loop is sketched below.
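Putting the steps together, a compact sketch of the ST110-ST200 loop, assuming injected callables for acquisition and recognition (all names illustrative):

```python
def recording_loop(acquire_images, recognize_all, cycles: int = 100) -> None:
    """Sketch of steps ST110-ST200: acquire the image types, recognize them
    in parallel per type, start recording on any positive result, and stop
    when no positive result remains."""
    recording = False
    for _ in range(cycles):
        images = acquire_images()                    # ST110 / ST160
        results = recognize_all(images)              # ST120 / ST170
        positive = any(                              # ST130-ST140 / ST180-ST190
            r in ("detection", "pass determination")
            for r in results.values()
        )
        if not recording and positive:
            recording = True                         # ST150: start recording
        elif recording and not positive:
            recording = False                        # ST200: stop recording
```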
  • In the above embodiment, the processor device 16 functions as the image processing device, but an image processing device including the image processing unit 56 may be provided separately from the processor device 16. For example, the image processing unit 56 can be provided in a diagnosis support device 911 that acquires RAW images taken with the endoscope 12, either directly from the endoscope system 10 or indirectly from a PACS (Picture Archiving and Communication Systems) 910. Further, as shown in FIG. 31, the image processing unit 56 can be provided in a medical service support device 930 connected, via a network 926, to various examination devices including the endoscope system 10, such as a first examination device 921, a second examination device 922, ..., and a Kth examination device 923.
  • In the above embodiment, the endoscope 12 is a so-called flexible endoscope having a flexible insertion portion 12a, but the present invention is also suitable when a capsule-type endoscope that the observation target swallows, or a rigid endoscope (laparoscope) used for surgery or the like, is used.
  • The above embodiment and modifications include an operation method of an image processing device that includes an image processor and performs image recognition processing based on images obtained by photographing an observation target with an endoscope, the method comprising: an image acquisition step of acquiring a plurality of types of recognition target images based on the images; a display control step of performing control to continuously display at least one type of recognition target image among the plurality of types on a display; an image recognition processing step of performing image recognition processing on the plurality of types of recognition target images in parallel for each type; a recognition processing result acquisition step of acquiring, for each type of recognition target image, the recognition processing result obtained by the image recognition processing; and a recording control step of controlling the recording operation, based on the recognition processing results of all types of recognition target images, when recording a moving image of at least one type of recognition target image among the plurality of types.
  • The above embodiment and modifications also include a program for an image processing device, installed in an image processing device that includes an image processor and performs image recognition processing based on images obtained by photographing an observation target with an endoscope, the program causing a computer to realize: an image acquisition function of acquiring a plurality of types of recognition target images based on the images; a display control function of performing control to continuously display at least one type of recognition target image among the plurality of types on the display; an image recognition processing function of performing image recognition processing on the plurality of types of recognition target images in parallel for each type; a recognition processing result acquisition function of acquiring, for each type of recognition target image, the recognition processing result obtained by the image recognition processing; and a recording control function of controlling the recording operation based on the recognition processing results of all types when recording a moving image of at least one type of recognition target image.
  • The hardware structure of the processing units that execute various kinds of processing, such as the central control unit 51, the image acquisition unit 52, the DSP 53, the noise reduction unit 54, the conversion unit 55, the image processing unit 56, and the display control unit 57 included in the processor device 16 serving as the image processing device, is the following various processors. They include a CPU (Central Processing Unit), which is a general-purpose processor that executes software (programs) and functions as various processing units; a programmable logic device (PLD) such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture; and a dedicated electric circuit, which is a processor having a circuit configuration designed specifically for performing various kinds of processing.
  • One processing unit may be composed of one of these various processors, or of a combination of two or more processors of the same or different types (for example, a plurality of FPGAs or a combination of a CPU and an FPGA). A plurality of processing units may also be configured by a single processor. As a first example of configuring a plurality of processing units with one processor, as represented by computers such as clients and servers, one processor is configured by a combination of one or more CPUs and software, and this processor functions as the plurality of processing units. As a second example, as represented by a System On Chip (SoC), there is a form of using a processor that realizes the functions of an entire system including the plurality of processing units with a single IC chip.
  • In this way, the various processing units are configured using one or more of the above-mentioned various processors as a hardware structure. More specifically, the hardware structure of these various processors is an electric circuit (circuitry) in which circuit elements such as semiconductor elements are combined.
  • The present invention can also be used in systems or devices that acquire medical images (including moving images) other than endoscopic images. For example, the present invention can be applied to an ultrasonic examination device, an X-ray imaging device (including a CT (Computed Tomography) examination device, a mammography device, and the like), an MRI (magnetic resonance imaging) device, and the like.

Abstract

Provided are: an image processing device that efficiently records an endoscopic image when image recognition processes are performed in parallel on the basis of a plurality of types of endoscopic images; an endoscope system; an operation method for an image processing device; and a program for an image processing device. An image processing device (16) is provided with an image processor. The image processor acquires a plurality of types of recognition target images based on an endoscope image, performs image recognition processing on the plurality of types of recognition target images in parallel for each type of recognition target image, and controls the operation of recording a moving image of at least one type of recognition target image among the plurality of types on the basis of the recognition processing results of all the types of recognition target images acquired.

Description

 In controlling operations such as starting or stopping the recording of endoscopic images, a manual procedure in which recording is started before the examination and stopped after it carries the risk that the operator forgets to perform the operation. Moreover, because a moving image recorded in this way captures the entire endoscopy, it contains many portions that are unnecessary for a given purpose, which can make it difficult to play back a desired scene; it also consumes storage capacity and reduces the number of examinations that can be kept in storage. Manual control therefore risks impairing usability with respect to recording.
 Furthermore, in conventional methods, such as controlling the recording operation according to the type of image processing, the amount of red contained in the image, or the detection of lesion candidate regions in the image, the trigger for the recording operation is fixed by the particular purpose of the recording, and such methods may therefore be unsuitable for recordings with other purposes.
 The present invention is an image processing device that performs image recognition processing based on images obtained by photographing an observation target with an endoscope. The image processing device includes an image processor. The image processor acquires a plurality of types of recognition target images based on the images, performs control to continuously display at least one type of recognition target image among the plurality of types on a display, performs image recognition processing on the plurality of types of recognition target images in parallel for each type, acquires the recognition processing results obtained by the image recognition processing, and controls the recording operation when recording a moving image of at least one type of recognition target image among the plurality of types, based on the recognition processing results of all acquired types of recognition target images.
 The recognition processing result preferably includes information on whether the recognition target image satisfies a preset condition, and the image processor preferably starts or continues recording when one or more of the recognition processing results of all types include information that the condition is satisfied.
 The image processor preferably stops recording when the recognition processing results of all types all include information that the condition is not satisfied and recording is continuing.
 When recording a moving image, the image processor preferably attaches to the moving image information on the conditions corresponding to the start, continuation, or stop of recording.
 The image processor preferably acquires in advance correspondence information in which an observation target satisfying the condition is associated with recognition target images obtained by photographing such an observation target, and performs image recognition processing on newly acquired recognition target images based on the correspondence information.
 The image processor preferably acquires the correspondence information for each type of recognition target image and performs image recognition processing on a newly acquired recognition target image based on the correspondence information of the corresponding type.
 The condition is preferably that the image processor has detected a specific site or a non-living object, or has determined that a region in a specific state is included.
 The plurality of types of recognition target images include at least a first recognition target image and a second recognition target image of mutually different types, and the image processor preferably performs first image recognition processing on the first recognition target image and, on the second recognition target image, second image recognition processing different from the first image recognition processing.
 The first image recognition processing is preferably performed for the detection of a specific site or a non-living object in the first recognition target image.
 The second image recognition processing is preferably performed for the determination of whether the second recognition target image includes a region in a specific state.
 A recognition target image is generated by performing enhancement processing on the image, and the image processor preferably distinguishes the types of recognition target images by the presence or type of enhancement processing and acquires each distinguished recognition target image as one type of recognition target image.
 The types of enhancement processing are color expansion processing and/or structure enhancement processing, and the image processor preferably acquires the recognition target images generated by mutually different types of enhancement processing, each as one type of recognition target image.
 また、本発明の内視鏡システムは、画像処理装置と、観察対象に照射する照明光を発する光源部とを備える。 Further, the endoscope system of the present invention includes an image processing device and a light source unit that emits illumination light to irradiate the observation target.
The image processor preferably acquires, each as one type of recognition target image, the images obtained by photographing the observation target illuminated by each of a plurality of illumination lights with mutually different spectral spectra emitted by the light source unit.
The image processor preferably acquires, as one type of recognition target image, an image obtained by photographing the observation target illuminated by white illumination light emitted by the light source unit.
The image processor preferably acquires, as one type of recognition target image, an image obtained by photographing the observation target illuminated by illumination light that includes narrow-band light of a preset wavelength band emitted by the light source unit.
The light source unit preferably emits each of the plurality of illumination lights with mutually different spectral spectra repeatedly in a preset order.
The light source unit preferably emits first illumination light and second illumination light with mutually different spectral spectra, and the system preferably includes a light source processor that emits the first illumination light in a first emission pattern during a first illumination period, emits the second illumination light in a second emission pattern during a second illumination period, and switches between the first illumination light and the second illumination light, and an image sensor that outputs a first image signal obtained by photographing the observation target illuminated by the first illumination light and a second image signal obtained by photographing the observation target illuminated by the second illumination light. The image processor preferably performs first image recognition processing on a first recognition target image based on the first image signal, performs second image recognition processing on a second recognition target image based on the second image signal, and controls the recording operation based on the first image recognition result of the first image recognition processing and the second image recognition result of the second image recognition processing.
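To make this per-frame flow concrete, the following is a minimal Python sketch; all names (the recognizer and recorder objects and their methods) are hypothetical, and the patent does not prescribe any particular implementation:

```python
# Minimal sketch (hypothetical names): frames taken under the first
# illumination light feed the first image recognition process, frames taken
# under the second illumination light feed the second, and recording runs
# while the combined recognition results satisfy a condition.

def on_frame(frame, illumination, recognizer1, recognizer2, recorder, state):
    """Route one captured frame to the matching recognizer, then update recording."""
    if illumination == "L1":                  # first illumination light
        state["result1"] = recognizer1(frame)
    else:                                     # second illumination light
        state["result2"] = recognizer2(frame)

    # Start or continue recording while either recognition result reports that
    # its condition is met (e.g. a specific site detected, or a specific-state
    # region judged to be present); otherwise stop.
    if state.get("result1") or state.get("result2"):
        recorder.start_or_continue()
    else:
        recorder.stop()
```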
The present invention is also a method of operating an image processing device that performs image recognition processing based on images obtained by photographing an observation target with an endoscope, the method comprising: an image acquisition step of acquiring a plurality of types of recognition target images based on the images; a display control step of continuously displaying at least one of the plurality of types of recognition target images on a display; an image recognition processing step of performing image recognition processing on the plurality of types of recognition target images in parallel for each type of recognition target image; a recognition processing result acquisition step of acquiring the recognition processing results of the image recognition processing for each type of recognition target image; and a recording control step of controlling, when a moving image of at least one of the plurality of types of recognition target images is recorded, the recording operation based on the recognition processing results of all of the acquired types of recognition target images.
The present invention is also a program for an image processing device, installed in an image processing device that performs image recognition processing based on images obtained by photographing an observation target with an endoscope, the program causing a computer to realize: an image acquisition function of acquiring a plurality of types of recognition target images based on the images; a display control function of continuously displaying at least one of the plurality of types of recognition target images on a display; an image recognition processing function of performing image recognition processing on the plurality of types of recognition target images in parallel for each type of recognition target image; a recognition processing result acquisition function of acquiring the recognition processing results of the image recognition processing for each type of recognition target image; and a recording control function of controlling, when a moving image of at least one of the plurality of types of recognition target images is recorded, the recording operation based on the recognition processing results of all of the acquired types of recognition target images.
According to the present invention, when image recognition processing is performed in parallel based on a plurality of types of endoscopic images, the endoscopic images can be recorded efficiently.
FIG. 1 is an explanatory diagram of the configuration of the endoscope system.
FIG. 2 is a block diagram showing the functions of the endoscope system.
FIG. 3 is a graph showing the spectral spectra of violet light V, blue light B, green light G, and red light R.
FIG. 4 is an explanatory diagram of the first-A and second-A emission patterns.
FIG. 5 is an explanatory diagram of the first-B emission pattern.
FIG. 6 is an explanatory diagram of the second-B emission pattern.
FIG. 7 is an explanatory diagram of the second-C emission pattern.
FIG. 8 is an explanatory diagram of the second-D emission pattern.
FIG. 9 is a block diagram showing the functions of the image processing unit.
FIG. 10 is a block diagram showing the functions of the diagnosis support image processing unit.
FIG. 11 is a block diagram showing the functions of the recognition target image acquisition unit.
FIG. 12 is a graph showing the spectral spectrum SP1 for the second illumination light.
FIG. 13 is a graph showing the spectral spectrum SP2 for the second illumination light.
FIG. 14 is a graph showing the spectral spectrum SP3 for the second illumination light.
FIG. 15 is a block diagram showing the functions of the fourth recognition target image generation unit.
FIG. 16 is a graph showing the oxygen saturation calculation table.
FIG. 17 is a graph showing the spectrum SP4 for the second illumination light.
FIG. 18 is a block diagram showing the functions of the fifth recognition target image generation unit.
FIG. 19 is an explanatory diagram of the color difference expansion processing.
FIG. 20 is a block diagram showing the functions of the image recognition processing unit.
FIG. 21 is an explanatory diagram of the functions of the correspondence information acquisition unit.
FIG. 22 is an explanatory diagram of the acquisition of recognition target images.
FIG. 23 is an explanatory diagram of recognition processing results.
FIG. 24 is an explanatory diagram of recognition processing results and recording control.
FIG. 25 is an explanatory diagram of recognition processing results by acceptance determination and recording control.
FIG. 26 is an explanatory diagram of recognition processing results by detection and acceptance determination and recording control.
FIG. 27 is an explanatory diagram of recognition processing results by detection and a plurality of acceptance determinations and recording control.
FIG. 28 is an explanatory diagram of a recording file.
FIG. 29 is a flowchart showing the flow of the diagnosis support mode.
FIG. 30 is an explanatory diagram showing a diagnosis support device.
FIG. 31 is an explanatory diagram showing a medical service support device.
As shown in FIG. 1, the endoscope system 10 includes an endoscope 12, a light source device 14, a processor device 16, a display 18, and a keyboard 19. The endoscope 12 photographs the observation target. The light source device 14 emits the illumination light with which the observation target is irradiated. The processor device 16 performs system control of the endoscope system 10. The display 18 is a display unit that displays observation images and the like based on the endoscopic images. The keyboard 19 is an input device for entering settings into the processor device 16 and the like.
The endoscope system 10 has three observation modes: a normal observation mode, a special observation mode, and a diagnosis support mode. In the normal observation mode, the observation target is irradiated with normal light such as white light and photographed, and a normal observation image with natural colors is displayed on the display 18 as the observation image. In the special observation mode, the observation target is illuminated and photographed with special light whose wavelength band or spectral spectrum differs from that of the normal light, and a special image in which specific structures and the like are emphasized is displayed on the display 18 as the observation image. In the diagnosis support mode, image recognition processing is performed on a plurality of types of recognition target images based on endoscopic images, for each type of recognition target image. A recognition target image is an image based on an endoscopic image and is the image subjected to the image recognition processing. The display 18 continuously displays at least one of the plurality of types of recognition target images as the observation image. The plurality of recognition processing results obtained by the image recognition processing performed for each type of recognition target image are used to control the recording operation. The types of recognition target images are distinguished by the spectral spectrum of the illumination light used when photographing the observation target and/or by the method of image processing for generating the recognition target image (hereinafter, image processing for recognition target image generation). The types of recognition target images are described in detail later.
The endoscope 12 has an insertion part 12a to be inserted into the body of a subject containing the observation target, an operating part 12b provided at the proximal end of the insertion part 12a, and a bending part 12c and a tip part 12d provided on the distal side of the insertion part 12a. Operating the angle knob 12e of the operating part 12b bends the bending part 12c, and as a result the tip part 12d is directed in the desired direction. In addition to the angle knob 12e, the operating part 12b is provided with a treatment tool insertion port (not shown), scope button No. 1 12f, scope button No. 2 12g, and a zoom operating part 12h. The treatment tool insertion port is an entrance for inserting a treatment tool such as biopsy forceps, a snare, or an electric scalpel; a treatment tool inserted into the port protrudes from the tip part 12d. Scope button No. 1 12f is a freeze button used for the operation of capturing a still image. Scope button No. 2 12g is used for the operation of switching the observation mode. Various operations can be assigned to the scope buttons. By operating the zoom operating part 12h, the observation target can be photographed enlarged or reduced.
As shown in FIG. 2, the light source device 14 includes a light source unit 20 having light sources that emit illumination light, and a light source processor 22 that controls the operation of the light source unit 20. The light source unit 20 emits the illumination light that illuminates the observation target. The illumination light includes light, such as excitation light, emitted in order to produce the illumination light. The light source unit 20 includes, for example, a laser diode, an LED (Light Emitting Diode), a xenon lamp, or a halogen lamp as a light source, and emits at least white illumination light (hereinafter, white light) or the excitation light used to produce white light. White here includes so-called pseudo-white, which is substantially equivalent to white when photographing the observation target with the endoscope 12.
The light source unit 20 includes, as needed, a phosphor that emits light when irradiated with the excitation light, and optical filters that adjust the wavelength band, spectral spectrum, light amount, or the like of the illumination light or the excitation light. In addition, the light source unit 20 can emit illumination light consisting of at least narrow-band light (hereinafter, narrow-band light). "Narrow-band" means a substantially single wavelength band in relation to the characteristics of the observation target and/or the spectral characteristics of the color filters of the image sensor 45. For example, light whose wavelength band is about ±20 nm or less (preferably about ±10 nm or less) is narrow-band.
The light source unit 20 can also emit a plurality of illumination lights with mutually different spectral spectra. The plurality of illumination lights may include narrow-band light. The light source unit 20 can also emit light with the specific wavelength band or spectral spectrum needed to capture images used for calculating biological information, such as the oxygen saturation of hemoglobin contained in the observation target.
In this embodiment, the light source unit 20 has LEDs of four colors: a V-LED 20a, a B-LED 20b, a G-LED 20c, and an R-LED 20d. As shown in FIG. 3, the V-LED 20a emits violet light V with a center wavelength of 405 nm and a wavelength band of 380 to 420 nm. The B-LED 20b emits blue light B with a center wavelength of 460 nm and a wavelength band of 420 to 500 nm. The G-LED 20c emits green light G with a wavelength band of 480 to 600 nm. The R-LED 20d emits red light R with a center wavelength of 620 to 630 nm and a wavelength band of 600 to 650 nm. The center wavelengths of the V-LED 20a and the B-LED 20b have a width of about ±20 nm, preferably about ±5 nm to ±10 nm. The violet light V is short-wavelength light used in the special observation mode or the diagnosis support mode to emphasize and display dense superficial blood vessels, intramucosal hemorrhage, extramucosal hemorrhage, and the like, and preferably includes 410 nm in its center or peak wavelength. The violet light V and/or the blue light B are preferably narrow-band light.
The light source processor 22 controls the timing of turning on, turning off, or blocking each light source constituting the light source unit 20, as well as the light intensity, emission amount, and the like. As a result, the light source unit 20 can emit a plurality of types of illumination light with different spectral spectra for preset periods and emission amounts. In this embodiment, the light source processor 22 controls the turning on and off of the V-LED 20a, B-LED 20b, G-LED 20c, and R-LED 20d, the light intensity or emission amount when lit, the insertion and removal of optical filters, and so on, by inputting an independent control signal to each. By controlling each of the LEDs 20a to 20d independently, the light source processor 22 can emit the violet light V, blue light B, green light G, or red light R while independently varying the light intensity or the light amount per unit time. The light source processor 22 can therefore cause a plurality of illumination lights with mutually different spectral spectra to be emitted, for example white illumination light, a plurality of types of illumination light with different spectral spectra, or illumination light consisting of at least narrow-band light.
In this embodiment, in the normal observation mode, the light source processor 22 controls the LEDs 20a to 20d so as to emit white light in which the light intensity ratio among the violet light V, blue light B, green light G, and red light R is Vc:Bc:Gc:Rc. Each of Vc, Bc, Gc, and Rc is greater than 0 (zero); none of them is 0.
In this embodiment, in the special observation mode, the light source processor 22 controls the LEDs 20a to 20d so as to emit special light in which the light intensity ratio among the violet light V as short-wavelength narrow-band light, the blue light B, the green light G, and the red light R is Vs:Bs:Gs:Rs. The light intensity ratio Vs:Bs:Gs:Rs differs from the light intensity ratio Vc:Bc:Gc:Rc used in the normal observation mode and is determined appropriately according to the purpose of observation. The light source unit 20 can therefore emit, under the control of the light source processor 22, a plurality of special lights with mutually different spectral spectra. For example, when emphasizing superficial blood vessels, Vs is preferably made larger than Bs, Gs, and Rs, and when emphasizing medium-depth and deep blood vessels, Gs is preferably made larger than Vs, Bs, and Rs.
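For illustration only, the mode-dependent intensity ratios could be held in a small table; the numeric values in the sketch below are hypothetical and merely satisfy the constraints stated above (every normal-mode component nonzero, Vs dominant when emphasizing superficial vessels):

```python
# Hypothetical light intensity ratios; the patent fixes only the constraints,
# not the values, and the LED driver objects are assumed.
INTENSITY_RATIOS = {
    "normal":  {"V": 1.0, "B": 1.0, "G": 1.0, "R": 1.0},  # Vc:Bc:Gc:Rc, all nonzero
    "special": {"V": 4.0, "B": 1.0, "G": 0.5, "R": 0.5},  # Vs largest: superficial vessels
}

def apply_mode(leds, mode):
    """Drive each LED at the intensity ratio of the selected observation mode."""
    for color, ratio in INTENSITY_RATIOS[mode].items():
        leds[color].set_intensity(ratio)  # leds: hypothetical LED driver objects
```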
In this specification, the light intensity ratio includes cases where the ratio of at least one semiconductor light source is 0 (zero), except for Vc:Bc:Gc:Rc. It therefore includes cases where one or more of the semiconductor light sources are not lit. For example, even when only one semiconductor light source is lit and the other three are not, as when the light intensity ratio among the violet light V, blue light B, green light G, and red light R is 1:0:0:0, the light is regarded as having a light intensity ratio.
In this embodiment, in the diagnosis support mode, the light source processor 22 automatically switches among a plurality of illumination lights with mutually different spectral spectra in a specific pattern in order to acquire a plurality of types of recognition target images, and repeatedly emits each of them in a preset order. Specifically, when the first illumination light and the second illumination light are switched automatically to acquire two types of recognition target images in the diagnosis support mode, the first illumination light is emitted in the first emission pattern during the first illumination period and the second illumination light is emitted in the second emission pattern during the second illumination period. The first illumination light and the second illumination light have mutually different spectral spectra. For example, the first illumination light is white light. The second illumination light, being used for recognition processing, is preferably illumination light that yields an image suited to the specific recognition processing when the observation target is illuminated with it. For example, when recognition processing is based on superficial blood vessels, the second illumination light is preferably the violet light V.
The first emission pattern is the emission order of the first illumination light, the second emission pattern is the emission order of the second illumination light, and the element constituting each pattern is the frame, the unit of photographing. A frame is a period that includes at least the period from a specific timing in the image sensor 45 until the completion of signal readout. One photograph is taken and one image acquired per frame. Either the first illumination light or the second illumination light is emitted at a time; they are never emitted simultaneously. One emission cycle consists of at least one first emission pattern and one second emission pattern each, and the first and second emission patterns are combined to constitute the emission cycle. Illumination is performed by repeating the emission cycle. Details such as the number of frames constituting each of the first and second emission patterns and the types of illumination light are set in advance. The type of illumination light is distinguished by the spectral spectrum of the illumination light; illumination lights with different spectral spectra are different types of illumination light.
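As a sketch, an emission pattern can be represented as a sequence of (illumination light, frame count) pairs and an emission cycle as the concatenation of a first and a second pattern. The representation below is illustrative, not taken from the patent, and borrows the FIG. 4 example given later (a two-frame first illumination period and a one-frame second illumination period):

```python
# Emission cycle Q1: first-A pattern (P1 = 2 frames of L1) followed by
# second-A pattern (P2 = 1 frame of L2a), repeated.
FIRST_PATTERN_1A = [("L1", 2)]
SECOND_PATTERN_2A = [("L2a", 1)]
CYCLE_Q1 = FIRST_PATTERN_1A + SECOND_PATTERN_2A

def frame_sequence(cycle):
    """Expand one emission cycle into its per-frame illumination sequence."""
    for light, count in cycle:
        for _ in range(count):
            yield light

# list(frame_sequence(CYCLE_Q1)) == ["L1", "L1", "L2a"]; repeating the cycle
# gives L1, L1, L2a, L1, L1, L2a, ...
```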
Specifically, the first emission pattern is preferably the first-A emission pattern or the first-B emission pattern. As shown in FIG. 4, in the first-A emission pattern, the number of frames FL of the first illumination light L1 in the first illumination period P1 is fixed within the emission cycle Q1 and is the same in every first illumination period P1. As shown in FIG. 5, in the first-B emission pattern, the number of frames FL in the first illumination period P1 differs among the first illumination periods P1 within the emission cycle Q2. In both the first-A and first-B emission patterns, the first illumination light L1 has the same spectral spectrum and is white light.
The second emission pattern is preferably the second-A, second-B, second-C, or second-D emission pattern. As shown in FIG. 4, in the second-A emission pattern, the number of frames FL in the second illumination period is fixed within the emission cycle Q1 and is the same, and the spectral spectrum of the second illumination light L2 is the second illumination light L2a in every second illumination period P2, that is, the same. The second illumination light L2 may include illumination lights with different spectral spectra; these are written as second illumination light L2a and second illumination light L2b to distinguish them, and "second illumination light L2" is used as the collective term. As shown in FIG. 5, the second illumination light L2 is also emitted in the second-A emission pattern in the emission cycle Q2.
As shown in FIG. 6, in the second-B emission pattern, within the emission cycle Q3, the number of frames FL of the second illumination period P2 is the same in every second illumination period P2, while the spectral spectrum of the second illumination light L2 differs, being the second illumination light L2a or L2b in the respective second illumination periods P2. As shown in FIG. 7, in the second-C emission pattern, within the emission cycle Q4, the number of frames FL of the second illumination period P2 differs among the second illumination periods P2, while the spectral spectrum of the second illumination light L2 is the same, being the second illumination light L2a in every second illumination period P2. As shown in FIG. 8, in the second-D emission pattern, within the emission cycle Q5, the number of frames FL of the second illumination period P2 differs among the second illumination periods P2, and the spectral spectrum of the second illumination light L2 also differs, being the second illumination light L2a or L2b in the respective second illumination periods P2.
As described above, in this embodiment, in the diagnosis support mode the light source processor 22 repeats an emission cycle constructed by combining one of these first emission patterns with one of these second emission patterns. As shown in FIG. 4, the emission cycle Q1 consists of the first-A and second-A emission patterns. As shown in FIG. 5, the emission cycle Q2 consists of the first-B and second-A emission patterns. As shown in FIG. 6, the emission cycle Q3 consists of the first-A and second-B emission patterns. As shown in FIG. 7, the emission cycle Q4 consists of the first-A and second-C emission patterns. As shown in FIG. 8, the emission cycle Q5 consists of the first-A and second-D emission patterns. In the first emission pattern, the spectral spectrum of the first illumination light L1 may differ among the first illumination periods P1.
In the diagnosis support mode, the light source processor 22 may also change the first or second emission pattern based on the recognition processing results described later. Changing the emission pattern includes changing the type of illumination light. Specifically, based on the recognition processing results, the second emission pattern may be switched, for example, from the second-A pattern to the second-B emission pattern, or from the second-A emission pattern using the second illumination light L2a to the second-A emission pattern using the second illumination light L2b.
Here, the first illumination period P1 is preferably longer than the second illumination period P2, and the first illumination period P1 is preferably two frames or more. For example, in FIG. 4, in the emission cycle Q1 in which the first emission pattern is the first-A pattern and the second emission pattern is the second-A emission pattern, the first illumination period P1 is two frames and the second illumination period P2 is one frame. Since the first illumination light L1 is used to generate the observation image displayed on the display 18, a bright observation image is preferably obtained by illuminating the observation target with the first illumination light L1.
As shown in FIG. 2, the light emitted by each of the LEDs 20a to 20d enters a light guide 41 via an optical path coupling section (not shown) composed of mirrors, lenses, and the like. The light guide 41 is built into the endoscope 12 and the universal cord (not shown). The universal cord is the cord connecting the endoscope 12 to the light source device 14 and the processor device 16. The light guide 41 propagates the light from the optical path coupling section to the tip part 12d of the endoscope 12.
The tip part 12d of the endoscope 12 is provided with an illumination optical system 30a and a photographing optical system 30b. The illumination optical system 30a has an illumination lens 42, and the illumination light propagated by the light guide 41 is emitted toward the observation target through the illumination lens 42.
The photographing optical system 30b has an objective lens 43, a zoom lens 44, and an image sensor 45. The image sensor 45 photographs the observation target via the objective lens 43 and the zoom lens 44, using the reflected light and the like of the illumination light returning from the observation target (including, besides reflected light, scattered light, fluorescence emitted by the observation target, and fluorescence originating from a drug administered to the observation target). The zoom lens 44 moves when the zoom operating part 12h is operated, enlarging or reducing the image of the observation target.
The image sensor 45 has, for each pixel, a color filter of one color out of a plurality of colors of color filters. In this embodiment, the image sensor 45 is a color sensor having primary-color color filters. Specifically, the image sensor 45 has R pixels with a red color filter (R filter), G pixels with a green color filter (G filter), and B pixels with a blue color filter (B filter).
A CCD (Charge Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor can be used as the image sensor 45. Although the image sensor 45 of this embodiment is a primary-color color sensor, a complementary-color color sensor can also be used. A complementary-color color sensor has, for example, cyan pixels with a cyan color filter, magenta pixels with a magenta color filter, yellow pixels with a yellow color filter, and green pixels with a green color filter. When a complementary-color color sensor is used, the images obtained from the pixels of these colors can be converted into images equivalent to those obtained with a primary-color color sensor by complementary-to-primary color conversion. The same applies when a primary-color or complementary-color sensor has one or more types of pixels with characteristics other than the above, such as W pixels (white pixels that receive light in almost the entire wavelength band). Although the image sensor 45 of this embodiment is a color sensor, a monochrome sensor without color filters may also be used.
The endoscope 12 includes a photographing processor 46 that controls the image sensor 45. The control by the photographing processor 46 differs for each observation mode. In the normal observation mode, the photographing processor 46 controls the image sensor 45 so as to photograph the observation target illuminated by the normal light. Thereby, a Bc image signal is output from the B pixels of the image sensor 45, a Gc image signal from the G pixels, and an Rc image signal from the R pixels.
In the special observation mode, the photographing processor 46 controls the image sensor 45 so as to photograph the observation target illuminated by the special light. Thereby, a Bs image signal is output from the B pixels of the image sensor 45, a Gs image signal from the G pixels, and an Rs image signal from the R pixels.
In the diagnosis support mode, the photographing processor 46 controls the image sensor 45 so as to photograph the observation target illuminated by the first illumination light L1 or the second illumination light L2. Thereby, during illumination with the first illumination light L1, a B1 image signal is output from the B pixels of the image sensor 45, a G1 image signal from the G pixels, and an R1 image signal from the R pixels. During illumination with the second illumination light L2, a B2 image signal is output from the B pixels, a G2 image signal from the G pixels, and an R2 image signal from the R pixels.
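A sketch of this routing (with a hypothetical sensor API) labels the B/G/R planes of each frame according to the illumination light active during that frame:

```python
def read_frame_signals(sensor, illumination):
    """Return the per-frame image signals named by the active illumination light."""
    b, g, r = sensor.read_bgr()  # hypothetical call: raw B/G/R planes of one frame
    if illumination == "L1":     # first illumination light
        return {"B1": b, "G1": g, "R1": r}
    else:                        # second illumination light
        return {"B2": b, "G2": g, "R2": r}
```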
In the processor device 16, programs concerning the processing performed by the central control unit 51, image acquisition unit 52, image processing unit 56, display control unit 57, and so on, described later, are stored in a memory (not shown). The functions of the central control unit 51, image acquisition unit 52, image processing unit 56, and display control unit 57 are realized when these programs are run by the central control unit 51, which is constituted by the image processor provided in the processor device 16 functioning as the image processing device.
The central control unit 51 performs overall control of the endoscope system 10, such as synchronizing the illumination light irradiation timing with the photographing timing. When various settings are entered using the keyboard 19 or the like, the central control unit 51 passes those settings to each part of the endoscope system 10, such as the light source processor 22, the photographing processor 46, or the image processing unit 56.
The image acquisition unit 52 acquires from the image sensor 45 the images of the observation target photographed with the pixels of each color, that is, RAW images. A RAW image is an image (endoscopic image) before demosaic processing. As long as it is an image before demosaic processing, an image obtained by applying arbitrary processing such as noise reduction to the image acquired from the image sensor 45 is also a RAW image.
The image acquisition unit 52 includes a DSP (Digital Signal Processor) 53, a noise reduction unit 54, and a conversion unit 55 in order to apply various kinds of processing to the acquired RAW images as needed.
The DSP 53 includes, for example, an offset processing unit, a defect correction processing unit, a demosaic processing unit, a linear matrix processing unit, and a YC conversion processing unit (none of which are shown). Using these, the DSP 53 applies various kinds of processing to the RAW images or to images generated from the RAW images.
The offset processing unit applies offset processing to the RAW image. Offset processing reduces the dark current component of the RAW image and sets an accurate zero level; it is sometimes called clamp processing. The defect correction processing unit applies defect correction processing to the RAW image. Defect correction processing corrects or generates the pixel values of the RAW pixels corresponding to defective pixels of the image sensor 45 when the image sensor 45 includes pixels (defective pixels) that are defective due to the manufacturing process or aging.
The demosaic processing unit applies demosaic processing to the RAW image of each color corresponding to each color filter. Demosaic processing generates, by interpolation, the pixel values missing from a RAW image due to the color filter arrangement. The linear matrix processing unit performs linear matrix processing on an endoscopic image generated by assigning one or more RAW images to the R, G, and B channels; linear matrix processing improves the color reproducibility of the endoscopic image. The YC conversion processing performed by the YC conversion processing unit converts an endoscopic image generated by assigning one or more RAW images to the R, G, and B channels into an endoscopic image having a luminance channel Y, a color difference channel Cb, and a color difference channel Cr.
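The patent does not specify the YC conversion coefficients; a minimal sketch assuming the common ITU-R BT.601 definition is:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an RGB endoscopic image (H x W x 3, float) into luminance Y and
    color difference channels Cb, Cr. BT.601 coefficients are an assumption;
    the patent only names the Y/Cb/Cr representation."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)
    cr = 0.713 * (r - y)
    return y, cb, cr
```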
The noise reduction unit 54 applies noise reduction processing, using for example a moving average method or a median filter method, to the endoscopic image having the luminance channel Y and the color difference channels Cb and Cr. The conversion unit 55 reconverts the noise-reduced luminance channel Y and color difference channels Cb and Cr into an endoscopic image having B, G, and R color channels again.
The image processing unit 56 performs the necessary image processing and calculations on the endoscopic images output by the image acquisition unit 52. The image processing unit 56 generates a plurality of types of recognition target images based on the endoscopic images output by the image acquisition unit 52 and controls the continuous display of at least one of them on the display. It performs image recognition processing on the plurality of types of recognition target images in parallel for each type of recognition target image and acquires the recognition processing results for each type. Based on the recognition processing results of all acquired types of recognition target images, it controls the recording operation when a moving image of at least one type of recognition target image is recorded.
 図9に示すように、画像処理部56は、通常観察画像処理部61、特殊観察画像処理部62、及び診断支援画像処理部63を備える。通常観察画像処理部61は、入力した1フレーム分のRc画像信号、Gc画像信号、及びBc画像信号に対して、通常観察画像用画像処理を施す。通常観察画像用画像処理には、3×3のマトリクス処理、階調変換処理、3次元LUT(Look Up Table)処理等の色変換処理、色彩強調処理、又は空間周波数強
調等の構造強調処理が含まれる。通常観察画像用画像処理が施されたRc画像信号、Gc画像信号、及びBc画像信号は、通常観察画像として表示制御部57に入力する。
As shown in FIG. 9, the image processing unit 56 includes a normal observation image processing unit 61, a special observation image processing unit 62, and a diagnosis support image processing unit 63. The normal observation image processing unit 61 performs image processing for normal observation images on the input Rc image signal, Gc image signal, and Bc image signal for one frame. Image processing for normal observation images includes 3 × 3 matrix processing, gradation conversion processing, color conversion processing such as three-dimensional LUT (Look Up Table) processing, color enhancement processing, and structure enhancement processing such as spatial frequency enhancement. included. The Rc image signal, the Gc image signal, and the Bc image signal that have been subjected to image processing for a normal observation image are input to the display control unit 57 as a normal observation image.
The special observation image processing unit 62 applies image processing for special observation images to the input Rs, Gs, and Bs image signals for one frame. Image processing for special observation images includes color conversion processing such as 3×3 matrix processing, gradation conversion processing, and three-dimensional LUT (Look Up Table) processing, as well as color enhancement processing and structure enhancement processing such as spatial frequency enhancement. The Rs, Gs, and Bs image signals that have undergone this processing are input to the display control unit 57 as the special observation image.
The diagnosis support image processing unit 63 performs the image processing and so on in the diagnosis support mode. As shown in FIG. 10, the diagnosis support image processing unit 63 includes a recognition target image generation unit 71, an image recognition processing unit 72, a recognition result acquisition unit 73, a recording control unit 74, and a display image generation unit 75.
The recognition target image generation unit 71 generates and acquires a plurality of types of recognition target images based on the endoscopic images output by the image acquisition unit 52. The types of recognition target images are distinguished by one or both of the following two criteria. The first is the spectral spectrum of the illumination light used when photographing the observation target: the recognition target image generation unit 71 acquires, each as one type of recognition target image, images obtained by photographing the observation target illuminated by each of the plurality of illumination lights with mutually different spectral spectra emitted by the light source unit. The second is the method of image processing for recognition target image generation, which includes color expansion processing, structure enhancement processing, and the like. When distinguishing recognition target images by this criterion, performing no image processing for recognition target image generation also counts as one method; an endoscopic image output by the image acquisition unit 52 to which no such processing is applied is therefore also one type of recognition target image.
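One way to picture these two criteria is as a key made of the (illumination spectrum, generation processing) pair, with "none" counting as a processing choice. The entries below are illustrative, using hypothetical labels that anticipate the first to third image types described later:

```python
# Illustrative mapping: a recognition target image type is identified by the
# pair (illumination spectral spectrum, recognition-target-image generation
# processing). "none" is itself a valid processing choice.
IMAGE_TYPE = {
    ("white", "none"):         "first recognition target image",
    ("SP1",   "pseudo-color"): "second recognition target image",
    ("SP2",   "mono-assign"):  "third recognition target image",
}
```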
Accordingly, images that differ in the combination of the illumination light spectral spectrum and the image processing for recognition target image generation are also distinct types: recognition target images that differ in either the spectral spectrum of the illumination light or the image processing are different types of recognition target images.
As shown in FIG. 11, the recognition target image generation unit 71 has acquisition units distinguished by the spectral spectrum of the illumination light used when photographing the observation target and/or by the method of image processing for recognition target image generation. That is, the recognition target image generation unit 71 has a generation unit for each type of recognition target image: a first recognition target image generation unit 81, a second recognition target image generation unit 82, a third recognition target image generation unit 83, a fourth recognition target image generation unit 84, a fifth recognition target image generation unit 85, and an n-th recognition target image generation unit 86, where n is an integer of 6 or more. In this embodiment, each performs the following illumination and/or image processing for recognition target image generation.
The first recognition target image generation unit 81 performs first image processing for generating the first recognition target image. The first image processing is applied to the B1, G1, and R1 image signals obtained by emitting the first illumination light, white light with the spectral spectrum for the first illumination light. The first image processing is the same as the normal observation image processing in the normal observation image processing unit 61 and yields a first recognition target image equivalent to the normal observation image. The first recognition target image is one type of recognition target image; the recognition target image generation unit 71 thus acquires, as one type of recognition target image, an image obtained by photographing the observation target illuminated with white illumination light.
The second recognition target image generation unit 82 performs second image processing for generating the second recognition target image. The second image processing is applied to the B2, G2, and R2 image signals obtained by emitting the second illumination light L2 with the second-illumination-light spectral spectrum SP1. As shown in FIG. 12, the second illumination light L2 emitted with the spectral spectrum SP1 is preferably light in which the violet light V has a larger peak intensity than the blue light B, green light G, and red light R of the other colors. The second image processing is pseudo-color processing that assigns the B2 image signal to the B and G channels for display and the G2 image signal to the R channel for display. This pseudo-color processing yields a second recognition target image in which blood vessels or structures at a specific depth, such as superficial blood vessels, are emphasized. The second recognition target image is one type of recognition target image.
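A minimal sketch of this pseudo-color channel assignment, treating the image signals as NumPy arrays:

```python
import numpy as np

def pseudo_color(b2, g2):
    """Second image processing as described above: assign the B2 image signal
    to the display B and G channels and the G2 image signal to the display R
    channel (stacked here in R, G, B order)."""
    return np.stack([g2, b2, b2], axis=-1)
```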
The third recognition target image generation unit 83 performs third image processing for generating the third recognition target image. The third image processing is applied to the B2, G2, and R2 image signals obtained by emitting the second illumination light with the second-illumination-light spectral spectrum SP2. As shown in FIG. 13, the second illumination light emitted with the spectral spectrum SP2 is preferably light that emits only the violet light V (peak wavelength, for example, 400 to 420 nm). The third image processing assigns the B2 image signal to the B, G, and R channels for display and adjusts the color tone and gradation balance. The third image processing yields a third recognition target image in which extreme superficial blood vessels, shallower than the superficial blood vessels, are emphasized. The third recognition target image is one type of recognition target image.
The fourth recognition target image generation unit 84 performs fourth image processing for generating a fourth recognition target image. The fourth image processing is applied to the B1, G1, and R1 image signals obtained by emitting the first illumination light and, in addition, to the B2, G2, and R2 image signals obtained by emitting the second illumination light with the second-illumination-light spectral spectrum SP3. As shown in FIG. 14, the second-illumination-light spectral spectrum SP3 is preferably blue-violet light VB (with a peak wavelength of, for example, 470 to 480 nm), which lies in a wavelength range where the extinction coefficients of oxyhemoglobin and deoxyhemoglobin differ.
As shown in FIG. 15, the fourth recognition target image generation unit 84 includes an oxygen saturation signal ratio calculation unit 84a that performs signal ratio calculation processing to calculate a first signal ratio (B2/G1) representing the ratio of the B2 image signal to the G1 image signal and a second signal ratio (R1/G1) representing the ratio of the R1 image signal to the G1 image signal; an oxygen saturation calculation unit 84c that refers to an oxygen saturation calculation table 84b to calculate the oxygen saturation corresponding to the first and second signal ratios; and an oxygen saturation image generation unit 84d that generates an oxygen saturation image based on the oxygen saturation. The oxygen saturation image is the fourth recognition target image obtained by the fourth image processing. The fourth recognition target image is one type of recognition target image.
The oxygen saturation calculation table 84b stores the correlation between the oxygen saturation and the first and second signal ratios. Specifically, as shown in FIG. 16, the oxygen saturation calculation table 84b is a two-dimensional table that defines oxygen saturation isolines ELx, EL1, EL2, EL3, ELy, and so on in a two-dimensional space whose axes are the first signal ratio (B2/G1) and the second signal ratio (R1/G1). For example, the isoline ELx represents an oxygen saturation of 0%, the isoline EL1 represents 30%, the isoline EL2 represents 50%, and the isoline EL3 represents 80%. The positions and shapes of the isolines with respect to the first signal ratio (B2/G1) and the second signal ratio (R1/G1) are obtained in advance by physical simulation of light scattering. The first signal ratio (B2/G1) and the second signal ratio (R1/G1) are preferably on a log scale.
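As an illustration only, a minimal sketch of this table lookup, assuming the simulated table has been precomputed as a 2-D grid indexed by the log-scaled signal ratios; the grid, its bin edges, and all names here are hypothetical:

```python
import numpy as np

def oxygen_saturation(b2, g1, r1, table, ratio1_bins, ratio2_bins):
    """Look up per-pixel oxygen saturation from the two signal ratios.

    table       -- 2-D array of StO2 values obtained in advance by simulation
    ratio1_bins -- bin edges for log10(B2/G1) along axis 0
    ratio2_bins -- bin edges for log10(R1/G1) along axis 1
    """
    eps = 1e-6  # guard against division by zero and log of zero
    ratio1 = np.log10((b2 + eps) / (g1 + eps))  # first signal ratio, log scale
    ratio2 = np.log10((r1 + eps) / (g1 + eps))  # second signal ratio, log scale
    i = np.clip(np.digitize(ratio1, ratio1_bins) - 1, 0, table.shape[0] - 1)
    j = np.clip(np.digitize(ratio2, ratio2_bins) - 1, 0, table.shape[1] - 1)
    return table[i, j]  # oxygen saturation (%) per pixel
```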
The fifth recognition target image generation unit 85 performs fifth image processing for generating a fifth recognition target image. The fifth image processing is color expansion processing; specifically, it is applied to the B2, G2, and R2 image signals obtained by emitting the second illumination light with the second-illumination-light spectral spectrum SP4. As shown in FIG. 17, the second-illumination-light spectrum SP4 is preferably light in which the peak intensities of the violet light V and the blue light B are larger than those of the green light G and the red light R. In addition, the intensity of the red light R is preferably larger than in the second-illumination-light spectrum SP2.
As shown in FIG. 18, for the fifth image processing there are provided a color-difference-expansion signal ratio calculation unit 85a that performs signal ratio calculation processing to calculate a first signal ratio (B2/G2) representing the ratio of the B2 image signal to the G2 image signal and a second signal ratio (G2/R2) representing the ratio of the G2 image signal to the R2 image signal; a color difference expansion processing unit 85b that performs color difference expansion processing to expand the color differences between a plurality of observation target ranges based on the first and second signal ratios; and a color difference expansion image generation unit 85d that generates a color-difference-expanded image based on the first and second signal ratios after the color difference expansion processing. The color-difference-expanded image is the fifth recognition target image obtained by the fifth image processing. The fifth recognition target image is one type of recognition target image.
As shown in FIG. 19, the color difference expansion processing preferably expands the distances between a plurality of observation target ranges in the two-dimensional space formed by the first signal ratio (B2/G2) and the second signal ratio (G2/R2). Specifically, in this two-dimensional space, it is preferable to expand the distance between the first range (denoted "1") and the second range (denoted "2"), the distance between the first range and the third range (denoted "3"), and the distance between the first range and the fourth range (denoted "4"), while keeping the position of the first range unchanged before and after the color difference expansion processing. The color difference expansion processing is preferably performed by converting the first and second signal ratios into polar coordinates and then adjusting the radius and the angle. The first range is preferably a normal portion with no lesions or the like, and the second to fourth ranges are preferably abnormal portions where lesions or the like may exist. The color difference expansion processing widens the range A1 in the two-dimensional space before the processing to the range A2 after the processing, so the color differences are emphasized; for example, the resulting image emphasizes the color difference between abnormal portions and normal portions.
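For illustration, one possible form of this polar-coordinate adjustment is sketched below. The specification does not state the exact adjustment, so the uniform radial and angular gains, the fixed reference point at the first (normal) range, and all names are assumptions:

```python
import numpy as np

def expand_color_difference(ratio1, ratio2, center, radial_gain=1.5, angle_gain=1.2):
    """Expand distances from a reference point (the normal-mucosa range)
    in the (B2/G2, G2/R2) signal-ratio plane via polar coordinates."""
    # Shift so the first range sits at the origin; because its radius is
    # zero, its position is unchanged by the expansion, as described.
    dx = ratio1 - center[0]
    dy = ratio2 - center[1]
    r = np.hypot(dx, dy)            # radius measured from the normal range
    theta = np.arctan2(dy, dx)      # angle measured around the normal range
    r_new = r * radial_gain         # push abnormal ranges farther away
    theta_new = theta * angle_gain  # spread the ranges apart in angle
    return (center[0] + r_new * np.cos(theta_new),
            center[1] + r_new * np.sin(theta_new))
```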
As described above, a plurality of types of recognition target images are generated by performing image processing of various kinds on endoscopic images. The n-th recognition target image generation unit 86 generates the n-th type of recognition target image. The method or content of the image processing is not limited to the above. For example, in addition to color difference expansion processing, enhancement processing such as structure enhancement processing may be performed. The types of recognition target images are distinguished by the presence or absence of enhancement processing on the endoscopic image, or by the type of enhancement processing, and each distinguished image is acquired as one type of recognition target image. The endoscopic image subjected to the enhancement processing may or may not have undergone any one of the first to n-th image processings.
The structure enhancement processing is processing performed on an acquired endoscopic image so that the blood vessels in the observation target are represented with emphasis. Specifically, the endoscopic image used is either the B1, G1, and R1 image signals obtained by emitting the first illumination light, or the B2, G2, and R2 image signals obtained by emitting the second illumination light. In the structure enhancement processing, a density histogram, that is, a graph with the pixel value (luminance value) on the horizontal axis and the frequency on the vertical axis, is obtained for the acquired endoscopic image, and gradation correction is performed using a gradation correction table stored in advance in a memory (not shown) of the image processing unit 56 or the like. The gradation correction table has a gradation correction curve, with the horizontal axis representing input values and the vertical axis representing output values, that indicates the correspondence between input and output; for example, gradation correction based on a roughly S-shaped correction curve widens the dynamic range of the acquired endoscopic image. As a result, low-density portions of the original image before the structure enhancement become lower in density and high-density portions become higher, so that, for example, the density difference between blood vessel regions and regions without blood vessels widens and the contrast of the blood vessels improves. An endoscopic image processed by the structure enhancement processing therefore has improved blood vessel contrast and enhanced visibility of the blood vessel structure, and can be used more easily and accurately for determinations such as identifying a region with a high density of blood vessels as a specific region.
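For illustration, a minimal sketch of such a gradation correction table and its application; the logistic form of the S-curve and its gain are assumptions, since the specification only requires a roughly S-shaped correction curve:

```python
import numpy as np

def build_s_curve_lut(gain: float = 8.0) -> np.ndarray:
    """Hypothetical roughly S-shaped gradation correction table for 8-bit
    input: dark pixels are pushed darker and bright pixels brighter,
    widening the dynamic range as described for the structure enhancement."""
    x = np.linspace(0.0, 1.0, 256)
    y = 1.0 / (1.0 + np.exp(-gain * (x - 0.5)))  # logistic S-curve
    y = (y - y.min()) / (y.max() - y.min())      # normalize to [0, 1]
    return np.round(y * 255).astype(np.uint8)

def enhance_structure(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Apply the gradation correction table to an 8-bit image signal."""
    return lut[image]
```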
The recognition target image generation unit 71 also preferably generates, as one type of recognition target image, an image obtained by photographing an observation target illuminated with illumination light containing violet light V and/or blue light B, which is preferably narrow-band light. The generated recognition target images are sent to the image recognition processing unit 72.
The image recognition processing unit 72 performs image recognition processing on the plurality of types of recognition target images in parallel, for each type of recognition target image. The image recognition processing is performed in order to output, as a recognition processing result, a specific state appearing in the image of the observation target. As shown in FIG. 20, the image recognition processing unit 72 includes a first image recognition unit 91, a second image recognition unit 92, a third image recognition unit 93, a fourth image recognition unit 94, a fifth image recognition unit 95, and an n-th image recognition unit 96, provided for each type of recognition target image. Here n is an integer of 6 or more, and the number of image recognition units corresponds to the number of types of recognition target images. The image recognition processing unit 72 thus includes one image recognition unit per type of recognition target image, and each image recognition unit performs image recognition processing on the corresponding recognition target image in parallel with, and independently of, the others. Accordingly, the first image recognition unit 91 performs image recognition processing on the first recognition target image, the second image recognition unit 92 on the second recognition target image, the third image recognition unit 93 on the third recognition target image, the fourth image recognition unit 94 on the fourth recognition target image, the fifth image recognition unit 95 on the fifth recognition target image, and the n-th image recognition unit 96 on the n-th recognition target image.
These image recognition processes are performed in parallel and independently. How many of the first to n-th image recognition units are used is set according to how many types of recognition target images are acquired. For example, when a plurality of recognition target images are acquired for one of the acquired types, it suffices to perform image recognition processing on at least one of those images; however, performing image recognition processing on all of the acquired recognition target images of all types is preferable because it improves the recognition accuracy and also improves the accuracy of controlling the moving-image recording operation.
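As a sketch only, the parallel, per-type dispatch could look like the following; the recognizer stand-ins are hypothetical placeholders for the units described below, and a thread pool is merely one possible execution model:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the per-type recognizers (the real units
# would wrap trained models); each maps an image to a result string.
recognizers = {
    "IM1":  lambda img: "detected",        # first image recognition
    "IM2a": lambda img: "fail-judgment",   # third image recognition
    "IM2b": lambda img: "pass-judgment",   # fifth image recognition
}

def recognize_all(images: dict) -> dict:
    """Run the recognition processes for all image types in parallel
    and independently, one recognizer per recognition target image type."""
    with ThreadPoolExecutor() as pool:
        futures = {t: pool.submit(recognizers[t], img)
                   for t, img in images.items() if t in recognizers}
        return {t: f.result() for t, f in futures.items()}
```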
For the image recognition processing, conventional image recognition methods can be used. In the image recognition processing unit 72, the first image recognition unit 91 through the n-th image recognition unit 96 may all perform image recognition processing by the same method, or each may use a different method according to the type of recognition target image. When different methods are used according to the type of recognition target image, it is preferable to select, for each type, an image recognition method that yields good processing results.
Examples of image recognition methods include methods that perform pattern recognition or the like using image processing, and methods that use machine learning techniques. Specific examples include methods that use image-based values such as the pixel values and/or luminance values of the recognition target image, methods that use values of biological information such as the oxygen saturation calculated from the image, and methods that use correspondence information in which a specific state of the observation target is associated in advance with recognition target images obtained by photographing an observation target including that specific state. By these image recognition methods, the specific state of the observation target, obtained by detecting a site or the like in the observation target shown in the recognition target image or by determining a disease or the like, is output as the recognition processing result.
The specific state of the observation target refers to a site, a specific structure, or a non-biological object such as a treatment tool included in the observation target, or to the presence or absence of a lesion, the name of a lesion or disease, the probability or degree of progression of a lesion or disease, a distinctive biological information value, or the like. The specific state of the observation target also includes the accuracy or the like of the image recognition result obtained by the image recognition processing. The site is preferably a characteristic site that appears in endoscopic images. For the upper gastrointestinal tract, for example, it is the esophagus, the cardia, the gastric fornix, the gastric body, the pylorus, the angular incisure, or the duodenal bulb; for the large intestine, it is the cecum, the ileocecal region, the ascending colon, the transverse colon, the descending colon, the sigmoid colon, or the rectum. Specific structures include blood vessels, gland ducts, raised portions such as polyps or cancers, and depressed portions; non-biological objects include treatment tools such as biopsy forceps, snares, and foreign-body retrieval devices that can be attached to an endoscope, and abdominal treatment tools used in laparoscopic surgery. Lesion or disease names include those found in endoscopy of the upper gastrointestinal tract or large intestine, for example inflammation, redness, bleeding, ulcers, or polyps, or gastritis, Barrett's esophagus, cancer, or ulcerative colitis. A biological information value is a value of biological information about the observation target, such as the oxygen saturation, the blood vessel density, or a fluorescence value due to a dye.
The image recognition processing unit 72 preferably includes correspondence information acquisition units that acquire correspondence information in which a specific state of the observation target is associated in advance with recognition target images obtained by photographing an observation target in that specific state. The image recognition processing unit 72 then preferably performs image recognition processing on a newly acquired recognition target image based on the correspondence information. The correspondence information is information that associates, when the specific state of an observation target is known in advance, the recognition target image obtained by photographing that observation target with information such as the specific state of the observation target or the region of the specific state.
As shown in FIG. 21, by inputting a newly acquired recognition target image whose specific state is unknown to a correspondence information acquisition unit, the specific state in the newly acquired recognition target image can be estimated and output as the recognition processing result, using the correspondence information held by that unit, in which recognition target images are associated with specific states of the observation target. Each correspondence information acquisition unit may also perform learning or feedback in which the newly acquired recognition target image and the specific state included in the recognition processing result output by the estimation are further acquired as correspondence information.
Each image recognition unit includes a correspondence information acquisition unit that holds correspondence information for the corresponding recognition target image. The correspondence information acquisition unit performs image recognition processing on the recognition target image based on the correspondence information and outputs details such as the region related to the specific state of the observation target included in the recognition target image. The output of details regarding the specific state also includes content such as "the specific state is not included".
Depending on the type of recognition target image, the specific state of the observation target for which the image recognition processing can obtain good results may differ. It is therefore preferable for each type of recognition target image to have a correspondence information acquisition unit associated with a particular kind of specific state, so that each type of recognition target image yields good results from the image recognition processing. For example, when a type of recognition target image is one in which blood vessels are emphasized, the image recognition unit corresponding to this type is given correspondence information associated with specific states related to the blood vessels of the observation target, and is made to output specific states related to those blood vessels.
As shown in FIG. 20, the first image recognition unit 91 includes a first correspondence information acquisition unit 101 and performs first image recognition processing on the first recognition target image. Similarly, the second image recognition unit 92 includes a second correspondence information acquisition unit 102 and performs second image recognition processing on the second recognition target image; the third image recognition unit 93 includes a third correspondence information acquisition unit 103 and performs third image recognition processing on the third recognition target image; the fourth image recognition unit 94 includes a fourth correspondence information acquisition unit 104 and performs fourth image recognition processing on the fourth recognition target image; the fifth image recognition unit 95 includes a fifth correspondence information acquisition unit 105 and performs fifth image recognition processing on the fifth recognition target image; and the n-th image recognition unit 96 includes an n-th correspondence information acquisition unit 100 and performs n-th image recognition processing on the n-th recognition target image.
Each correspondence information acquisition unit is, for example, a trained model in machine learning. Because the specific state of the observation target in a newly acquired recognition target image can then be obtained as the image recognition result more quickly or more accurately, it is preferable to perform the image recognition processing using a trained machine learning model as the correspondence information acquisition unit. In this embodiment, each correspondence information acquisition unit performs image recognition processing for outputting the specific state of the observation target using a trained model in machine learning. In this case, to obtain good recognition processing results, it is preferable to use trained models each trained on the corresponding type of recognition target image. Therefore, for example, the first correspondence information acquisition unit 101 and the second correspondence information acquisition unit 102 are preferably trained models that differ from each other.
The plurality of types of recognition target images include at least a first recognition target image and a second recognition target image of mutually different types, and the image recognition processing unit 72 preferably performs first image recognition processing on the first recognition target image and performs, on the second recognition target image, second image recognition processing different from the first image recognition processing. The first image recognition processing is preferably performed for detecting a specific site, or an object other than living tissue, in the first recognition target image. The second image recognition processing is preferably performed for determining that the second recognition target image includes a region in a specific state.
In this embodiment, a first recognition target image similar to the normal observation image, obtained using the first illumination light L1; a third recognition target image in which extremely superficial blood vessels shallower than the superficial blood vessels are emphasized, obtained using the second illumination light L2a; and a fifth recognition target image, which is a color-difference-expanded image obtained using the second illumination light L2b, are used, and the first image recognition unit 91, the third image recognition unit 93, and the fifth image recognition unit 95 perform image recognition processing on the respective recognition target images. Since the first recognition target image is a normal observation image, the first image recognition processing detects specific sites well. Since the third recognition target image is an image in which extremely superficial blood vessels and the like are emphasized, the third image recognition processing performs well in determining that lesions of the superficial mucosa and the like are included. Since the fifth recognition target image is a color-difference-expanded image, the fifth image recognition processing performs well in determining, as a specific region of the observation target, for example, a region of severe ulcerative colitis among the severity levels. The severity of ulcerative colitis is classified into, for example, mild, moderate, severe, or fulminant.
As shown in FIG. 22, in this embodiment the first illumination light L1 is emitted in the first-A emission pattern of five frames and the second illumination light L2 is emitted in the second-B emission pattern of one frame, and endoscopic images are acquired. For the second illumination light, the second illumination light L2b and the second illumination light L2a are switched in turn. The fifth recognition target image IM2b is acquired based on the endoscopic image obtained with the second illumination light L2b, and the third recognition target image IM2a is acquired based on the endoscopic image obtained with the second illumination light L2a. In FIG. 22, the recognition target images obtained with the second illumination light L2 are shown hatched, and the third recognition target image IM2a obtained with the second illumination light L2a and the fifth recognition target image IM2b obtained with the second illumination light L2b are distinguished by different types of hatching. The first recognition target image IM1 is acquired based on the endoscopic image obtained with the first illumination light L1.
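For illustration, this frame sequence can be sketched as a simple generator; the string labels are hypothetical identifiers for the light sources:

```python
from itertools import cycle

def emission_sequence():
    """Frame sequence of this embodiment: five frames of the first
    illumination light L1, then one frame of the second illumination
    light, alternating L2b and L2a on successive second-light frames."""
    second = cycle(["L2b", "L2a"])
    while True:
        for _ in range(5):
            yield "L1"
        yield next(second)
```

Taking the first twelve values (for example with itertools.islice) gives five L1 frames, one L2b frame, five more L1 frames, and one L2a frame, matching the pattern of FIG. 22.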
The first image recognition unit 91 detects a specific site, for example the rectum, in the first recognition target image IM1 and generates a first recognition processing result including information about the detection. The third image recognition unit 93 determines whether the third recognition target image IM2a includes a region where the severity of ulcerative colitis is severe, and generates a third recognition processing result including information about the determination. The fifth image recognition unit 95 determines whether the fifth recognition target image IM2b includes a bleeding spot, and generates a fifth recognition processing result including information about the determination.
The information about detection includes at least "detected", meaning that there was a detection, or "not detected", meaning that there was none. The information about a determination includes at least a "pass determination", meaning that the determination condition is satisfied, or a "fail determination", meaning that it is not. Therefore, the first recognition processing result includes the information "detected" when it is estimated that the first recognition target image includes the rectum in the observation target. The third recognition processing result includes the information "pass determination" when it is estimated that the third recognition target image includes a region of severe ulcerative colitis in the observation target. The fifth recognition processing result includes the information "pass determination" when it is estimated that the fifth recognition target image includes a bleeding spot in the observation target.
The recognition result acquisition unit 73 acquires the recognition processing results of the recognition target images obtained by the image recognition processing of each image recognition unit. The recognition result acquisition unit 73 acquires recognition processing results based on the recognition processing results of all acquired types of recognition target images. Each recognition processing result preferably includes information on whether the corresponding recognition target image satisfies a preset condition. The preset condition is that the recognition processing result obtained by performing image recognition processing on the recognition target image is "detected" or a "pass determination"; when the recognition processing result includes the information "detected" or "pass determination", it includes the information that the condition is satisfied.
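As an illustrative sketch, a recognition processing result and its condition check might be modeled as follows; the container and its field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    """Hypothetical container for one recognition processing result."""
    image_type: str   # e.g. "IM1", "IM2a", "IM2b"
    outcome: str      # "detected" / "not detected" / "pass" / "fail"

    @property
    def satisfies_condition(self) -> bool:
        # The preset condition: the result is "detected" or a pass determination.
        return self.outcome in ("detected", "pass")
```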
In this embodiment, three types of recognition processing results are acquired: the first recognition processing result, which is the result of detecting a specific site, and the third and fifth recognition processing results, which are the results of determining whether a region in a specific state is included. The third recognition processing result is the determination of whether a lesion is included, and the fifth recognition processing result is the determination of whether a region of severe ulcerative colitis is included.
As shown in FIG. 23, the first recognition target images IM1 in which a specific site is detected are shown with a dot pattern. The "first recognition" row indicates what the first image recognition processing detects. The "result" row one line below "first recognition" indicates the detection information included in the first recognition processing result. Similarly, the "fifth recognition" row indicates what the fifth image recognition processing determines, and the "result" row below it indicates the determination information included in the fifth recognition processing result. Likewise, the "third recognition" row indicates what the third image recognition processing determines, and the "result" row below it indicates the determination information included in the third recognition processing result.
In the "result" row for "first recognition", the entry "specific site detected" indicates that information that a specific site was detected was obtained from the first recognition target image IM1x, and the entry "specific site not detected" for the subsequent first recognition target image IM1y indicates that information that the specific site was not detected was obtained from the first recognition target image IM1y. The suffix "x" attached to a recognition target image denotes the first image for which "detected" or a "pass determination" was obtained in the image recognition processing of that type of recognition target image, and the suffix "y" denotes the first image for which "not detected" or a "fail determination" was obtained after "detected" or a "pass determination" had been obtained. The "result" entries for images with the suffix "x" or "y" are underlined. In the following figures, the same suffixes indicate the same meanings.
As shown in FIG. 23, when a specific site is detected in the first recognition target image IM1x, the first recognition processing result includes the information that the recognition target image IM1x "detected a specific site". In this embodiment, a specific site was first detected in the first recognition target image IM1x, the specific site continued to be detected in the first recognition target images IM1 obtained thereafter, and detection of the specific site continued until the specific site was no longer detected in the first recognition target image IM1y. During the period shown in FIG. 23, the third and fifth recognition processing results did not include a single "pass determination".
The recognition result acquisition unit 73 acquires recognition processing results as soon as the image recognition processing unit 72 generates a recognition processing result based on each recognition target image. When a new recognition processing result of the same type is obtained, the recognition result acquisition unit 73 deletes the immediately preceding recognition processing result of that type, so that it always holds the latest recognition processing result for every type of recognition target image. "All types of recognition target images" preferably means all types within one emission cycle determined by the emission pattern of the illumination light or the like. The emission pattern of the illumination light can be switched arbitrarily or according to the recognition processing results and the like; in that case, it is preferable to use all types within the emission cycle after the switch. The acquired recognition processing results of all types of recognition target images are sent to the recording control unit 74.
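A minimal sketch of this bookkeeping, reusing the hypothetical RecognitionResult container above, could keep one latest result per image type:

```python
latest_results: dict = {}  # image type -> newest RecognitionResult

def update_latest(result) -> None:
    """Keep only the newest recognition processing result per image type;
    assigning by type key replaces (deletes) the immediately preceding
    result of the same type."""
    latest_results[result.image_type] = result
```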
The recording control unit 74 controls the recording operation when recording a moving image of at least one type of recognition target image among the plurality of types, based on the acquired recognition processing results of all types of recognition target images. "Based on the recognition processing results of all acquired types of recognition target images" means controlling the recording operation after examining, for all types, at least one recognition processing result per type acquired by the recognition result acquisition unit 73.
The moving image of the recognition target image that is the target of recording can be any one type, or two or more types, of the plurality of acquired types of recognition target images. When recording a moving image of one type among the plurality of types, the first recognition target image is preferable, because the first recognition target image is a normal observation image, that is, a moving image captured in natural colors, and the recorded moving image therefore has a wide range of uses and is generally useful.
The recording operation refers to instructions related to the execution of recording; for example, when recording is not being performed, it includes starting the recording, and when recording is already being performed, it includes continuing and stopping the recording. It also includes other operations normally performed in connection with recording, such as pausing, or recording or stopping for a fixed period only. The recording control unit 74 controls operations such as starting and stopping recording based on the recognition processing results of all types, which are based on the plurality of types of recognition target images acquired by the recognition result acquisition unit 73.
The recording control unit 74 preferably starts or continues recording when at least one of the recognition processing results of all types of recognition target images acquired by the recognition result acquisition unit 73 includes the information that the condition is satisfied. It preferably stops recording when all of those recognition processing results include the information that the condition is not satisfied and recording is continuing.
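For illustration, this start/continue/stop rule can be sketched as follows, assuming a hypothetical recorder object exposing is_recording, start(), and stop(), and the latest_results store from the sketch above:

```python
def control_recording(recorder, latest_results) -> None:
    """Start or continue recording when at least one of the latest
    per-type results satisfies the condition; stop only when all of
    them fail it while recording is in progress."""
    any_positive = any(r.satisfies_condition for r in latest_results.values())
    if any_positive and not recorder.is_recording:
        recorder.start()
    elif not any_positive and recorder.is_recording:
        recorder.stop()
    # Otherwise no recording operation is performed, which in practice
    # means the current state (recording or stopped) simply continues.
```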
In FIG. 24, in this embodiment the recording control unit 74 controls the recording operation when recording a moving image of the first recognition target image IM1, based on the recognition processing results of three types of recognition target images: the first recognition target image IM1, the fifth recognition target image IM2b, and the third recognition target image IM2a. In the recording control unit 74, when a new first recognition processing result for the first recognition target image IM1 is acquired, it replaces the first recognition processing result acquired immediately before; likewise, when a new result for the fifth recognition target image IM2b or the third recognition target image IM2a is acquired, it replaces the one acquired immediately before. In this way, the recording operation is controlled based on the latest results for the plurality of types of recognition target images.
At time t1, when the first image recognition processing of the first recognition target image IM1x yields a recognition processing result including "detected", the recording control unit 74, based on both this result and the "fail determination" of the latest fifth recognition target image IM2b, performs control to start recording, since recording has not yet been started. If recording had already been started, it would perform control to continue recording, which in practice means continuing the recording by performing no recording operation. Recording continues as long as the information "detected" or "pass determination" is included in the recognition processing result of the first or third recognition target image. Thereafter, at time t2, when the recognition processing result of the first recognition target image IM1y includes "not detected" and the latest third recognition target image IM2a also yields a "fail determination", both being negative recognition processing results, control is performed to stop the recording. A negative recognition processing result is "not detected" or a "fail determination", and a positive recognition processing result is "detected" or a "pass determination". In this embodiment, a recording file in which the first recognition target image is recorded from time t1 to time t2 is thus obtained.
When a positive recognition processing result is acquired from the first recognition target image IM1x, as in this embodiment, the type of the second illumination light L2 may be switched from the second illumination light L2b to the second illumination light L2a, and the recognition target image to be acquired may be switched from the fifth recognition target image IM2b to the third recognition target image IM2a. In this way, when a specific site is recognized in the first recognition target image IM1, the illumination is automatically switched to the second illumination light L2 that has been set in advance as appropriate for observing that specific site, which further reduces overlooked lesions and the like.
The display image generation unit 75 generates a display image to be displayed on the display 18, using at least one type of recognition target image among the plurality of types. For example, when the first recognition target image, which is similar to the normal observation image, is continuously displayed on the display 18, the display image generation unit 75 performs image processing to turn the first recognition target image into a display image and generates the display image. The generated display image is sent to the display control unit 57.
In the normal observation mode, the display control unit 57 displays the normal image on the display 18, and in the special observation mode, it displays the special observation image on the display 18. In the diagnosis support mode, it performs control to continuously display at least one type of recognition target image on the display 18. In frames in which the type of recognition target image to be displayed on the display 18 is not acquired, the display control unit 57 performs control such as continuing to display the display image acquired immediately before. In this embodiment, since a display image based on the first recognition target image, which is similar to the normal image, is sent to the display control unit 57, the normal image is continuously displayed on the display 18. Although not displayed by default, the acquired fifth recognition target image and/or third recognition target image may also be displayed on the display 18 as display images in response to an instruction.
With the above configuration, the processor device 16 functioning as an image processing device, or the endoscope system 10 including the image processing device, controls the operation of recording moving images of recognition target images based on the recognition processing results of all acquired types of recognition target images, so recording is controlled automatically and the burden of manual recording control is eliminated. Moreover, since the recording operation can be controlled in more detail than when it is controlled using the recognition processing result of a single type of recognition target image, recording that flexibly matches the purpose can be performed automatically. Further, since the recording operation is controlled in detail, there is no need to save recordings of unnecessary scenes, storage can be saved, recordings are easier to search when used later, and the usability of endoscopy records is improved. In addition, since a plurality of types of recognition target images are used, even lesions that are difficult to recognize in the image displayed on the display 18 may be recorded automatically, which can reduce overlooked lesions and the like. In this way, the processor device 16, the endoscope system 10, and the like can efficiently record endoscopic images and the like when image recognition processing is performed in parallel based on a plurality of types of endoscopic images.
In the case of FIG. 24, the recording operation was controlled by the first recognition processing result detecting or not detecting a specific site, but the recording operation may also be controlled based on recognition processing results other than the first recognition processing result. As shown in FIG. 25, in this case the first illumination light L1 is emitted in the first-A emission pattern of five frames and the second illumination light L2 is emitted in the second-B emission pattern of one frame, and endoscopic images are acquired. For the second illumination light, the second illumination light L2b and the second illumination light L2c are switched in turn. The fifth recognition target image IM2b is acquired based on the endoscopic image obtained with the second illumination light L2b, and the second recognition target image IM2c is acquired based on the endoscopic image obtained with the second illumination light L2c. In FIG. 25, the recognition target images obtained with the second illumination light L2 are shown hatched, and the fifth recognition target image IM2b obtained with the second illumination light L2b and the second recognition target image IM2c obtained with the second illumination light L2c are distinguished by different patterns. The first recognition target image IM1 is acquired based on the endoscopic image obtained with the first illumination light L1.
At time t1, since it is determined that the recognition target image IM2bx includes a lesion region, the fifth recognition processing result comes to include the information that the condition is satisfied. Since one or more of the latest recognition processing results of all types, namely the first, fifth, and second recognition processing results, now include the information that the condition is satisfied, the recording control unit 74 starts recording. At time t1, the second recognition processing result may not yet have been acquired; however, because the fifth recognition processing result satisfies the condition, one or more of the recognition processing results of all types satisfy the condition even at time t1.
In FIG. 25, the recognition target images marked with a dot pattern are those determined to include a lesion region. The illumination pattern of the second illumination light L2 alternates, in turn, between the second illumination light L2b for the fifth recognition target image and the second illumination light L2c for the second recognition target image. Thereafter, the determination that a lesion region is included continued for the fifth recognition target image, and since there was no detection or determination satisfying the condition for the first and second recognition target images, recording was continued. Then, when the recognition target image IM2by was determined to be "non-lesion", no longer including a lesion region, there was no detection or determination satisfying the condition for the first and second recognition target images either, so recording was stopped. In this way, the recording operation can also be controlled based on determinations in the second or fifth recognition processing results. Therefore, even when no lesion or the like is detected in the first recognition target image displayed on the display 18, recording is performed automatically, so that the recording of a region of interest that may be a lesion is not missed.
 In the cases of Figs. 24 and 25, the recording operation was controlled by a positive detection or determination in a single type of recognition processing result — the first or the fifth — but the recording operation is also controlled appropriately when positive detections or determinations occur in two or more types of recognition processing results. As shown in Fig. 26, with the same illumination emission pattern as in Fig. 25, the recognition target image IM2bx is determined to include a lesion region at time t1 and recording starts. Thereafter, a treatment tool was detected in the first recognition target image IM1x, but since recording had already started, no recording operation was performed and recording simply continued. At time t2, the treatment tool was no longer detected in the first recognition target image IM1y, but since the positive lesion determination in the fifth recognition processing result continued, recording was not stopped and continued as it was. After that, the lesion determination became negative in the fifth recognition target image IM2by, and since none of the first, second, and fifth recognition processing results at that point contained a positive detection or determination, recording was stopped. In this way, the recording operation can be controlled appropriately even when more than one of the plural types of recognition processing results satisfies the condition.
 Even when the type of recognition target image is changed, the recording operation is controlled appropriately. As shown in Fig. 27, in this case the first illumination light L1 is first emitted in the first-A emission pattern of five frames and the second illumination light L2 in the second-B emission pattern of one frame to acquire endoscopic images. For the second illumination light, the second illumination light L2b and the second illumination light L2c are switched in turn. The fifth recognition target image IM2b is acquired from the endoscopic image obtained with the second illumination light L2b, and the second recognition target image IM2c from the endoscopic image obtained with the second illumination light L2c. When the first recognition processing result contains the information "detection", the second illumination light is switched to the second illumination light L2d; after the switch to L2d, the second illumination light L2d and the acquisition of the fourth recognition target image corresponding to L2d are continued until the corresponding recognition processing result contains a negative determination following a positive determination.
 In Fig. 27, the recognition target images obtained with the second illumination light L2 are hatched, and the fifth recognition target image IM2b obtained with the second illumination light L2b, the second recognition target image IM2c obtained with the second illumination light L2c, and the fourth recognition target image IM2d obtained with the second illumination light L2d are distinguished by different patterns. The first recognition target image IM1 is acquired from the endoscopic image obtained with the first illumination light L1.
 As shown in Fig. 27, first, at time t1, the fifth recognition target image IM2bx was determined to include a lesion region, and recording of the first recognition target image started from time t1. Then, at time t2, the second recognition target image IM2cx was determined to show dense blood vessels, but since recording had already started, no recording operation was performed and recording continued. At time t3, a treatment tool was detected in the first recognition target image IM1x. Triggered by this, the alternating irradiation of the second illumination light L2b and the second illumination light L2c was switched to the second illumination light L2d for the fourth recognition target image, and L2d was thereafter used continuously as the second illumination light. At time t4, the fourth recognition target image IM2dx was determined to show a bleeding point, but since recording was already in progress, recording simply continued.
 Thereafter, at time t5, the treatment tool was no longer detected in the first recognition target image IM1y. However, two types of recognition target image — the first and the fourth — were being acquired at that point, and the positive determination based on the fourth recognition target image continued, so recording continued. Then, at time t6, based on the information that the fourth recognition target image IM2d no longer yielded a bleeding-point determination, no detection or determination was positive, so the operation of stopping recording was performed. In this way, the recording operation can be controlled appropriately even when the types of recognition target image are changed from the fifth and second recognition target images to the fourth recognition target image.
 Fig. 27 shows a series of operations in an example of an endoscopic procedure. For example, when a lesion is excised using an endoscope and a treatment tool provided at the endoscope tip, recording is started by determining the lesion region from the fifth recognition processing result; a region of dense blood vessels is determined from the second recognition processing result in order to fix the extent over which the lesion is to be excised with the treatment tool; treatment is performed with the treatment tool; the post-treatment bleeding point is determined from the fourth recognition processing result; and recording is stopped when the bleeding point disappears. The recording operation can therefore be controlled appropriately over the whole sequence of the procedure.
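 The illumination switching that drives this procedure can be pictured as a small state machine. The sketch below is one possible reading of Fig. 27, with hypothetical event names (tool_detected, bleeding_negative_after_positive); the actual triggers come from the first and fourth recognition processing results, and the patent does not prescribe this structure:

    def select_second_illumination(state, events):
        # Alternate L2b / L2c (fifth and second recognition target images)
        # until the first recognition result reports "detection" of a
        # treatment tool, then hold L2d (fourth recognition target image)
        # until the bleeding-point determination turns negative after
        # having been positive.
        if state["mode"] == "alternate":
            if events.get("tool_detected"):
                state["mode"] = "hold"
                return "L2d"
            state["use_b"] = not state.get("use_b", False)
            return "L2b" if state["use_b"] else "L2c"
        if events.get("bleeding_negative_after_positive"):
            state["mode"] = "alternate"
        return "L2d"

    state = {"mode": "alternate"}
    select_second_illumination(state, {})                        # -> "L2b"
    select_second_illumination(state, {})                        # -> "L2c"
    select_second_illumination(state, {"tool_detected": True})   # -> "L2d", held thereafter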
 When recording a movie, the recording control unit 74 preferably attaches to the movie information on the condition corresponding to the recording operation. The information on the condition corresponding to the recording operation is the content of the recognition processing result that triggered the operation; recording operations include starting, continuing, and stopping the recording. To keep a record of the recognition processing results, it is also preferable to attach information on positive recognition processing results even when they did not trigger an operation. When attaching information to a movie, it may be recorded as a chapter, that is, a break point in the movie recording. By attaching the recording operations and the information about them to the movie, the recognition processing results and what caused each recording operation are clearly recorded. Therefore, when reviewing the recording, the desired recorded part can be reached easily, which improves usability.
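 A chapter-style annotation of this kind could be kept alongside the movie as simple structured data. The sketch below is illustrative only (Chapter, RecordedMovie, and annotate are invented names; the patent does not fix any metadata format):

    from dataclasses import dataclass, field

    @dataclass
    class Chapter:
        time_s: float   # position within the recording
        action: str     # "start", "continue", or "stop"
        trigger: str    # recognition result content that caused the action

    @dataclass
    class RecordedMovie:
        path: str
        chapters: list = field(default_factory=list)

        def annotate(self, time_s, action, trigger):
            # Attach the condition corresponding to the recording action as
            # a chapter mark so the desired part is easy to find on review.
            self.chapters.append(Chapter(time_s, action, trigger))

    movie = RecordedMovie("exam_001.mp4")
    movie.annotate(12.4, "start", "lesion: positive determination")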
 The movie is preferably saved as an individual recording file. Depending on the case, a plurality of recording means may be provided; by using a plurality of recording means, a plurality of recordings can be made simultaneously. For example, the first recognition target image and the like obtained with the first illumination light L1 and the second recognition target image and the like obtained with the second illumination light L2 may be recorded independently of each other. In this case, information on the type of the recorded recognition target image is preferably attached to the recording file. The storage for recording files may be built into the image processing device, may be an external storage device, or may be placed on a network to which the image processing device is connected.
 As shown in Fig. 28, recording operations such as start and stop generate a plurality of recording files in one examination, given file names such as "File I" and "File II". Each recording file carries, as information on the conditions corresponding to the recording operations, the recognition processing result that triggered the start or continuation of recording and the times at which recording was started, continued, or stopped. The information on the corresponding condition is information on the recognition processing result that triggered the start, continuation, or stop of recording; these are preferably attached to the recording file as so-called tags. File I is tagged with "site", relating to detection in the first recognition processing result, and carries the recording start and stop times t1 and t2 corresponding to the detection and non-detection of the site in the first recognition processing result. File II is tagged with "lesion / severe / treatment tool / bleeding point", relating to the detections and positive determinations of the first recognition processing result and the third to fifth recognition processing results. It carries the recording start time t3, corresponding to the positive lesion or severity determination in the fifth or third recognition processing result, and the recording stop time t4, corresponding to the negative bleeding-point determination in the fourth recognition processing result.
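 For illustration, the tag layout of Fig. 28 could be represented as follows (file names, keys, and wording are hypothetical; the embodiment only requires that the triggering results and the start/stop times be attached to each recording file):

    recording_files = [
        {"name": "File I",
         "tags": ["site"],                                  # first recognition result
         "events": [("t1", "start", "site detected"),
                    ("t2", "stop",  "site not detected")]},
        {"name": "File II",
         "tags": ["lesion", "severe", "treatment tool", "bleeding point"],
         "events": [("t3", "start", "lesion/severe judged positive"),
                    ("t4", "stop",  "bleeding point judged negative")]},
    ]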
 In addition to controlling the recording operation for a movie of recognition target images on the basis of plural types of recognition processing results, the recording operation may be controlled on the basis of other information about the endoscopic examination. As other information, for example, when a means for displaying the shape of the endoscope is used, recording can be started when the endoscope tip 12d is at a preset location, as identified from the endoscope shape. The preset location is, for example, a location where a lesion was found in the past according to the patient data; this prevents forgetting to record the course of a lesion. Depending on the case, the recording operation may also be controlled so as to record a specific recognition target image, rather than following only the recognition processing results. For example, if all oxygen saturation images are to be recorded, recording may be set to start when the fourth recognition target image is being acquired and to stop when acquisition switches to another recognition target image.
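 The oxygen-saturation example at the end of this paragraph amounts to keying the recording operation to the image type rather than to a recognition result. A minimal sketch, assuming the hypothetical label "fourth" marks the fourth recognition target image:

    def control_by_image_type(current_type, recording):
        # Record every oxygen saturation frame: start while the fourth
        # recognition target image is being acquired, stop on any other type.
        if current_type == "fourth" and not recording:
            return "start"
        if current_type != "fourth" and recording:
            return "stop"
        return "keep"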
 In addition to displaying the recognition target image, the display control unit 57 may superimpose information on the recognition processing result onto the recognition target image to form a display image and control the display to show it. For example, the display of a severe area, which is the third recognition processing result, may be superimposed on the first recognition target image as text, graphics, or other information and shown on the display 18. Information on the recording operation may also be superimposed on the recognition target image. For example, an indicator showing that recording is in progress may be provided on part of the display 18, blinking red to draw attention while recording is running and turning off while it is not.
 Next, the flow of processing for controlling the movie recording operation performed by the processor device 16, which serves as the image processing device, or by the endoscope system 10 will be described along the flowchart shown in Fig. 29. With the illumination light of the first-A emission pattern and the second-B emission pattern, plural types of recognition target images — the first recognition target image, the fifth recognition target image, and the third recognition target image — are acquired (step ST110). The display control unit continuously shows the first recognition target image among them on the display 18. Image recognition processing is performed on the plural types of recognition target images in parallel, per type of recognition target image (step ST120). Thus, specific-site detection is performed on the first recognition target image, lesion determination on the fifth recognition target image, and severity determination on the third recognition target image, each independently. The recognition result acquisition unit 73 acquires the plural recognition processing results (step ST130). If one or more of the plural types of recognition processing results is a positive "detection" or positive determination (Y in step ST140), the recording control unit 74 starts recording (step ST150). If there is no positive detection or determination (N in step ST140), the flow returns to the acquisition of recognition target images.
 After recording starts, plural types of recognition target images continue to be acquired (step ST160). Image recognition processing is performed on the plural types of recognition target images in parallel, per type of recognition target image (step ST170). The recognition result acquisition unit 73 acquires the plural recognition processing results (step ST180). If none of the plural types of recognition processing results is a positive "detection" or positive determination (Y in step ST190), the recording control unit 74 stops recording (step ST200). If one or more positive detections or determinations remain (N in step ST190), the flow returns to the acquisition of recognition target images and recording continues.
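 Put together, steps ST110 to ST200 form a single acquire-recognize-decide loop. The following sketch assumes placeholder callables (acquire_images, recognize, recorder) standing in for the acquisition, recognition, and recording units; it is one way to realize the flowchart, not the device's actual implementation:

    def run_recording_control(acquire_images, recognize, recorder):
        recording = False
        while True:
            images = acquire_images()                       # ST110 / ST160
            # ST120 / ST170: shown sequentially here; the device runs the
            # recognizers in parallel, per image type.
            results = [recognize(img) for img in images]
            positive = any(r["positive"] for r in results)  # ST130-140 / ST180-190
            if not recording and positive:
                recorder.start()                            # ST150
                recording = True
            elif recording and not positive:
                recorder.stop()                             # ST200
                recording = False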
 In the embodiments and modifications above, the processor device 16 functions as the image processing device, but an image processing device including the image processing unit 56 may be provided separately from the processor device 16. In addition, as shown in Fig. 30, the image processing unit 56 can be provided in a diagnosis support device 911 that acquires RAW images captured with the endoscope 12, for example directly from the endoscope system 10 or indirectly from a PACS (Picture Archiving and Communication Systems) 910. Further, as shown in Fig. 31, the image processing unit 56 can be provided in a medical service support device 930 connected via a network 926 to various examination devices including the endoscope system 10, such as a first examination device 921, a second examination device 922, ..., and a K-th examination device 923.
 Some or all of the embodiments and modifications above can be combined as desired. In the embodiments and modifications above, the endoscope 12 is a so-called flexible endoscope having a flexible insertion part 12a, but the present invention is also suitable when a capsule endoscope swallowed by the observation target or a rigid endoscope (laparoscope) used in surgery or the like is employed.
 The embodiments and modifications above include a method of operating an image processing device that comprises an image processor and performs image recognition processing based on images obtained by photographing an observation target with an endoscope, the method comprising: an image acquisition step of acquiring plural types of recognition target images based on the images; a display control step of performing control to continuously show at least one type of the recognition target images on a display; an image recognition processing step of performing the image recognition processing on the plural types of recognition target images in parallel, per type of recognition target image; a recognition processing result acquisition step of acquiring, per type of recognition target image, the recognition processing results obtained by the image recognition processing; and a recording control step of controlling, when recording a movie of at least one type of the recognition target images, the recording operation based on the recognition processing results of all types of recognition target images.
 The embodiments and modifications above also include a program for an image processing device, installed in an image processing device that comprises an image processor and performs image recognition processing based on images obtained by photographing an observation target with an endoscope, the program causing a computer to realize: an image acquisition function of acquiring plural types of recognition target images based on the images; a display control function of performing control to continuously show at least one type of the recognition target images on a display; an image recognition processing function of performing the image recognition processing on the plural types of recognition target images in parallel, per type of recognition target image; a recognition processing result acquisition function of acquiring, per type of recognition target image, the recognition processing results obtained by the image recognition processing; and a recording control function of controlling, when recording a movie of at least one type of the recognition target images, the recording operation based on the recognition processing results of all types of recognition target images.
 In the embodiments above, the hardware structure of the processing units that execute various kinds of processing, such as the central control unit 51, the image acquisition unit 52, the DSP 53, the noise reduction unit 54, the conversion unit 55, the image processing unit 56, and the display control unit 57 included in the processor device 16 serving as the image processing device, is any of the following processors: a CPU (Central Processing Unit), which is a general-purpose processor that executes software (programs) to function as various processing units; a programmable logic device (PLD), such as an FPGA (Field Programmable Gate Array), whose circuit configuration can be changed after manufacture; and a dedicated electric circuit, which is a processor having a circuit configuration designed exclusively for executing specific processing.
 One processing unit may be composed of one of these various processors, or of a combination of two or more processors of the same or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA). A plurality of processing units may also be composed of one processor. As a first example of composing a plurality of processing units with one processor, as typified by computers such as clients and servers, one processor is composed of a combination of one or more CPUs and software, and this processor functions as the plurality of processing units. As a second example, as typified by a system on chip (SoC), a processor is used that realizes the functions of the entire system including the plurality of processing units with a single IC (Integrated Circuit) chip. In this way, the various processing units are configured, as a hardware structure, using one or more of the various processors above.
 More specifically, the hardware structure of these various processors is electric circuitry in which circuit elements such as semiconductor elements are combined.
 The present invention can be used not only in endoscope systems that acquire endoscopic images, processor devices, and other related devices, but also in systems or devices that acquire medical images (including movies) other than endoscopic images. For example, the present invention is applicable to ultrasonic examination devices, X-ray imaging devices (including CT (Computed Tomography) examination devices and mammography devices), MRI (magnetic resonance imaging) devices, and the like.
 10 Endoscope system
 12 Endoscope
 12a Insertion part
 12b Operation part
 12c Bending part
 12d Tip part
 12e Angle knob
 12f Scope button No. 1
 12g Scope button No. 2
 12h Zoom operation part
 14 Light source device
 16 Processor device
 18 Display
 19 Keyboard
 20 Light source unit
 20a V-LED
 20b B-LED
 20c G-LED
 20d R-LED
 22 Light source processor
 30a Illumination optical system
 30b Imaging optical system
 41 Light guide
 42 Illumination lens
 43 Objective lens
 44 Zoom lens
 45 Image sensor
 46 Imaging processor
 51 Central control unit
 52 Image acquisition unit
 53 DSP
 54 Noise reduction unit
 55 Conversion unit
 56 Image processing unit
 57 Display control unit
 61 Normal observation image processing unit
 62 Special observation image processing unit
 63 Diagnosis support image processing unit
 71 Recognition target image generation unit
 72 Image recognition processing unit
 73 Recognition result acquisition unit
 74 Recording control unit
 75 Display image generation unit
 81 First recognition target image generation unit
 82 Second recognition target image generation unit
 83 Third recognition target image generation unit
 84 Fourth recognition target image generation unit
 84a Signal ratio calculation unit for oxygen saturation
 84b Oxygen saturation calculation table
 84c Oxygen saturation calculation unit
 84d Oxygen saturation image generation unit
 85 Fifth recognition target image generation unit
 85a Signal ratio calculation unit for color difference expansion
 85b Color difference expansion processing unit
 85c Color difference expansion image generation unit
 86 n-th recognition target image generation unit
 91 First image recognition unit
 92 Second image recognition unit
 93 Third image recognition unit
 94 Fourth image recognition unit
 95 Fifth image recognition unit
 96 n-th image recognition unit
 101 First correspondence information acquisition unit
 102 Second correspondence information acquisition unit
 103 Third correspondence information acquisition unit
 104 Fourth correspondence information acquisition unit
 105 Fifth correspondence information acquisition unit
 106 n-th correspondence information acquisition unit
 910 PACS
 911 Diagnosis support device
 921 First examination device
 922 Second examination device
 923 K-th examination device
 926 Network
 930 Medical service support device
 A1, A2 Range
 ELx, EL1, EL2, EL3, EL4, ELy Oxygen saturation contour lines
 P1 First illumination period
 P2 Second illumination period
 FL Frame
 L1 First illumination light
 L2, L2a, L2b, L2c, L2d Second illumination light
 Q1, Q2, Q3, Q4, Q5 Emission cycle
 SP1, SP2, SP3, SP4 Spectral spectra for second illumination light
 IM1 First recognition target image
 IM2a Fifth recognition target image
 IM2b Third recognition target image
 IM2c Second recognition target image
 t1 to t6 Time
 ST110 to ST200 Steps
 

Claims (20)

  1.  An image processing device that performs image recognition processing based on images obtained by photographing an observation target with an endoscope, the image processing device comprising
     an image processor configured to:
     acquire plural types of recognition target images based on the images;
     perform control to continuously show at least one type of the recognition target images on a display;
     perform the image recognition processing on the plural types of recognition target images in parallel, per type of recognition target image;
     acquire recognition processing results obtained by the image recognition processing; and
     control, based on the acquired recognition processing results of all the types of recognition target images, the recording operation when recording a movie of at least one type of the recognition target images.
  2.  The image processing device according to claim 1, wherein the recognition processing results include information on whether or not the recognition target image satisfies a preset condition, and
     the image processor starts or continues the recording when one or more of the recognition processing results of all the types include the information that the condition is satisfied.
  3.  The image processing device according to claim 2, wherein the image processor stops the recording when the recognition processing results of all the types include the information that none of them satisfies the condition and the recording is continuing.
  4.  The image processing device according to claim 3, wherein, when recording the movie, the image processor attaches to the movie information on the condition corresponding to the start, continuation, or stop of the recording.
  5.  The image processing device according to any one of claims 2 to 4, wherein the image processor acquires in advance correspondence information that associates an observation target satisfying the condition with a recognition target image obtained by photographing that observation target, and
     performs the image recognition processing on a newly acquired recognition target image based on the correspondence information.
  6.  The image processing device according to claim 5, wherein the image processor acquires the correspondence information per type of recognition target image, and
     performs the image recognition processing on a newly acquired recognition target image based on the correspondence information of the corresponding type.
  7.  The image processing device according to any one of claims 2 to 6, wherein the condition is that the image processor has detected a specific site or an object other than a living body, or has determined that a region in a specific state is included.
  8.  The image processing device according to any one of claims 1 to 7, wherein the plural types of recognition target images include at least a first recognition target image and a second recognition target image of mutually different types, and
     the image processor performs first image recognition processing on the first recognition target image and performs, on the second recognition target image, second image recognition processing different from the first image recognition processing.
  9.  The image processing device according to claim 8, wherein the first image recognition processing is performed for detecting a specific site or an object other than a living body in the first recognition target image.
  10.  The image processing device according to claim 8 or 9, wherein the second image recognition processing is performed for determining that the second recognition target image includes a region in a specific state.
  11.  The image processing device according to any one of claims 1 to 10, wherein the recognition target images are generated by performing enhancement processing on the images, and
     the image processor distinguishes the types of the recognition target images by the presence or absence or the type of the enhancement processing, and acquires each of the distinguished recognition target images as one type of recognition target image.
  12.  The image processing device according to claim 11, wherein the types of enhancement processing are color expansion processing and/or structure enhancement processing, and
     the image processor acquires each of the recognition target images generated by performing mutually different types of enhancement processing as one type of recognition target image.
  13.  An endoscope system comprising:
     the image processing device according to any one of claims 1 to 12; and
     a light source unit that emits illumination light with which the observation target is irradiated.
  14.  The endoscope system according to claim 13, wherein the image processor acquires, each as one type of recognition target image, the images obtained by photographing the observation target illuminated with each of a plurality of illumination lights having mutually different spectral spectra emitted by the light source unit.
  15.  The endoscope system according to claim 13, wherein the image processor acquires, as one type of recognition target image, the image obtained by photographing the observation target illuminated with white illumination light emitted by the light source unit.
  16.  The endoscope system according to claim 13, wherein the image processor acquires, as one type of recognition target image, the image obtained by photographing the observation target illuminated with illumination light including narrow-band light of a preset wavelength band emitted by the light source unit.
  17.  The endoscope system according to any one of claims 13 to 16, wherein the light source unit repeatedly emits each of a plurality of illumination lights having mutually different spectral spectra in a preset order.
  18.  The endoscope system according to claim 13 as dependent on claim 8, wherein the light source unit emits first illumination light and second illumination light having mutually different spectral spectra, the endoscope system further comprising:
     a light source processor that emits the first illumination light in a first emission pattern during a first illumination period, emits the second illumination light in a second emission pattern during a second illumination period, and switches between the first illumination light and the second illumination light; and
     an image sensor that outputs a first image signal obtained by photographing the observation target illuminated with the first illumination light and a second image signal obtained by photographing the observation target illuminated with the second illumination light,
     wherein the image processor performs the first image recognition processing on the first recognition target image based on the first image signal and the second image recognition processing on the second recognition target image based on the second image signal, and
     controls the recording operation based on a first image recognition result of the first image recognition processing and a second image recognition result of the second image recognition processing.
  19.  A method of operating an image processing device that performs image recognition processing based on images obtained by photographing an observation target with an endoscope, the method comprising:
     an image acquisition step of acquiring plural types of recognition target images based on the images;
     a display control step of performing control to continuously show at least one type of the recognition target images on a display;
     an image recognition processing step of performing the image recognition processing on the plural types of recognition target images in parallel, per type of recognition target image;
     a recognition processing result acquisition step of acquiring, per type of recognition target image, the recognition processing results obtained by the image recognition processing; and
     a recording control step of controlling, when recording a movie of at least one type of the recognition target images, the recording operation based on the acquired recognition processing results of all the types of recognition target images.
  20.  A program for an image processing device, installed in an image processing device that performs image recognition processing based on images obtained by photographing an observation target with an endoscope, the program causing a computer to realize:
     an image acquisition function of acquiring plural types of recognition target images based on the images;
     a display control function of performing control to continuously show at least one type of the recognition target images on a display;
     an image recognition processing function of performing the image recognition processing on the plural types of recognition target images in parallel, per type of recognition target image;
     a recognition processing result acquisition function of acquiring, per type of recognition target image, the recognition processing results obtained by the image recognition processing; and
     a recording control function of controlling, when recording a movie of at least one type of the recognition target images, the recording operation based on the acquired recognition processing results of all the types of recognition target images.
PCT/JP2021/010864 2020-09-15 2021-03-17 Image processing device, endoscope system, operation method for image processing device, and program for image processing device WO2022059233A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020154464A JP2023178526A (en) 2020-09-15 2020-09-15 Image processing device, endoscope system, operation method of image processing device, and image processing device program
JP2020-154464 2020-09-15

Publications (1)

Publication Number Publication Date
WO2022059233A1 true WO2022059233A1 (en) 2022-03-24

Family

ID=80776799

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/010864 WO2022059233A1 (en) 2020-09-15 2021-03-17 Image processing device, endoscope system, operation method for image processing device, and program for image processing device

Country Status (2)

Country Link
JP (1) JP2023178526A (en)
WO (1) WO2022059233A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006271871A (en) * 2005-03-30 2006-10-12 Olympus Medical Systems Corp Image processor for endoscope
JP2008237394A (en) * 2007-03-26 2008-10-09 Olympus Medical Systems Corp Endoscope system
JP2013188364A (en) * 2012-03-14 2013-09-26 Fujifilm Corp Endoscope system, processor device therefor, and exposure amount control method therein
WO2017216922A1 (en) * 2016-06-16 2017-12-21 オリンパス株式会社 Image processing device and image processing method
WO2020165978A1 (en) * 2019-02-13 2020-08-20 オリンパス株式会社 Image recording device, image recording method, and image recording program


Also Published As

Publication number Publication date
JP2023178526A (en) 2023-12-18

Similar Documents

Publication Publication Date Title
JP6785941B2 (en) Endoscopic system and how to operate it
JP6785948B2 (en) How to operate medical image processing equipment, endoscopic system, and medical image processing equipment
JP6285383B2 (en) Image processing apparatus, endoscope system, operation method of image processing apparatus, and operation method of endoscope system
WO2018159083A1 (en) Endoscope system, processor device, and endoscope system operation method
JPWO2010122884A1 (en) Fluorescence imaging apparatus and method of operating fluorescence imaging apparatus
JP7335399B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, ENDOSCOPE SYSTEM, AND METHOD OF OPERATION OF MEDICAL IMAGE PROCESSING APPARATUS
JP2023076644A (en) endoscope system
JP2022525113A (en) Near-infrared fluorescence imaging and related systems and computer program products for blood flow and perfusion visualization
JP2020065685A (en) Endoscope system
JP6924837B2 (en) Medical image processing system, endoscopy system, diagnostic support device, and medical service support device
US20230237659A1 (en) Image processing apparatus, endoscope system, operation method of image processing apparatus, and non-transitory computer readable medium
US20230141302A1 (en) Image analysis processing apparatus, endoscope system, operation method of image analysis processing apparatus, and non-transitory computer readable medium
JP7163386B2 (en) Endoscope device, method for operating endoscope device, and program for operating endoscope device
US20190246874A1 (en) Processor device, endoscope system, and method of operating processor device
WO2022059233A1 (en) Image processing device, endoscope system, operation method for image processing device, and program for image processing device
JP7386347B2 (en) Endoscope system and its operating method
JP7214886B2 (en) Image processing device and its operating method
WO2021006121A1 (en) Image processing device, endoscope system, and operation method for image processing device
JP6285373B2 (en) Endoscope system, processor device, and operation method of endoscope system
WO2022009478A1 (en) Image processing device, endoscope system, operation method for image processing device, and program for image processing device
JP7090705B2 (en) Endoscope device, operation method and program of the endoscope device
JP7090706B2 (en) Endoscope device, operation method and program of the endoscope device
WO2022210508A1 (en) Processor device, medical image processing device, medical image processing system, and endoscopic system
WO2021210331A1 (en) Image processing device and operating method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21868921

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21868921

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP