WO2023089717A1 - Information processing device, information processing method, and recording medium - Google Patents

Information processing device, information processing method, and recording medium

Info

Publication number
WO2023089717A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
lesion
still images
information processing
images
Prior art date
Application number
PCT/JP2021/042366
Other languages
French (fr)
Japanese (ja)
Inventor
憲一 上條
翔太 大塚
達 木村
敦 丸亀
雅弘 西光
亮作 志野
Original Assignee
日本電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社
Priority to PCT/JP2021/042366
Publication of WO2023089717A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B 1/045 Control thereof

Definitions

  • This disclosure relates to processing of information regarding endoscopy.
  • Patent Literature 1 proposes a method of selecting and outputting an image of interest from a group of time-series images obtained by capturing the inside of a lumen of a subject in time series.
  • One object of the present disclosure is to provide an information processing device capable of selecting, from the huge number of images taken during an endoscopy, images to be attached to reports and images to be used for AI qualitative judgment.
  • In one aspect, an information processing device includes: endoscopic image acquisition means for acquiring an endoscopic image; still image acquisition means for acquiring a plurality of still images of lesions included in the endoscopic image; identity determination means for determining the identity of the lesions included in the plurality of still images; and selection means for selecting, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.
  • In another aspect, an information processing method includes: acquiring an endoscopic image; acquiring a plurality of still images of lesions included in the endoscopic image; determining the identity of the lesions included in the plurality of still images; and selecting, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.
  • In yet another aspect, a recording medium records a program that causes a computer to acquire an endoscopic image, acquire a plurality of still images of lesions included in the endoscopic image, determine the identity of the lesions included in the plurality of still images, and select, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.
  • FIG. 1 is a block diagram showing a schematic configuration of an endoscopy system.
  • FIG. 2 is a block diagram showing the hardware configuration of the image processing device.
  • FIG. 3 is a block diagram showing the functional configuration of the image processing device.
  • FIG. 4 schematically shows the structure of the large intestine.
  • FIG. 5 shows a display example of an inspection report.
  • FIG. 6 shows another display example of an inspection report.
  • FIG. 7 is a flowchart of display processing by the image processing device.
  • FIG. 8 is a block diagram showing the functional configuration of the information processing device of the second embodiment.
  • FIG. 9 is a flowchart of processing by the information processing device of the second embodiment.
  • FIG. 1 shows a schematic configuration of an endoscopy system 100.
  • The endoscopy system 100 acquires images captured by the examiner and images captured by an AI during an examination (including treatment) using an endoscope. If there are a plurality of images of the same lesion among the images taken by the examiner or the AI, the endoscopy system 100 selects and displays a representative image from among them.
  • The endoscopy system 100 of this embodiment is characterized in that the image that best represents a lesion is selected and displayed as the representative image. This saves the examiner the trouble of selecting an image to attach to the examination report and enables efficient creation of the examination report.
  • The endoscopy system 100 mainly includes an image processing device 1, a display device 2, and an endoscope 3 connected to the image processing device 1.
  • The image processing apparatus 1 acquires from the endoscope 3 the images captured by the endoscope 3 during the endoscopy (hereinafter also referred to as "endoscopic images Ic") and causes the display device 2 to display display data for confirmation by the examiner performing the endoscopy.
  • The image processing apparatus 1 acquires, as the endoscopic image Ic, a moving image of the interior of an organ captured by the endoscope 3 during the endoscopy.
  • When the examiner finds a lesion during the endoscopy, the examiner operates the endoscope 3 to input an instruction to photograph the lesion position.
  • The image processing apparatus 1 generates a lesion image showing the lesion position based on the imaging instruction from the examiner.
  • Specifically, the image processing apparatus 1 generates a lesion image, which is a still image, from the endoscopic image Ic, which is a moving image, based on the imaging instruction of the examiner. At this time, if the generated lesion image is not clear, the image processing apparatus 1 may notify the examiner to input the imaging instruction again.
  • The display device 2 is a display or the like that performs a predetermined display based on a display signal supplied from the image processing device 1.
  • The endoscope 3 mainly includes an operation unit 36 for the examiner to input air supply, water supply, angle adjustment, imaging instructions and the like, a flexible shaft 37 which is inserted into the organ to be examined of the subject, a distal end portion 38 containing an imaging unit such as an ultra-compact imaging device, and a connecting portion 39 for connecting to the image processing apparatus 1.
  • The examination target is not limited to the large intestine and may be the gastrointestinal tract (digestive organs) such as the stomach, esophagus, small intestine, and duodenum.
  • The part to be detected in the endoscopy is not limited to a lesion site, and may be any part that the examiner needs to pay attention to (also called a "part of attention").
  • Such parts of attention include a lesion site, an inflamed site, a surgical scar or other cut site, a site with folds or projections, and a site on the wall surface of the lumen where the distal end portion 38 of the endoscope 3 easily comes into contact (easily gets stuck).
  • FIG. 2 shows the hardware configuration of the image processing apparatus 1.
  • The image processing apparatus 1 mainly includes a processor 11, a memory 12, an interface 13, an input unit 14, a light source unit 15, a sound output unit 16, and a database (hereinafter referred to as "DB") 17.
  • The processor 11 executes predetermined processing by executing a program or the like stored in the memory 12.
  • The processor 11 is a processor such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or a TPU (Tensor Processing Unit). Note that the processor 11 may be composed of a plurality of processors. The processor 11 is an example of a computer.
  • The memory 12 is composed of various volatile memories used as working memory, such as RAM (Random Access Memory) and ROM (Read Only Memory), and non-volatile memory that stores information necessary for the processing of the image processing apparatus 1.
  • The memory 12 may include an external storage device such as a hard disk connected to or built into the image processing apparatus 1, or may include a storage medium such as a removable flash memory or disk medium.
  • The memory 12 stores a program for the image processing apparatus 1 to execute each process in this embodiment.
  • The memory 12 temporarily stores a series of endoscopic images Ic captured by the endoscope 3 during the endoscopic examination.
  • The memory 12 also temporarily stores lesion images captured based on imaging instructions from the examiner during the endoscopy. These images are stored in the memory 12 in association with, for example, subject identification information (for example, a patient ID) and time stamp information.
  • The interface 13 performs interface operations between the image processing device 1 and external devices.
  • The interface 13 supplies the display data Id generated by the processor 11 to the display device 2.
  • The interface 13 supplies the illumination light generated by the light source unit 15 to the endoscope 3.
  • The interface 13 also supplies the processor 11 with electrical signals representing the endoscopic image Ic supplied from the endoscope 3.
  • The interface 13 may be a communication interface such as a network adapter for performing wired or wireless communication with an external device, or may be a hardware interface conforming to USB (Universal Serial Bus), SATA (Serial AT Attachment), or the like.
  • The input unit 14 generates input signals based on the examiner's operations.
  • The input unit 14 is, for example, a button, a touch panel, a remote controller, or a voice input device.
  • The light source unit 15 generates light to be supplied to the distal end portion 38 of the endoscope 3.
  • The light source unit 15 may also incorporate a pump or the like for sending out water or air to be supplied to the endoscope 3.
  • The sound output unit 16 outputs sound under the control of the processor 11.
  • The DB 17 stores endoscopic images and lesion information acquired in the subject's past endoscopic examinations.
  • The lesion information includes lesion images and related information.
  • The DB 17 may include an external storage device such as a hard disk connected to or built into the image processing apparatus 1, or may include a storage medium such as a removable flash memory. Note that instead of providing the DB 17 within the endoscopy system 100, the DB 17 may be provided in an external server or the like, and the related information may be acquired from the server through communication.
  • FIG. 3 is a block diagram showing the functional configuration of the image processing apparatus 1.
  • The image processing device 1 functionally includes a position detection unit 21, an inspection data generation unit 22, an AI determination unit 23, and a display data generation unit 24.
  • An endoscopic image Ic is input from the endoscope 3 to the image processing device 1.
  • The endoscopic image Ic is input to the position detection unit 21, the inspection data generation unit 22, and the AI determination unit 23.
  • The position detection unit 21 detects the position of the endoscope 3, that is, the imaging position of the endoscopic image, based on the endoscopic image Ic. Specifically, the position detection unit 21 detects the imaging position by image analysis of the input endoscopic image Ic.
  • The imaging position may be three-dimensional coordinates within the organ to be examined, but it is sufficient if it is information indicating at least one of a plurality of parts of the organ to be examined. For example, as shown in FIG. 4, the large intestine is composed of multiple parts such as the cecum, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum. Therefore, when the examination target is the large intestine, the position detection unit 21 only needs to be able to detect at least which of these parts the imaging position belongs to.
  • Specifically, the position detection unit 21 can estimate which part of the large intestine the current imaging position belongs to based on features included in the endoscopic image, such as the pattern of the mucous membrane, the presence or absence of folds, the shape of the folds, and the number of folds passed as the endoscope 3 moves. The position detection unit 21 may also estimate the moving speed of the endoscope 3 based on the endoscopic image and calculate the moving distance in the large intestine from the moving speed and elapsed time to estimate the imaging position. Furthermore, the position detection unit 21 may detect the imaging position using the insertion length of the endoscope 3 inserted into the organ in addition to the image analysis of the endoscopic image. In this embodiment, the method of detecting the imaging position in the organ to be examined is not limited to a specific method. The position detection unit 21 outputs the detected imaging position to the inspection data generation unit 22.
  • Patient information of the patient who is the subject is input to the image processing apparatus 1 through the input unit 14.
  • The patient information is information that uniquely identifies a patient, and may be the patient name, an ID uniquely assigned to each patient, or a personal identification number such as My Number.
  • The patient information is input to the inspection data generation unit 22.
  • The inspection data generation unit 22 generates inspection data for the imaging position based on the endoscopic image Ic and the imaging position detected by the position detection unit 21, and temporarily stores it in the memory 12. Specifically, when the examiner operates the endoscope 3, the inspection data generation unit 22 generates inspection data including the endoscopic image Ic captured by the endoscope 3 and imaging information such as the patient name, patient ID, examination date and time, and imaging position, and stores it in the memory 12. The inspection data generation unit 22 outputs the generated inspection data to the AI determination unit 23 and the display data generation unit 24.
  • The AI determination unit 23 performs image analysis based on the endoscopic image Ic and determines the presence or absence of a lesion.
  • The AI determination unit 23 uses an image recognition model or the like prepared in advance to detect lesion-like portions included in the endoscopic image.
  • When the AI determination unit 23 detects a lesion-like portion, it outputs its position (the lesion position) and a lesion-likeness score as the determination result.
  • When the AI determination unit 23 does not detect a lesion-like portion, it outputs a determination result indicating that there is no lesion.
  • The AI determination unit 23 acquires, from the inspection data generated by the inspection data generation unit 22, the lesion images, which are still images generated based on the imaging instructions of the examiner. The AI determination unit 23 then groups images relating to the same lesion from among the plurality of lesion images. Specifically, the AI determination unit 23 performs pattern matching of the lesion sites on a plurality of lesion images captured within a predetermined period of time and determines whether the images relate to the same lesion. When the AI determination unit 23 determines that images relate to the same lesion, it groups those images. Note that the AI determination unit 23 may determine whether images relate to the same lesion by using the above pattern matching together with the lesion position information.
  • The AI determination unit 23 then selects a representative image from among the grouped images.
  • The representative image is the image that best represents the lesion within the grouped image group.
  • The AI determination unit 23 selects the representative image using criteria such as those exemplified below.
  • In the first example, the AI determination unit 23 selects, as the representative image, the image in which the lesion appears largest among the grouped images.
  • In the second example, the AI determination unit 23 selects, as the representative image, the image in which the lesion is closest to the center among the grouped images.
  • In the third example, the AI determination unit 23 selects, as the representative image, the image that is most in focus among the grouped images.
  • In the fourth example, the AI determination unit 23 selects, as the representative image, the image with the highest lesion-likeness score among the grouped images. Note that the AI determination unit 23 may select the representative image based on a combination of these criteria.
  • The representative image is not limited to one.
  • The examiner may photograph a lesion using different types of illumination light during an endoscopy. Therefore, for example, when the examiner switches the illumination light of the endoscope from white light to special light and photographs the same lesion, a representative image may be selected for each type of light.
  • The AI determination unit 23 outputs the representative image selected in this way to the display data generation unit 24.
  • The AI determination unit 23 may perform a qualitative determination on the representative image and output the result of the qualitative determination to the display data generation unit 24.
  • The lesion images are not limited to still images generated based on the imaging instructions of the examiner, and may be still images that the AI determination unit 23 generates by itself by estimating the lesion site based on the endoscopic image Ic.
  • The display data generation unit 24 generates the display data Id using the inspection data input from the inspection data generation unit 22 and the representative image data input from the AI determination unit 23, and outputs the display data Id to the display device 2.
  • FIG. 5 shows a display example of an inspection report.
  • In this example, a representative image of a patient's lesion and information related to the lesion (hereinafter referred to as "related information") are displayed.
  • Related information 41 and a lesion image 42 are displayed in the display area 40 in addition to basic information such as the patient name, patient ID, and examination date and time.
  • The related information 41 is information about the lesion included in the lesion image 42.
  • The related information 41 includes the lesion position, lesion size, macroscopic type (endoscopic findings), tissue type (pathological diagnosis result), treatment, and the like.
  • The lesion image 42 is the representative image selected by the AI determination unit 23 from the plurality of images grouped as corresponding to the same lesion. That is, the representative image is the image that best represents the lesion among the grouped images.
  • Related information 41 and lesion image 42 are displayed for each detected lesion.
  • The lesion image displayed in this way may be an image captured by the examiner or an image captured automatically by the AI determination unit 23.
  • In this way, the image processing apparatus 1 selects a representative image from among the plurality of images corresponding to the same lesion captured during the endoscopic examination, and attaches it to the examination report. This saves the examiner the trouble of selecting an image and makes it possible to create the examination report efficiently.
  • FIG. 6 shows another display example of an inspection report.
  • In this example, a plurality of still images determined to show the same lesion and a representative image selected from the plurality of still images are displayed in association with the imaging times of the endoscopic image Ic.
  • The display example of FIG. 6 is used in addition to the inspection report shown in FIG. 5 as necessary.
  • A plurality of still images and a representative image selected from the plurality of still images are displayed on a timeline indicating the elapsed time of the examination.
  • The image processing apparatus 1 groups images corresponding to the same lesion among the images captured between 14:10 and 14:20, and displays the grouped image group 51 and the representative image 53 selected from the image group 51 on the timeline. Similarly, the image processing apparatus 1 groups images corresponding to the same lesion among the images captured between 14:20 and 14:30, and displays the grouped image group 52 and the representative image 54 selected from the image group 52 on the timeline. By displaying the representative images in association with the imaging times and the groups of images corresponding to the same lesion in this way, the examiner can easily grasp, in chronological order, the images captured by the examiner and the images captured by the AI.
  • FIG. 7 is a flowchart of display processing by the image processing device 1. This processing is realized by the processor 11 shown in FIG. 2 executing a program prepared in advance and operating as each element shown in FIG. 3.
  • First, the inspection data generation unit 22 acquires the lesion images captured by the doctor (step S11). The AI determination unit 23 also captures lesion images on its own based on the endoscopic image Ic, and the inspection data generation unit 22 acquires the images captured by the AI determination unit 23 (step S12). Note that step S12 may be executed before step S11 or simultaneously with step S11.
  • Next, the AI determination unit 23 groups images relating to the same lesion from the plurality of lesion images captured within a predetermined period of time (step S13). The AI determination unit 23 then determines, from each grouped image group, the representative image that best represents the lesion (step S14). Furthermore, the AI determination unit 23 performs a qualitative determination on the lesion of the representative image (step S15).
  • The display data generation unit 24 then generates display data such as that illustrated in FIG. 5 using the inspection data generated by the inspection data generation unit 22, the representative image determined by the AI determination unit 23, and the qualitative determination result of the AI determination unit 23, and outputs it to the display device 2 (step S16).
  • The display device 2 displays the received display data (step S17). In this way, a display such as the display example of FIG. 5 is performed.
  • FIG. 8 is a block diagram showing the functional configuration of the information processing apparatus according to the second embodiment.
  • The information processing device 70 includes endoscopic image acquisition means 71, still image acquisition means 72, identity determination means 73, representative image selection means 74, and a display device 75.
  • FIG. 9 is a flowchart of processing by the information processing apparatus of the second embodiment.
  • The endoscopic image acquisition means 71 acquires an endoscopic image (step S71).
  • The still image acquisition means 72 acquires a plurality of still images of lesions included in the endoscopic image (step S72).
  • The identity determination means 73 determines the identity of the lesions included in the plurality of still images (step S73).
  • The representative image selection means 74 selects, from the plurality of still images determined to correspond to the same lesion, the representative image that best represents the lesion, and displays it on the display device 75 (step S74).
  • According to the information processing apparatus 70 of the second embodiment, it is possible to save the examiner the trouble of selecting images to attach to the examination report after the endoscopy, and to create the examination report efficiently.
  • Appendix 1: An information processing device comprising: endoscopic image acquisition means for acquiring an endoscopic image; still image acquisition means for acquiring a plurality of still images of lesions included in the endoscopic image; identity determination means for determining the identity of the lesions included in the plurality of still images; and selection means for selecting, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.
  • Appendix 2 The information processing apparatus according to appendix 1, wherein the representative image is a still image showing the largest lesion among the plurality of still images determined to correspond to the same lesion.
  • Appendix 3 The information processing apparatus according to appendix 1, wherein the representative image is a still image in which the lesion is most centered among the plurality of still images determined to correspond to the same lesion.
  • Appendix 4 The information processing apparatus according to appendix 1, wherein the representative image is the most focused still image among the plurality of still images determined to correspond to the same lesion.
  • Appendix 5 comprising determination means for determining the likelihood of a lesion for the plurality of still images;
  • Appendix 6 display means for displaying an image showing a plurality of still images determined to correspond to the same lesion and a representative image selected from the plurality of still images in association with the imaging time of the endoscopic image;
  • Appendix 7 The information processing apparatus according to any one of appendices 1 to 6, wherein the plurality of still images include still images captured based on an imaging instruction by an examiner.
  • Appendix 8 The information processing apparatus according to any one of appendices 1 to 6, wherein the plurality of still images include still images automatically captured by AI imaging means that detects and captures a lesion in the endoscopic image.
  • A recording medium recording a program for causing a computer to select a representative image that best represents a lesion from a plurality of still images determined to correspond to the same lesion.
  • Reference Signs List: 1 image processing device; 2 display device; 3 endoscope; 11 processor; 12 memory; 17 database (DB); 21 position detection unit; 22 inspection data generation unit; 23 AI determination unit; 24 display data generation unit; 100 endoscopy system

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Endoscopes (AREA)

Abstract

An endoscopic image acquisition means acquires an endoscopic image. A still image acquisition means acquires a plurality of still images in each of which an image of a lesion included in the endoscopic image is captured. An identity determination means determines the identity of lesions respectively included in the plurality of still images. A representative image selection means selects a representative image in which the lesion is best shown among a plurality of still images that are determined as corresponding to the same lesion and then displays the selected image on a display device.

Description

Information processing device, information processing method, and recording medium

This disclosure relates to processing of information regarding endoscopy.

During an endoscopy, a doctor may take dozens of images, and when many images are taken, it takes time to select which of them to attach to the examination report. In addition, there are AIs (Artificial Intelligence) that make qualitative judgments on areas suspected of being lesions, but if the AI performs judgment processing every time the doctor photographs a suspected lesion, checking the judgment results becomes cumbersome and the load of the AI processing also increases. For example, Patent Literature 1 proposes a method of selecting and outputting an image of interest from a group of time-series images obtained by capturing the inside of a lumen of a subject in time series.

JP 2011-024727 A

When a plurality of images taken by the doctor relate to the same lesion at the same site, it is desirable to narrow down the number of images before using them as images attached to the report or as images for qualitative judgment.

One object of the present disclosure is to provide an information processing device capable of selecting, from the huge number of images taken during an endoscopy, images to be attached to reports and images to be used for AI qualitative judgment.
In one aspect of the present disclosure, an information processing device includes:
endoscopic image acquisition means for acquiring an endoscopic image;
still image acquisition means for acquiring a plurality of still images of lesions included in the endoscopic image;
identity determination means for determining the identity of the lesions included in the plurality of still images; and
selection means for selecting, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.

In another aspect of the present disclosure, an information processing method includes:
acquiring an endoscopic image;
acquiring a plurality of still images of lesions included in the endoscopic image;
determining the identity of the lesions included in the plurality of still images; and
selecting, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.

In yet another aspect of the present disclosure, a recording medium records a program that causes a computer to execute processing of:
acquiring an endoscopic image;
acquiring a plurality of still images of lesions included in the endoscopic image;
determining the identity of the lesions included in the plurality of still images; and
selecting, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.
According to the present disclosure, it is possible to select, from among the images taken during an endoscopy, images to be attached to a report and images to be used for AI qualitative judgment.

FIG. 1 is a block diagram showing a schematic configuration of an endoscopy system. FIG. 2 is a block diagram showing the hardware configuration of the image processing device. FIG. 3 is a block diagram showing the functional configuration of the image processing device. FIG. 4 schematically shows the structure of the large intestine. FIG. 5 shows a display example of an inspection report. FIG. 6 shows another display example of an inspection report. FIG. 7 is a flowchart of display processing by the image processing device. FIG. 8 is a block diagram showing the functional configuration of the information processing device of the second embodiment. FIG. 9 is a flowchart of processing by the information processing device of the second embodiment.
Preferred embodiments of the present disclosure will be described below with reference to the drawings.

<First embodiment>

[System configuration]

FIG. 1 shows a schematic configuration of an endoscopy system 100. The endoscopy system 100 acquires images captured by the examiner and images captured by an AI during an examination (including treatment) using an endoscope. If there are a plurality of images of the same lesion among the images taken by the examiner or the AI, the endoscopy system 100 selects and displays a representative image from among them. In particular, the endoscopy system 100 of this embodiment is characterized in that the image that best represents a lesion is selected and displayed as the representative image. This saves the examiner the trouble of selecting an image to attach to the examination report and enables efficient creation of the examination report.
As shown in FIG. 1, the endoscopy system 100 mainly includes an image processing device 1, a display device 2, and an endoscope 3 connected to the image processing device 1.

The image processing device 1 acquires from the endoscope 3 the images captured by the endoscope 3 during the endoscopy (hereinafter also referred to as "endoscopic images Ic") and causes the display device 2 to display display data for confirmation by the examiner performing the endoscopy. Specifically, the image processing device 1 acquires, as the endoscopic image Ic, a moving image of the interior of an organ captured by the endoscope 3 during the endoscopy. When the examiner finds a lesion during the endoscopy, the examiner operates the endoscope 3 to input an instruction to photograph the lesion position. The image processing device 1 generates a lesion image showing the lesion position based on the imaging instruction from the examiner. Specifically, the image processing device 1 generates a lesion image, which is a still image, from the endoscopic image Ic, which is a moving image, based on the imaging instruction of the examiner. At this time, if the generated lesion image is not clear, the image processing device 1 may notify the examiner to input the imaging instruction again.

The display device 2 is a display or the like that performs a predetermined display based on a display signal supplied from the image processing device 1.

The endoscope 3 mainly includes an operation unit 36 for the examiner to input air supply, water supply, angle adjustment, imaging instructions and the like, a flexible shaft 37 which is inserted into the organ to be examined of the subject, a distal end portion 38 containing an imaging unit such as an ultra-compact imaging device, and a connecting portion 39 for connecting to the image processing device 1.

In the following, the description is given mainly on the premise of an endoscopic examination of the large intestine, but the examination target is not limited to the large intestine and may be the gastrointestinal tract (digestive organs) such as the stomach, esophagus, small intestine, and duodenum.

In addition, the part to be detected in the endoscopy is not limited to a lesion site, and may be any part that the examiner needs to pay attention to (also called a "part of attention"). Such parts of attention include a lesion site, an inflamed site, a surgical scar or other cut site, a site with folds or projections, and a site on the wall surface of the lumen where the distal end portion 38 of the endoscope 3 easily comes into contact (easily gets stuck).
[Hardware configuration]

FIG. 2 shows the hardware configuration of the image processing device 1. The image processing device 1 mainly includes a processor 11, a memory 12, an interface 13, an input unit 14, a light source unit 15, a sound output unit 16, and a database (hereinafter referred to as "DB") 17. These elements are connected via a data bus 19.

The processor 11 executes predetermined processing by executing a program or the like stored in the memory 12. The processor 11 is a processor such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or a TPU (Tensor Processing Unit). The processor 11 may be composed of a plurality of processors. The processor 11 is an example of a computer.

The memory 12 is composed of various volatile memories used as working memory, such as RAM (Random Access Memory) and ROM (Read Only Memory), and non-volatile memory that stores information necessary for the processing of the image processing device 1. The memory 12 may include an external storage device such as a hard disk connected to or built into the image processing device 1, or may include a storage medium such as a removable flash memory or disk medium. The memory 12 stores a program for the image processing device 1 to execute each process in this embodiment.

Under the control of the processor 11, the memory 12 temporarily stores a series of endoscopic images Ic captured by the endoscope 3 during the endoscopy. The memory 12 also temporarily stores lesion images captured based on imaging instructions from the examiner during the endoscopy. These images are stored in the memory 12 in association with, for example, subject identification information (for example, a patient ID) and time stamp information.

The interface 13 performs interface operations between the image processing device 1 and external devices. For example, the interface 13 supplies the display data Id generated by the processor 11 to the display device 2. The interface 13 also supplies the illumination light generated by the light source unit 15 to the endoscope 3, and supplies the processor 11 with electrical signals representing the endoscopic images Ic supplied from the endoscope 3. The interface 13 may be a communication interface such as a network adapter for performing wired or wireless communication with an external device, or may be a hardware interface conforming to USB (Universal Serial Bus), SATA (Serial AT Attachment), or the like.

The input unit 14 generates input signals based on the examiner's operations. The input unit 14 is, for example, a button, a touch panel, a remote controller, or a voice input device. The light source unit 15 generates light to be supplied to the distal end portion 38 of the endoscope 3. The light source unit 15 may also incorporate a pump or the like for sending out water or air to be supplied to the endoscope 3. The sound output unit 16 outputs sound under the control of the processor 11.

The DB 17 stores endoscopic images and lesion information acquired in the subject's past endoscopic examinations. The lesion information includes lesion images and related information. The DB 17 may include an external storage device such as a hard disk connected to or built into the image processing device 1, or may include a storage medium such as a removable flash memory. Instead of providing the DB 17 within the endoscopy system 100, the DB 17 may be provided in an external server or the like, and the related information may be acquired from the server through communication.
[Functional configuration]

FIG. 3 is a block diagram showing the functional configuration of the image processing device 1. The image processing device 1 functionally includes a position detection unit 21, an inspection data generation unit 22, an AI determination unit 23, and a display data generation unit 24.

The endoscopic image Ic is input from the endoscope 3 to the image processing device 1 and is supplied to the position detection unit 21, the inspection data generation unit 22, and the AI determination unit 23. The position detection unit 21 detects the position of the endoscope 3, that is, the imaging position of the endoscopic image, based on the endoscopic image Ic. Specifically, the position detection unit 21 detects the imaging position by image analysis of the input endoscopic image Ic. Here, the imaging position may be three-dimensional coordinates within the organ to be examined, but it is sufficient if it is information indicating at least one of a plurality of parts of the organ to be examined. For example, as shown in FIG. 4, the large intestine is composed of multiple parts such as the cecum, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum. Therefore, when the examination target is the large intestine, the position detection unit 21 only needs to be able to detect at least which of these parts the imaging position belongs to.

Specifically, the position detection unit 21 can estimate which part of the large intestine the current imaging position belongs to based on features included in the endoscopic image, such as the pattern of the mucous membrane, the presence or absence of folds, the shape of the folds, and the number of folds passed as the endoscope 3 moves. The position detection unit 21 may also estimate the moving speed of the endoscope 3 based on the endoscopic image and calculate the moving distance within the large intestine from the moving speed and elapsed time to estimate the imaging position. Furthermore, the position detection unit 21 may detect the imaging position using the insertion length of the endoscope 3 inserted into the organ in addition to the image analysis of the endoscopic image. In this embodiment, the method of detecting the imaging position in the organ to be examined is not limited to a specific method. The position detection unit 21 outputs the detected imaging position to the inspection data generation unit 22.
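As an illustration only (not part of the patent disclosure), the following Python sketch shows one way the speed-and-time approach above could be realized: per-frame speed estimates are integrated into a cumulative travel distance, which is then mapped to a colon segment. The segment boundary lengths and the idea of taking speeds from optical flow are assumptions made for this sketch.

```python
from typing import List, Tuple

# Hypothetical cumulative segment boundaries in cm from the anus; illustrative values only.
SEGMENT_BOUNDARIES: List[Tuple[str, float]] = [
    ("rectum", 15.0),
    ("sigmoid colon", 55.0),
    ("descending colon", 80.0),
    ("transverse colon", 130.0),
    ("ascending colon", 150.0),
    ("cecum", 160.0),
]


def segment_from_distance(distance_cm: float) -> str:
    """Map a cumulative travel distance to the colon segment it falls in."""
    for name, boundary in SEGMENT_BOUNDARIES:
        if distance_cm <= boundary:
            return name
    return "cecum"


def estimate_position(speeds_cm_per_s: List[float], frame_interval_s: float) -> str:
    """Integrate per-frame speed estimates (e.g. from optical flow between
    consecutive endoscopic frames) into a distance, then look up the segment."""
    distance = sum(v * frame_interval_s for v in speeds_cm_per_s)
    return segment_from_distance(distance)


if __name__ == "__main__":
    # 30 seconds of frames at 1 frame/s with an average advance of 2 cm/s -> 60 cm.
    print(estimate_position([2.0] * 30, frame_interval_s=1.0))  # "descending colon"
```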
In addition, patient information of the patient who is the subject is input to the image processing device 1 through the input unit 14. The patient information is information that uniquely identifies a patient; besides the patient name or an ID uniquely assigned to each patient, a personal identification number such as My Number may be used. The patient information is input to the inspection data generation unit 22.

The inspection data generation unit 22 generates inspection data for the imaging position based on the endoscopic image Ic and the imaging position detected by the position detection unit 21, and temporarily stores it in the memory 12. Specifically, when the examiner operates the endoscope 3, the inspection data generation unit 22 generates inspection data including the endoscopic image Ic captured by the endoscope 3 and imaging information such as the patient name, patient ID, examination date and time, and imaging position, and stores it in the memory 12. The inspection data generation unit 22 outputs the generated inspection data to the AI determination unit 23 and the display data generation unit 24.
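A minimal sketch of how such inspection data might be represented in code, assuming field names that are not specified by the patent:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class LesionImage:
    """A still image cut out of the endoscopic video Ic."""
    pixels: bytes              # encoded frame data
    captured_at: datetime      # timestamp of the frame
    imaging_position: str      # e.g. "sigmoid colon", from the position detection unit 21
    taken_by_examiner: bool    # True: examiner's imaging instruction, False: captured by the AI


@dataclass
class InspectionData:
    """Record assembled by the inspection data generation unit 22."""
    patient_name: str
    patient_id: str
    examination_date: datetime
    imaging_position: str
    lesion_images: List[LesionImage] = field(default_factory=list)
```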
The AI determination unit 23 performs image analysis based on the endoscopic image Ic and determines the presence or absence of a lesion. The AI determination unit 23 uses an image recognition model or the like prepared in advance to detect lesion-like portions included in the endoscopic image. When the AI determination unit 23 detects a lesion-like portion, it outputs its position (the lesion position) and a lesion-likeness score as the determination result. On the other hand, when the AI determination unit 23 does not detect a lesion-like portion, it outputs a determination result indicating that there is no lesion.

The AI determination unit 23 also acquires, from the inspection data generated by the inspection data generation unit 22, the lesion images, which are still images generated based on the imaging instructions of the examiner. The AI determination unit 23 then groups images relating to the same lesion from among the plurality of lesion images. Specifically, the AI determination unit 23 performs pattern matching of the lesion sites on a plurality of lesion images captured within a predetermined period of time and determines whether the images relate to the same lesion. When the AI determination unit 23 determines that images relate to the same lesion, it groups those images. Note that the AI determination unit 23 may determine whether images relate to the same lesion by using the above pattern matching together with the lesion position information.
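The grouping step could be sketched as follows. This builds on the hypothetical LesionImage record above; the similarity function, the length of the time window, and the matching threshold are placeholders, since the patent only states that pattern matching of the lesion sites, optionally combined with the lesion position information, is used.

```python
from datetime import timedelta
from typing import Callable, List


def group_same_lesion(
    images: List["LesionImage"],
    similarity: Callable[["LesionImage", "LesionImage"], float],
    time_window: timedelta = timedelta(minutes=2),   # assumed "predetermined period of time"
    threshold: float = 0.8,                          # assumed pattern-matching threshold
) -> List[List["LesionImage"]]:
    """Group still images that appear to show the same lesion.

    An image joins an existing group when it was captured within the time
    window of the group's latest image, shares the same detected imaging
    position, and its lesion region matches (similarity >= threshold).
    """
    groups: List[List["LesionImage"]] = []
    for img in sorted(images, key=lambda i: i.captured_at):
        for group in groups:
            ref = group[-1]
            if (img.captured_at - ref.captured_at <= time_window
                    and img.imaging_position == ref.imaging_position
                    and similarity(ref, img) >= threshold):
                group.append(img)
                break
        else:
            groups.append([img])
    return groups
```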
Furthermore, the AI determination unit 23 selects a representative image from each grouped image group. Here, the representative image is the image that best represents the lesion within the grouped image group. Specifically, the AI determination unit 23 selects the representative image using criteria such as those exemplified below. In the first example, the AI determination unit 23 selects, as the representative image, the image in which the lesion appears largest among the grouped images. In the second example, the AI determination unit 23 selects the image in which the lesion is closest to the center. In the third example, the AI determination unit 23 selects the image that is most in focus. In the fourth example, the AI determination unit 23 selects the image with the highest lesion-likeness score. The AI determination unit 23 may also select the representative image based on a combination of these criteria.
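The four criteria (size, centering, focus, lesion-likeness score) can be combined into a single weighted score, which is one possible reading of "a combination of these criteria". The weights and the individual measures in the sketch below are assumptions; the patent does not fix them.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ScoredImage:
    """Per-image measurements assumed to be available for each grouped still image."""
    lesion_area_ratio: float   # lesion region area / frame area, in [0, 1]
    center_offset: float       # normalized distance of lesion center from frame center, in [0, 1]
    sharpness: float           # e.g. normalized variance of the Laplacian, in [0, 1]
    lesion_score: float        # lesion-likeness score from the recognition model, in [0, 1]


def representative_index(
    group: List[ScoredImage],
    weights: Tuple[float, float, float, float] = (0.25, 0.25, 0.25, 0.25),
) -> int:
    """Return the index of the image that best represents the lesion in the group."""
    w_area, w_center, w_focus, w_ai = weights

    def combined(img: ScoredImage) -> float:
        return (w_area * img.lesion_area_ratio
                + w_center * (1.0 - img.center_offset)   # more centered scores higher
                + w_focus * img.sharpness
                + w_ai * img.lesion_score)

    return max(range(len(group)), key=lambda i: combined(group[i]))
```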
The representative image is not limited to one. The examiner may photograph a lesion using different types of illumination light during the endoscopy. Therefore, for example, when the examiner switches the illumination light of the endoscope from white light to special light and photographs the same lesion, a representative image may be selected for each type of light.

The AI determination unit 23 outputs the representative image selected in this way to the display data generation unit 24. At this time, the AI determination unit 23 may perform a qualitative determination on the representative image and output the result of the qualitative determination to the display data generation unit 24.

The lesion images are not limited to still images generated based on the imaging instructions of the examiner, and may be still images that the AI determination unit 23 generates by itself by estimating the lesion site based on the endoscopic image Ic.

The display data generation unit 24 generates the display data Id using the inspection data input from the inspection data generation unit 22 and the representative image data input from the AI determination unit 23, and outputs the display data Id to the display device 2.
[Display example]

Next, display examples by the display device 2 will be described.

(Inspection report)

FIG. 5 shows a display example of an inspection report. In this example, a representative image of a patient's lesion and information related to the lesion (hereinafter referred to as "related information") are displayed.

Specifically, in the display example of FIG. 5, related information 41 and a lesion image 42 are displayed in the display area 40 in addition to basic information such as the patient name, patient ID, and examination date and time. The related information 41 is information about the lesion included in the lesion image 42. The related information 41 includes the lesion position, lesion size, macroscopic type (endoscopic findings), tissue type (pathological diagnosis result), treatment, and the like. The lesion image 42 is the representative image selected by the AI determination unit 23 from the plurality of images grouped as corresponding to the same lesion, that is, the image that best represents the lesion among the grouped images. The related information 41 and the lesion image 42 are displayed for each detected lesion. The lesion image displayed in this way may be an image captured by the examiner or an image automatically captured by the AI determination unit 23.
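One way to hand the contents of such a report row from the AI determination unit 23 to the display data generation unit 24 is a simple per-lesion record, sketched below with assumed field names that mirror the related information 41 listed above:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LesionReportEntry:
    """One row of the inspection report: related information 41 plus lesion image 42."""
    lesion_position: str             # e.g. "ascending colon"
    lesion_size_mm: Optional[float]
    macroscopic_type: Optional[str]  # endoscopic findings
    tissue_type: Optional[str]       # pathological diagnosis result
    treatment: Optional[str]
    representative_image: bytes      # encoded representative image selected by the AI determination unit 23
```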
As described above, in this embodiment, the image processing device 1 selects a representative image from among the plurality of images corresponding to the same lesion captured during the endoscopy and attaches it to the examination report. This saves the examiner the trouble of selecting an image and makes it possible to create the examination report efficiently.

FIG. 6 shows another display example of an inspection report. In this example, a plurality of still images determined to show the same lesion and a representative image selected from the plurality of still images are displayed in association with the imaging times of the endoscopic image Ic. The display example of FIG. 6 is used in addition to the inspection report shown in FIG. 5 as necessary. In this example, in the endoscopy of a certain patient, there are a plurality of images captured by the examiner between 14:10 and 14:20 and a plurality of images captured by the AI between 14:20 and 14:30. In this case, in the display area 50, the plurality of still images and the representative images selected from them are displayed on a timeline indicating the elapsed time of the examination.

Specifically, the image processing device 1 groups images corresponding to the same lesion among the images captured between 14:10 and 14:20, and displays the grouped image group 51 and the representative image 53 selected from the image group 51 on the timeline. Similarly, the image processing device 1 groups images corresponding to the same lesion among the images captured between 14:20 and 14:30, and displays the grouped image group 52 and the representative image 54 selected from the image group 52 on the timeline. By displaying the representative images in association with the imaging times and the groups of images corresponding to the same lesion in this way, the examiner can easily grasp, in chronological order, the images captured by the examiner and the images captured by the AI.
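The timeline view of FIG. 6 can be driven by pairing each same-lesion group with its representative image and its capture time span. The sketch below assumes the LesionImage record and a representative-selection callable like the ones shown earlier; it is an illustration, not the patent's own data model.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, List


@dataclass
class TimelineEntry:
    """A same-lesion group placed on the examination timeline of FIG. 6."""
    start: datetime
    end: datetime
    images: List[bytes]     # the grouped still images (e.g. image group 51 or 52)
    representative: bytes   # the selected representative image (e.g. image 53 or 54)


def to_timeline(
    groups: List[List["LesionImage"]],
    pick_representative: Callable[[List["LesionImage"]], int],
) -> List[TimelineEntry]:
    """Turn grouped lesion images into timeline entries ordered by capture time."""
    entries = []
    for group in groups:
        rep = group[pick_representative(group)]
        entries.append(TimelineEntry(
            start=min(i.captured_at for i in group),
            end=max(i.captured_at for i in group),
            images=[i.pixels for i in group],
            representative=rep.pixels,
        ))
    return sorted(entries, key=lambda e: e.start)
```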
 [Image display processing]
 Next, the display processing that produces the displays described above will be explained. FIG. 7 is a flowchart of the display processing performed by the image processing device 1. This processing is realized by the processor 11 shown in FIG. 2 executing a program prepared in advance and operating as each of the elements shown in FIG. 3.
 First, the examination data generation unit 22 acquires lesion images captured by the doctor (step S11). The AI determination unit 23 also captures lesion images on its own based on the endoscopic image Ic, and the examination data generation unit 22 acquires the images captured by the AI determination unit 23 (step S12). Step S12 may be executed before step S11 or simultaneously with step S11.
 Next, the AI determination unit 23 groups the images relating to the same lesion from among the plurality of lesion images captured within a predetermined period of time (step S13). The AI determination unit 23 then determines, from each grouped image group, the representative image that best represents the lesion (step S14). Further, the AI determination unit 23 performs a qualitative determination on the lesion in the representative image (step S15).
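 A rough sketch of steps S13 to S15, assuming that a pairwise same-lesion test, a scoring function, and a qualitative classifier are supplied from elsewhere (all three are hypothetical callables, not components defined in this disclosure):

```python
def group_same_lesion(images, is_same_lesion):
    """Step S13: group lesion images judged to show the same lesion (greedy grouping)."""
    groups = []
    for image in images:
        for group in groups:
            if is_same_lesion(group[-1], image):
                group.append(image)
                break
        else:                      # no existing group matched: start a new lesion group
            groups.append([image])
    return groups

def select_representative(group, score):
    """Step S14: pick the image that best represents the lesion, i.e. the highest-scoring one."""
    return max(group, key=score)

def qualitative_determination(representative, classifier):
    """Step S15: qualitative determination (e.g. a class label) on the representative image."""
    return classifier(representative)
```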
 Next, the display data generation unit 24 generates display data such as that exemplified in FIG. 5, using the examination data generated by the examination data generation unit 22, the representative image determined by the AI determination unit 23, and the qualitative determination result of the AI determination unit 23, and outputs the display data to the display device 2 (step S16). The display device 2 displays the received display data (step S17). A display such as the example of FIG. 5 is thus produced.
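 Step S14 leaves open how "best represents the lesion" is decided; the appendices below mention lesion size, centering, focus, and lesion-likeness as possible criteria. The following scoring functions are a non-limiting sketch of such criteria; the bounding-box format and the Laplacian-variance sharpness measure are assumptions made for illustration.

```python
import math

def size_score(box):
    """Prefer the still in which the lesion appears largest; box = (x, y, width, height)."""
    _, _, w, h = box
    return w * h

def centering_score(box, image_width, image_height):
    """Prefer the still in which the lesion is closest to the image center (higher is better)."""
    x, y, w, h = box
    dx = (x + w / 2.0) - image_width / 2.0
    dy = (y + h / 2.0) - image_height / 2.0
    return -math.hypot(dx, dy)

def focus_score(gray):
    """Prefer the sharpest still: variance of a simple Laplacian over a grayscale crop,
    given as a list of rows of pixel intensities."""
    laplacian = []
    for r in range(1, len(gray) - 1):
        for c in range(1, len(gray[0]) - 1):
            laplacian.append(gray[r - 1][c] + gray[r + 1][c] +
                             gray[r][c - 1] + gray[r][c + 1] - 4 * gray[r][c])
    if not laplacian:
        return 0.0
    mean = sum(laplacian) / len(laplacian)
    return sum((v - mean) ** 2 for v in laplacian) / len(laplacian)
```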
 <Second embodiment>
 FIG. 8 is a block diagram showing the functional configuration of the information processing device according to the second embodiment. The information processing device 70 includes endoscopic image acquisition means 71, still image acquisition means 72, identity determination means 73, representative image selection means 74, and a display device 75.
 FIG. 9 is a flowchart of the processing performed by the information processing device of the second embodiment. The endoscopic image acquisition means 71 acquires an endoscopic image (step S71). Next, the still image acquisition means 72 acquires a plurality of still images in which a lesion included in the endoscopic image is captured (step S72). The identity determination means 73 determines the identity of the lesions included in the plurality of still images (step S73). The representative image selection means 74 selects, from the plurality of still images determined to correspond to the same lesion, the representative image that best represents the lesion, and displays it on the display device 75 (step S74).
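 A compact sketch of the flow of FIG. 9 (steps S71 to S74), with the acquisition, identity-determination, selection, and display operations injected as callables; this interface is an assumption made for illustration only.

```python
def run_pipeline(acquire_endoscopic_image, acquire_stills, same_lesion, select_best, display):
    endoscopic_image = acquire_endoscopic_image()        # step S71
    stills = acquire_stills(endoscopic_image)            # step S72

    groups = []                                          # step S73: identity determination
    for still in stills:
        for group in groups:
            if same_lesion(group[0], still):
                group.append(still)
                break
        else:
            groups.append([still])

    representatives = [select_best(group) for group in groups]   # step S74
    for representative in representatives:
        display(representative)                          # shown on the display device 75
    return representatives
```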
 According to the information processing device 70 of the second embodiment, the examiner is spared the trouble of selecting images to attach to the examination report after the endoscopy, and the examination report can be created efficiently.
 Some or all of the above embodiments may also be described as in the following appendices, but are not limited thereto.
 (Appendix 1)
 An information processing device comprising:
 endoscopic image acquisition means for acquiring an endoscopic image;
 still image acquisition means for acquiring a plurality of still images in which a lesion included in the endoscopic image is captured;
 identity determination means for determining the identity of lesions included in the plurality of still images; and
 selection means for selecting, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.
 (Appendix 2)
 The information processing device according to Appendix 1, wherein the representative image is the still image, among the plurality of still images determined to correspond to the same lesion, in which the lesion appears largest.
 (Appendix 3)
 The information processing device according to Appendix 1, wherein the representative image is the still image, among the plurality of still images determined to correspond to the same lesion, in which the lesion appears closest to the center.
 (Appendix 4)
 The information processing device according to Appendix 1, wherein the representative image is the still image, among the plurality of still images determined to correspond to the same lesion, that is most in focus.
 (Appendix 5)
 The information processing device according to Appendix 1, further comprising determination means for determining lesion-likeness for the plurality of still images, wherein the representative image is the still image determined by the determination means to be most lesion-like.
 (Appendix 6)
 The information processing device according to any one of Appendices 1 to 5, further comprising display means for displaying an image in which the plurality of still images determined to correspond to the same lesion and the representative image selected from the plurality of still images are shown in association with their capture times in the endoscopic image.
 (Appendix 7)
 The information processing device according to any one of Appendices 1 to 6, wherein the plurality of still images include still images captured based on an imaging instruction from the examiner.
 (Appendix 8)
 The information processing device according to any one of Appendices 1 to 6, wherein the plurality of still images include still images automatically captured by AI imaging means that detects and captures a lesion in the endoscopic image.
 (Appendix 9)
 An information processing method comprising:
 acquiring an endoscopic image;
 acquiring a plurality of still images in which a lesion included in the endoscopic image is captured;
 determining the identity of lesions included in the plurality of still images; and
 selecting, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.
 (Appendix 10)
 A recording medium on which is recorded a program that causes a computer to execute processing of:
 acquiring an endoscopic image;
 acquiring a plurality of still images in which a lesion included in the endoscopic image is captured;
 determining the identity of lesions included in the plurality of still images; and
 selecting, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.
 Although the present disclosure has been described above with reference to the embodiments and examples, the present disclosure is not limited to them. Various changes that can be understood by those skilled in the art may be made to the configuration and details of the present disclosure within its scope.
Reference Signs List
1 image processing device
2 display device
3 endoscope
11 processor
12 memory
17 database (DB)
21 position detection unit
22 examination data generation unit
23 AI determination unit
24 display data generation unit
100 endoscopy system

Claims (10)

  1.  An information processing device comprising:
      endoscopic image acquisition means for acquiring an endoscopic image;
      still image acquisition means for acquiring a plurality of still images in which a lesion included in the endoscopic image is captured;
      identity determination means for determining the identity of lesions included in the plurality of still images; and
      selection means for selecting, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.
  2.  The information processing device according to claim 1, wherein the representative image is the still image, among the plurality of still images determined to correspond to the same lesion, in which the lesion appears largest.
  3.  The information processing device according to claim 1, wherein the representative image is the still image, among the plurality of still images determined to correspond to the same lesion, in which the lesion appears closest to the center.
  4.  The information processing device according to claim 1, wherein the representative image is the still image, among the plurality of still images determined to correspond to the same lesion, that is most in focus.
  5.  The information processing device according to claim 1, further comprising determination means for determining lesion-likeness for the plurality of still images, wherein the representative image is the still image determined by the determination means to be most lesion-like.
  6.  The information processing device according to any one of claims 1 to 5, further comprising display means for displaying an image in which the plurality of still images determined to correspond to the same lesion and the representative image selected from the plurality of still images are shown in association with their capture times in the endoscopic image.
  7.  The information processing device according to any one of claims 1 to 6, wherein the plurality of still images include still images captured based on an imaging instruction from the examiner.
  8.  The information processing device according to any one of claims 1 to 6, wherein the plurality of still images include still images automatically captured by AI imaging means that detects and captures a lesion in the endoscopic image.
  9.  An information processing method comprising:
      acquiring an endoscopic image;
      acquiring a plurality of still images in which a lesion included in the endoscopic image is captured;
      determining the identity of lesions included in the plurality of still images; and
      selecting, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.
  10.  A recording medium on which is recorded a program that causes a computer to execute processing of:
      acquiring an endoscopic image;
      acquiring a plurality of still images in which a lesion included in the endoscopic image is captured;
      determining the identity of lesions included in the plurality of still images; and
      selecting, from a plurality of still images determined to correspond to the same lesion, a representative image that best represents the lesion.
PCT/JP2021/042366 2021-11-18 2021-11-18 Information processing device, information processing method, and recording medium WO2023089717A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/042366 WO2023089717A1 (en) 2021-11-18 2021-11-18 Information processing device, information processing method, and recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/042366 WO2023089717A1 (en) 2021-11-18 2021-11-18 Information processing device, information processing method, and recording medium

Publications (1)

Publication Number Publication Date
WO2023089717A1 true WO2023089717A1 (en) 2023-05-25

Family

ID=86396457

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/042366 WO2023089717A1 (en) 2021-11-18 2021-11-18 Information processing device, information processing method, and recording medium

Country Status (1)

Country Link
WO (1) WO2023089717A1 (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010172673A (en) * 2009-02-02 2010-08-12 Fujifilm Corp Endoscope system, processor for endoscope, and endoscopy aiding method
JP2011024727A (en) 2009-07-23 2011-02-10 Olympus Corp Image processing device, program and method
JP2015173827A (en) * 2014-03-14 2015-10-05 オリンパス株式会社 image processing apparatus, image processing method, and image processing program
JP2015181594A (en) * 2014-03-20 2015-10-22 オリンパス株式会社 Image processing device, image processing method, and image processing program
WO2019220848A1 (en) * 2018-05-17 2019-11-21 富士フイルム株式会社 Endoscope device, endoscope operation method, and program
WO2021199152A1 (en) * 2020-03-30 2021-10-07 日本電気株式会社 Information processing device, display method, and non-transitory computer-readable medium having program stored therein

Similar Documents

Publication Publication Date Title
US11690494B2 (en) Endoscope observation assistance apparatus and endoscope observation assistance method
JP5291955B2 (en) Endoscopy system
US9204781B2 (en) Image processing apparatus and image processing method
US20180263568A1 (en) Systems and Methods for Clinical Image Classification
US20090051695A1 (en) Image processing apparatus, computer program product, and image processing method
JP2014527837A (en) Systematically alphanumeric coded endoscopy and endoscope positioning system
WO2018211674A1 (en) Image processing device, image processing method, and program
JP2009039449A (en) Image processor
WO2021075418A1 (en) Image processing method, teacher data generation method, trained model generation method, illness development prediction method, image processing device, image processing program, and recording medium on which program is recorded
JP2009022446A (en) System and method for combined display in medicine
JPWO2020165978A1 (en) Image recorder, image recording method and image recording program
US20230360221A1 (en) Medical image processing apparatus, medical image processing method, and medical image processing program
WO2023126999A1 (en) Image processing device, image processing method, and storage medium
WO2023089717A1 (en) Information processing device, information processing method, and recording medium
JP2021065606A (en) Image processing method, teacher data generation method, learned model generation method, disease onset prediction method, image processing device, image processing program, and recording medium that records the program
WO2023089718A1 (en) Information processing device, information processing method, and recording medium
WO2023089715A1 (en) Image display device, image display method, and recording medium
WO2023089716A1 (en) Information display device, information display method, and recording medium
WO2023089719A1 (en) Video editing device, video editing method, and recording medium
WO2024042895A1 (en) Image processing device, endoscope, image processing method, and program
WO2024121886A1 (en) Endoscopic examination assistance device, endoscopic examination assistance method, and recording medium
WO2024121885A1 (en) Information processing device, information processing method, and recording medium
JP7264407B2 (en) Colonoscopy observation support device for training, operation method, and program
WO2023275974A1 (en) Image processing device, image processing method, and storage medium
JP7448923B2 (en) Information processing device, operating method of information processing device, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21964732

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023562002

Country of ref document: JP

Kind code of ref document: A