WO2023095208A1 - Endoscope insertion guide device, endoscope insertion guide method, endoscope information acquisition method, guide server device, and image inference model learning method - Google Patents

Endoscope insertion guide device, endoscope insertion guide method, endoscope information acquisition method, guide server device, and image inference model learning method

Info

Publication number
WO2023095208A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
endoscope
information
inference
inference model
Prior art date
Application number
PCT/JP2021/043003
Other languages
English (en)
Japanese (ja)
Inventor
浩一 新谷
憲 谷
学 市川
智子 後町
修 野中
暁 吉田
Original Assignee
オリンパス株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by オリンパス株式会社
Priority to PCT/JP2021/043003
Publication of WO2023095208A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection combined with photographic or television appliances
    • A61B 1/045: Control thereof

Definitions

  • The present invention relates to an endoscope insertion guide device that allows a medical device such as an endoscope to be inserted into the body to observe a site such as an affected or lesioned area and then to easily search for that site again, and to an endoscope insertion guide method, an endoscope information acquisition method, a guide server device, and an image inference model learning method.
  • In a known surgical system, a magnetic tracking device and/or an optical position sensor locates the flexible endoscope, the position of the endoscope is transmitted to the main workstation, and the position of the endoscope is confirmed on the image and acted upon.
  • However, the system of Patent Document 1 must be equipped with a magnetic tracking device or the like in order to detect the position of the endoscope, and is a large-scale apparatus for specifying the site.
  • There are also cases where one wants to observe again, at the time of reexamination, a site of interest such as a previously observed position or a discovered lesion, to specify the site from outside the lumen, or to identify the site of interest in a medical chart or the like.
  • The present invention has been made in view of such circumstances, and an object thereof is to provide an endoscope insertion guide device, an endoscope insertion guide method, an endoscope information acquisition method, a guide server device, and an image inference model learning method capable of easily re-detecting a position observed by a medical device such as an endoscope.
  • An endoscope insertion guide apparatus according to a first aspect of the invention comprises: a similar image inference unit having a similar image inference model trained, based on an affected area image obtained by an examination of a specific subject with a first endoscope, with the same or similar affected area images obtained by a plurality of different observation methods; and a re-insertion auxiliary information generating unit that, in order to re-observe the position of the affected area during reexamination of the specific subject with a second endoscope, generates insertion auxiliary information for guiding to the reconfirmation position according to the result obtained by inputting the image at the time of reexamination to the similar image inference unit.
  • An endoscope insertion guide apparatus according to a second aspect is the endoscope insertion guide apparatus according to the first aspect, wherein the similar image inference model makes an inference by canceling changes in the affected area image due to deformation of the observation target organ.
  • An endoscope insertion guide device according to a third aspect is the endoscope insertion guide device according to the first aspect, wherein the similar image inference model is an inference model learned from the correspondence between images of the observation target organ deformed by at least one of an air supply operation, a water supply operation, a suction operation, an operation speed change, and a posture change during insertion and withdrawal of the endoscope.
  • An endoscope insertion guide apparatus according to a fourth aspect is the endoscope insertion guide apparatus according to the first aspect, wherein the similar image inference model infers changes in continuous images of the observation target organ obtained during examination by the first endoscope.
  • An endoscope insertion guide apparatus according to a fifth aspect is the endoscope insertion guide apparatus according to any one of the second to fourth aspects, wherein the observation target organ is the large intestine or the small intestine.
  • An endoscope information acquisition method according to a sixth aspect comprises: an acquisition step of performing, at a first timing, an examination including a position change operation of an endoscope sensor unit up to a specific target and acquiring at least part of the continuous images during the examination; and a guide information determining step of determining, based on at least part of the continuous images acquired at the first timing, guide information for generating a guide for the position change operation of the endoscope to be used in an examination performed at a second timing after the first timing.
  • An endoscope information acquisition method according to a seventh aspect is the method according to the sixth aspect, wherein the guide information is information that enables the specific target to be identified so that, when the endoscope sensor unit is inserted at the second timing, the position change operation is performed to a position where an image corresponding to the specific target can be acquired.
  • An endoscope information acquisition method according to an eighth aspect is the method according to the seventh aspect, wherein the information enabling identification of the specific target is information for similar image determination inferred using AI.
  • An endoscope information acquisition method according to a ninth aspect is the method according to the seventh aspect, wherein the information enabling identification of the specific target is information for determining the similarity of changes in images inferred using AI.
  • An endoscope information acquisition method according to a tenth aspect is the method according to the sixth aspect, wherein the guide information is determined based on endoscope position change information, and the endoscope position change information is information obtained by cross-referencing the endoscopic image information in the insertion direction and the withdrawal direction obtained at the first timing, or by cross-referencing the endoscopic image changes and the endoscope operation information obtained at the second timing.
  • An endoscope information acquisition method according to an eleventh aspect is the method according to the sixth aspect, further comprising recording citation information based on at least part of the continuous images acquired at the first timing, wherein the citation information contains confirmation results regarding informed consent.
  • An endoscope information acquisition method according to a twelfth aspect is the method according to the sixth aspect, wherein the acquisition step further acquires operation information related to operation of the endoscope during the endoscopy of the specific subject.
  • An endoscope information acquisition method according to a thirteenth aspect is the method according to the twelfth aspect, wherein the acquisition step acquires the operation information in each of the first observation method and the second observation method.
  • An endoscope information acquisition method according to a fourteenth aspect is the method according to the thirteenth aspect, further comprising an auxiliary information acquisition step of using at least part of the continuous images and/or the operation information to output auxiliary information for finding, when performing one of the first observation method and the second observation method, the position of the affected area found when performing the other observation method.
  • An endoscope information acquisition method according to a fifteenth aspect is the method according to the fourteenth aspect, wherein the auxiliary information acquisition step generates, using the affected area images and/or the operation information acquired by the first observation method and the second observation method during the endoscopy, an inference model for outputting by inference the auxiliary information for finding the affected area position.
  • An endoscope insertion guide apparatus according to a sixteenth aspect comprises: a first image acquisition unit that acquires an endoscopic image of a specific subject in an examination with a first endoscope; a similar image estimation unit having a similarity estimation model for estimating the similarity of images based on the endoscopic image; and a similar image determination unit that inputs the endoscopic image obtained in an examination with a second endoscope to the similar image estimation unit to estimate the similarity of the images and determines whether or not the similarity is higher than a predetermined level, the apparatus guiding to the position of an image similar to the lesion in the examination with the second endoscope.
  • An endoscope insertion guide apparatus according to a seventeenth aspect is the endoscope insertion guide apparatus according to the sixteenth aspect, wherein the similar image inference model makes an inference by canceling changes in the affected area image due to deformation of the observation target organ.
  • An endoscope insertion guide device according to an eighteenth aspect is the endoscope insertion guide device according to the sixteenth aspect, wherein the similar image inference model is an inference model learned from the correspondence between images of the observation target organ deformed by at least one of an air supply operation, a water supply operation, a suction operation, an operation speed change, and a posture change during insertion and withdrawal of the endoscope.
  • An endoscope insertion guide method according to a nineteenth aspect comprises: a step of recording, based on the frames of an examination result moving image obtained in an endoscopy of a specific subject, an affected area position frame in which an affected area requiring treatment is photographed, as a position within the frames of the moving image; and a re-insertion auxiliary information generating step of generating insertion auxiliary information for guiding the insertion position for re-observing the position of the affected area during reexamination of the specific subject, according to the position of the affected area frame within the frames of the moving image.
  • An endoscope insertion guide method according to a twentieth aspect is the method according to the nineteenth aspect, wherein the re-insertion auxiliary information generating step further includes a similar image inference step of inferring a similar image.
  • An endoscope insertion guide method according to a twenty-first aspect is the method according to the twentieth aspect, wherein the similar image inference step infers a change in the affected area image due to deformation of the observation target organ.
  • An endoscope insertion guide method according to a twenty-second aspect is the method according to the twentieth aspect, wherein the similar image inference step uses an inference model learned from the correspondence between images of the observation target organ deformed by at least one of an air supply operation, a water supply operation, a suction operation, an operation speed change, and a posture change during insertion and withdrawal of the endoscope.
  • A guide server according to a twenty-third aspect comprises: an affected area image acquisition unit that acquires an affected area image obtained by an endoscopy of a specific subject; a similar image inference unit having an image conversion inference model trained with the same or similar affected area images obtained by a plurality of different observation methods; and a transmission unit that, in order to re-observe the position of the affected area during reexamination of the specific subject with a second endoscope, transmits target image information converted for the second endoscope by the inference model to the second endoscope.
  • An image inference model learning method according to a twenty-fourth aspect comprises: an image acquisition step of acquiring images of a specific target object photographed by a plurality of observation methods; a step of annotating features of the image of the specific target object in each of the images obtained by the plurality of observation methods; and a step of learning so as to input an image corresponding to one of the annotation regions and output an image corresponding to the other annotation region.
  • An inference model creation method according to a twenty-fifth aspect creates teacher data in which an image of a specific object photographed by a second observation method different from a first observation method is associated with the image of the specific object photographed by the first observation method, and performs learning using this teacher data, thereby creating an inference model that makes it possible, when the image of the object photographed by the first observation method is input, to infer the object image photographed by the second observation method.
  • An endoscope insertion guide method according to a twenty-sixth aspect comprises: a step of inferring a target image by inputting a specific object image obtained by photographing the specific object to a similar image inference model created by the inference model creation method according to the twenty-fifth aspect; and a re-insertion auxiliary information generating step of generating insertion auxiliary information for guiding to a position for reconfirmation according to the target image, in order to re-observe that position.
  • An endoscope insertion guide method according to a twenty-seventh aspect comprises: a step of using, as a target image, a specific object image obtained by photographing the specific object in an examination of a specific subject with a first endoscope; and a step of, in order to re-observe the position of the affected area when reexamining the specific subject with a second endoscope, inputting the image at the time of reexamination to the similar image inference model created by the inference model creation method according to the twenty-fifth aspect and generating insertion auxiliary information from the inference result.
  • An inference model creation method according to a twenty-eighth aspect is the inference model creation method according to the twenty-fifth aspect, wherein the first observation method and the second observation method are an observation method by photographing when the imaging unit at the distal end of the endoscope is moved in the direction of insertion into the subject, and an observation method by photographing when the imaging unit at the distal end of the endoscope is moved in the withdrawal direction.
  • An inference model creation method according to a twenty-ninth aspect is the inference model creation method according to the twenty-fifth aspect, wherein the first observation method and the second observation method are observation methods by photographing under different control of bending, insertion, or withdrawal of the endoscope when the imaging unit at the distal end of the endoscope is moved in the direction of insertion into or withdrawal from the object.
  • A guide method using an inference model according to a thirtieth aspect comprises: a step of inputting each frame of a group of time-series image frames, which captures the temporal change before and after an image of an object photographed by the first observation method, to an inference model to obtain the time-series change of the image frame group as photographed by the second observation method; and a step of outputting an endoscope navigation guide until an image of the object is captured, such that the image change during acquisition conforms to the image change that may occur.
  • According to the present invention, it is possible to provide an endoscope insertion guide device, an endoscope insertion guide method, an endoscope information acquisition method, a guide server device, and an image inference model learning method that can easily detect again a position observed by a medical device such as an endoscope.
  • FIG. 1A is a block diagram mainly showing the electrical configuration of an endoscope system according to a first embodiment of the present invention;
  • FIG. 1B is a block diagram mainly showing the electrical configuration of the endoscope system according to the first embodiment of the present invention;
  • FIG. 2 shows states in which the endoscope of the endoscope system according to the first embodiment of the present invention is inserted into the body, where (a) shows observation while the endoscope is being pulled out and (b) shows observation while the endoscope, once taken out, is being reinserted into the body;
  • FIG. 3 is a diagram illustrating generation of an inference model and how inference is carried out using the inference model in the endoscope system according to the first embodiment of the present invention;
  • FIG. 4 is a flow chart showing the operation of the device (endoscope) in the endoscope system according to the first embodiment of the present invention;
  • FIG. 5 is a flow chart showing the operation of the auxiliary device in the endoscope system according to the first embodiment of the present invention;
  • FIGS. 6A and 6B are flow charts showing inference model creation and inference operations in the endoscope system according to the first embodiment of the present invention;
  • FIGS. 7A and 7B are block diagrams mainly showing the electrical configuration of an endoscope system according to a second embodiment of the present invention;
  • FIG. 8 is a flow chart showing the operation of the device (endoscope) in the endoscope system according to the second embodiment of the present invention;
  • FIG. 9 is a flow chart showing the operation of the robot endoscope in the endoscope system according to the second embodiment of the present invention;
  • FIG. 10 shows states in which the endoscope system according to the second embodiment of the present invention is inserted into the body, where (a) shows observation while the endoscope is being inserted and (b) shows observation while the robot scope is being inserted;
  • FIG. 11 is a diagram showing image changes and imaging timings during insertion of the endoscope in the endoscope system according to the second embodiment of the present invention.
  • A medical device such as an endoscope can be inserted into the body to observe the inside of the body and to perform treatment.
  • When an endoscope is inserted, the inside of the body is often observed while the endoscope is first inserted toward the back of the body and then pulled out.
  • Given technical conditions such as illuminating the inside of body cavities and ducts, obtaining sufficient reflected light, and being able to acquire images with a relatively wide-angle camera, the technique is not limited to the gastrointestinal tract, nor even to the inside of the body, and can also be applied to industrial endoscopes.
  • The present embodiment can be applied in various scenes as long as a target site is searched for by insertion.
  • When the endoscope is inserted, its tip pushes against the walls of the digestive tract inside the body, so the tip of the endoscope and the inside of the digestive tract may not fit together: a sufficient distance between the tip and the walls (the organ to be observed) cannot be secured, and the organ deforms depending on how the endoscope and the organ are in contact and on the pressure applied by the operator. As a result, the affected area may be hidden because of the distance between the endoscope and the organ or the deformation of the organ to be observed.
  • When the endoscope is withdrawn, on the other hand, the walls of the gastrointestinal tract are stretched by the endoscope and become easier to see. The affected area can be observed by repeating such insertion and withdrawal, but there is a need to check only the necessary affected area without performing such operations.
  • FIG. 2(a) shows a state in which a doctor inserts the endoscope 10 into the large intestine Co from the anus, moves the distal end all the way to the back, injects air into the large intestine Co, and performs an examination while pulling out the endoscope.
  • Before the endoscope 10 is pulled out, it is inserted into the large intestine Co and the tip portion is advanced to the back; at this time, the wall surface on the anus side of the tip surface of the endoscope 10 is stretched, while the wall surface on the back side (the side opposite the anus) of the tip surface is wrinkled.
  • A site (affected area) Re such as a tumor or polyp in the large intestine Co lies at a position where the wall surface is stretched during withdrawal, and is therefore easy to observe with the endoscope 10. That is, when the endoscope 10 is first inserted and moved inward, the site Re is hidden by the wrinkles of the wall surface and is not easy to see: when the endoscope 10 is inserted into the large intestine Co and moved all the way to the back, the site Re is difficult to see, but when the endoscope is pulled out from the back, the site becomes easy to find, as shown in FIG. 2(a).
  • The endoscope itself, however, cannot directly recognize what operation has been performed on it.
  • Since the image changes as the endoscope advances and retreats, it is possible to know what kind of operation was performed on the endoscope itself by determining the change in the acquired images. For example, if the acquired images are arranged in chronological order, or if the change of the object between images acquired at adjacent timings is observed, then when the tip is bent to the right, the object in the middle of the image moves to the left; similarly, when the tip is bent left, up, or down, the image changes in the direction opposite to the facing direction. Further, when the endoscope is advanced after being inserted into the lumen, the image changes such that the target detected in the central portion flows toward the four sides of the imaging device; when the endoscope is pulled out, the image changes in the opposite direction.
  • The speed of change in the obtained images is also important guide information.
  • If the image changes slowly, the doctor is operating carefully; if the image changes quickly, the doctor is trying to operate quickly. It is meaningful to refer to this when guiding an operation. Effective use of such determination of changes in continuous time-series images will be described in detail with reference to FIG. 11. In FIG. 11, it can be seen that the image of the hollow portion of the lumen in the central portion gradually becomes larger and flows around the screen as the endoscope advances along the lumen. Also, when the luminal cavity drifts more and more to the right, it can be determined that the doctor has bent the tip of the endoscope to the left. A minimal sketch of such image-change determination is shown below.
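  • As a minimal illustrative sketch (an assumption, not the publication's implementation), the image-change determination described above can be realized with dense optical flow between consecutive frames: a positive mean radial flow indicates advancing along the lumen, a negative one withdrawing, and lateral drift indicates bending; the flow magnitude corresponds to the operation speed. The threshold values here are arbitrary examples.

```python
# Sketch: estimate the endoscope operation from two consecutive frames.
# Assumes grayscale uint8 images of equal size; thresholds are illustrative.
import cv2
import numpy as np

def estimate_operation(prev_frame, curr_frame, thresh=0.5):
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, curr_frame, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    rx, ry = xs - w / 2.0, ys - h / 2.0
    norm = np.sqrt(rx ** 2 + ry ** 2) + 1e-6
    # Radial component: positive when content flows outward from the center
    # (advancing along the lumen), negative when it converges (withdrawing).
    radial = (flow[..., 0] * rx + flow[..., 1] * ry) / norm
    mean_radial = float(radial.mean())
    mean_dx = float(flow[..., 0].mean())  # content drifting right => tip bent left
    if mean_radial > thresh:
        return "advance", mean_radial
    if mean_radial < -thresh:
        return "withdraw", mean_radial
    if abs(mean_dx) > thresh:
        return ("bend_left" if mean_dx > 0 else "bend_right"), mean_dx
    return "hold", mean_radial
```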
  • Such a guide is effective as a guide to be communicated to the doctor at the next examination (or for passing on techniques and eliminating disparities in medical care), and is also effective when communicating operations to a robot.
  • Such guide information is also a technology that leads to labor saving and automation.
  • A system may be used in which the doctor in charge and the endoscope operator are informed in advance that the information will be handed over, and in which additional items can be added.
  • The intention may be communicated to the patient and informed consent (IC) may be obtained.
  • The patient's wishes may be confirmed as to whether or not this IC may be taken over, or whether or not to take it over may be decided at the next examination.
  • A mobile terminal such as a smartphone or an information terminal may be used for confirmation and reply, and the utilization of the data need only be changed based on the reply given through the smartphone or the like.
  • FIG. 2(b) shows how, after treatment, surgery, or the like has been performed on a tumor, polyp, or the like at the site Re, the endoscope 10 is reinserted into the large intestine Co in order to observe the progress after the treatment or surgery.
  • In a normal examination method, similarly to the case of FIG. 2(a), the endoscope is inserted all the way to the back and the inside is observed while the endoscope is pulled out.
  • However, a high degree of skill is required to insert the endoscope 10 all the way to the back, and after the endoscope 10 has been fully inserted, it takes time to observe while pulling it out.
  • To perform the above-described normal examination method at the same level as the first examination, it is desirable that a skilled doctor operate the endoscope.
  • Otherwise, the doctor in charge will have to spend considerable time and effort.
  • In view of this, in the present embodiment, the site (affected area) Re can be found when the endoscope 10 is inserted, not when it is withdrawn, during the second and subsequent examinations. That is, at the time of the first examination, the images taken from the start of insertion of the endoscope 10 into the digestive tract such as the large intestine until the end of withdrawal are recorded, and an image at the time of insertion corresponding to the image of the site (affected area) Re discovered at the time of withdrawal is stored as target information.
  • At the time of reexamination, the operation is performed so that the site (affected area) Re can be found without inserting the endoscope 10 all the way.
  • The operator is guided toward the site (affected area) Re: when the endoscope is approaching the vicinity, this fact is reported, and when it has reached the vicinity, this fact is reported as well.
  • It can be said that the vicinity of the position where the matching examination image was taken is the position of the site (affected area) Re.
  • An inference model may be used to infer the position of the site (affected area) Re.
  • In this case, an inference model is generated by creating teacher data from the images and the like obtained at the time of insertion and withdrawal.
  • This inference model may then be used to perform AI guidance to the site (affected area) Re.
  • When the endoscope 10 is pushed inward, even if the display unit of the endoscope 10 indicates that the tip is in the vicinity of the site (affected area) Re, the site may be difficult to see due to folds or the like in the digestive tract. In that case, the site (affected area) Re may be searched for while pulling the endoscope out a little. The case of reexamination has been explained here, but besides reexamination, the site (affected area) Re may be found by a similar method in order to confirm in advance a site to be operated on, for example when the endoscope is inserted for surgery after a first examination. A sketch of how the target information could be selected from the first examination is shown below.
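  • As a minimal illustrative sketch (an assumption, not the publication's implementation), the target information can be obtained by pairing the frame in which the site Re was found on withdrawal with the most similar insertion-phase frame. Here `embed` is a hypothetical feature extractor (for example, a CNN embedding).

```python
# Sketch: select the insertion-time frame corresponding to the lesion frame
# found on withdrawal, and store it as target information.
import numpy as np

def select_target_frame(insertion_frames, lesion_frame_on_withdrawal, embed):
    target_vec = embed(lesion_frame_on_withdrawal)
    scores = []
    for frame in insertion_frames:
        v = embed(frame)
        # cosine similarity between feature vectors
        scores.append(float(np.dot(v, target_vec) /
                            (np.linalg.norm(v) * np.linalg.norm(target_vec) + 1e-9)))
    best = int(np.argmax(scores))
    return insertion_frames[best], scores[best]
```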
  • FIGS. 1A and 1B are block diagrams showing the configuration of the endoscope system according to the first embodiment.
  • This endoscope system comprises an endoscope 10A, an in-hospital system, an auxiliary device 30 provided in a server or the like, and a second endoscope 10B.
  • The endoscope 10A and the second endoscope 10B are, for example, endoscopes inserted from the anus through the rectum to examine a long region extending to the small intestine; a case where a colonoscopy is performed on a subject (patient) will be described as an example.
  • The endoscope 10A is described as the endoscope used for the first examination of the subject (patient), and the second endoscope 10B as the endoscope used for the second and subsequent examinations of the subject (patient).
  • The endoscope 10A and the second endoscope 10B may be endoscopes of the same model, but are described here as different models.
  • The endoscope 10A and the second endoscope 10B may even be the same device.
  • Because of changes in the situation, such as the physical and health condition of the patient's affected area, a change of doctor, changes in physical and mental constraints and leeway such as fatigue and habituation, or changes in assistants, peripheral devices, and environment, the examinations may not be exactly the same. Therefore, when a plurality of examinations are performed at different timings (in many cases on different dates, although reexamination on the same day may also be assumed), the information may be inherited between them.
  • The endoscope 10A is used by a doctor to observe the inside of the large intestine and to perform treatment, surgery, and the like.
  • This endoscope 10A has a control section 11A, an imaging section 12A, a light source section 13A, a display section 14A, an ID management section 15A, a recording section 16A, an operation section 17A, and an inference engine 19A.
  • The control unit 11A is composed of one or more processors having a processing device such as a CPU (Central Processing Unit) and a memory storing a program (the program may be stored in the recording unit 16A); it executes the program and controls each part in the endoscope 10A.
  • The control unit 11A performs various controls when the endoscope 10A performs a colon examination of a subject (patient), and controls the transfer of the image data P1 acquired during the examination to the auxiliary device 30 provided in an in-hospital system, server, or the like.
  • The imaging section 12A is provided at the distal end of the insertion section of the endoscope 10A, and has an optical lens, an imaging element, an imaging circuit, and the like.
  • The imaging unit 12A is composed of a small imaging element and an imaging optical system that forms an image of the object on it, and is assumed to have predetermined specifications such as a focus position and a focal length. The imaging unit 12A may also be provided with autofocus, and can determine the object distance, the size of the object, and the like.
  • The imaging field angle (see FIG. 2(a)) of the imaging unit 12A is about 140 to 170 degrees, so that a wide range can be photographed.
  • The imaging unit 12A acquires image data at predetermined time intervals determined by the frame rate, and records the data in the recording unit 16A. When the release button in the operation section 17A is operated, the imaging section 12A acquires still image data, which is recorded in the recording unit 16A.
  • The image P1 is image data acquired by the imaging unit 12A, and is transmitted to the input unit 32 of the auxiliary device 30.
  • The image P1 is time-series data: image P11 is an image acquired immediately after the tip of the endoscope 10A is inserted into the anus, and image P17 is an image acquired immediately before the endoscope 10A is withdrawn from the anus.
  • The image P11 and the image P17 are therefore images taken at the same position.
  • Image P13 is an image taken at the position of the site (affected area) Re (see FIG. 2(a)) when the endoscope 10A is inserted deep into the large intestine, and image P16 is an image of the site (affected area) Re taken while the endoscope is being pulled out from the back of the body.
  • The imaging unit 12A of the endoscope 10A functions as a first image acquisition unit that acquires an endoscopic image of a specific subject through examination with the first endoscope.
  • The imaging unit 12A also functions as an affected area image acquisition unit that acquires an affected area image of an affected area obtained by endoscopy of a specific subject.
  • The light source section 13A has a light source, a light source control section, and the like.
  • The light source unit 13A illuminates the object with appropriate brightness, and the distance to the object can be estimated by adjusting the brightness.
  • The light source is arranged at the distal end of the endoscope 10A to illuminate the inside of the body, such as an affected area, and the light source control section controls illumination by the light source.
  • The display unit 14A displays an image of the inside of the body based on the image data acquired by the imaging unit 12A. The display unit 14A can also display an operation guide superimposed on the examination image; for example, the display indicates the vicinity of the site (affected area) Re. Furthermore, a menu screen for operation and display of the endoscope 10A can also be displayed.
  • The ID management unit 15A performs ID management for identifying subjects (patients) when a doctor performs an examination using the endoscope 10A. For example, a doctor may input the subject's (patient's) ID through the operation unit 17A of the endoscope 10A. The ID management unit 15A may also associate an ID with the image data acquired by the imaging unit 12A.
  • The recording unit 16A has an electrically rewritable nonvolatile memory, and records adjustment values for operating the endoscope 10A, programs used in the control unit 11A, and the like. The image data acquired by the imaging unit 12A is also recorded there.
  • The operation unit 17A includes various operation members, such as an operation member for bending the distal end of the endoscope 10A in an arbitrary direction, an operation member for the light source, an operation member for capturing an image, and an operation member for a treatment tool.
  • The subject's (patient's) ID may be input through the operation unit 17A.
  • An inference model is set in the inference engine 19A.
  • This inference model may consist of a variety of inference models, such as an inference model for inferring a possible diseased part such as a tumor or polyp in an image acquired by the imaging unit 12A, and an inference model for an operation guide for operating the endoscope 10A.
  • The inference engine 19A may be configured by hardware, by software (a program), or by a combination of hardware and software.
  • The auxiliary device 30 is provided in an in-hospital system, server, or the like.
  • The in-hospital system is connected by wired or wireless communication to devices such as endoscopes, personal computers (PCs), and mobile devices such as smartphones in one or more hospitals.
  • The server is connected to equipment such as endoscopes, the in-hospital system, and the like through a communication network such as the Internet.
  • The endoscope 10A may be connected to the auxiliary device 30 within the in-hospital system, directly connected to the auxiliary device 30 within the server, or connected to the auxiliary device 30 through the in-hospital system.
  • The auxiliary device 30 has a control unit 31, an input unit 32, an ID management unit 33, a communication unit 34, a recording unit 35, an inference engine 36 in which an inference model is set, and an IC confirmation unit 37.
  • The control unit 31 is composed of one or more processors having a processing device such as a CPU (Central Processing Unit) and a memory storing a program (the program may be recorded in the recording unit 35); it executes the program and controls each part in the auxiliary device 30.
  • When the same subject (patient) undergoes the same or a similar examination using the second endoscope 10B (or the endoscope 10A), the control unit 31 performs overall control within the auxiliary device 30 so as to output an operation guide for searching for the affected area of the subject (patient).
  • The input unit 32 receives the image P1 acquired by the imaging unit 12A.
  • The image P1 input by the input unit 32 is output to the inference engine 36, and target information corresponding to the position of the site (affected area) Re is output using the inference model.
  • The ID management unit 33 manages the IDs of subjects (patients). As described above, when a doctor performs an examination using the endoscope 10A, the ID of the subject (patient) is input, and the image P1 associated with this ID is transmitted from the endoscope 10A. The ID management unit 33 associates the ID attached to this image P1 with the ID information of the subject (patient) recorded in the recording unit 35 or the like.
  • The communication unit 34 has a communication circuit and exchanges information with the endoscope 10A and the second endoscope 10B. Although communication units are not shown for the endoscope 10A and the second endoscope 10B, these endoscopes also have a communication unit like the auxiliary device 30.
  • The communication unit 34 may also communicate with other servers or hospital systems; in this case, information can be collected from and provided to other servers or hospital systems.
  • The target information Ita generated by the inference engine 36 and the reexamination auxiliary image group P4 generated by the auxiliary device 30 are transmitted through the communication unit 34 to the second endoscope 10B.
  • In order to re-observe the position of the affected area when the specific subject is reexamined with the second endoscope, the communication unit 34 functions as a transmission unit that transmits the target image information converted for the second endoscope by the image conversion inference model to the second endoscope (see, for example, S1 in FIG. 4 and S59 in FIG. 5).
  • The recording unit 35 has an electrically rewritable nonvolatile memory, and can record the image data input by the input unit 32 from the imaging unit 12A, information such as the subject's (patient's) profile, examination history, and examination results, and the programs and the like used in the control unit 31. Further, when a subject (patient) undergoes an examination using the endoscope 10A or the second endoscope 10B, the recording unit 35 records image data based on the image P1 at that time, and also records the target information Ita output by inference in the inference engine 36.
  • The inference engine 36 may be configured by hardware, by software (a program), or by a combination of hardware and software.
  • An inference model is set in the inference engine 36.
  • In this embodiment, the inference engine 36 is provided in the auxiliary device 30, but it may instead be provided in a device such as the endoscope so that inference is performed within the device.
  • The inference engine 36 equipped with the inference model performs inference and outputs the target information Ita corresponding to the site (affected area) Re from the output layer.
  • This target information Ita is information indicating the same position as the site (affected area) Re shown in image P16 (see image P1 in FIG. 2(a)), which is obtained when the endoscope 10A is inserted deep into the large intestine and then pulled out. The target information Ita is information for finding the site (affected area) Re while inserting the endoscope toward the back of the large intestine, without the endoscope having to be inserted all the way into the large intestine and then pulled out. Generation of the inference model and inference performed using it will be described later with reference to FIG. 3.
  • The inference engine 36 creates a similar image inference model trained, based on an affected area image (for example, see image P1) obtained by an examination of a specific subject with the first endoscope, with the same or similar affected area images obtained by a plurality of different observation methods (see S59 in FIG. 5 and FIG. 6A).
  • The similar image inference model described above performs inference by canceling changes in the affected area image due to deformation of the observation target organ (see, for example, FIG. 2 and S59 in FIG. 5). That is, even if the observation target organ such as the large intestine is deformed by an insertion operation, withdrawal operation, air supply operation, water supply operation, suction operation, operation speed change, posture change, or the like, inference is performed by the similar image inference model so as to remove (cancel) this deformation. As the similar image inference model, an inference model learned from the correspondence between images of the observation target organ deformed by insertion and withdrawal of the colonoscope, air supply, water supply, suction, operation speed changes, and posture changes is used (see, for example, S59 in FIG. 5).
  • The site (affected area) Re is deformed differently when the colonoscope is inserted and when it is withdrawn, so the two views look different.
  • By using such an inference model, the position of the site (affected area) Re can be inferred at the time of insertion.
  • Similarly, inference can be made using images in corresponding states, such as when air is being supplied and when it is not, or when water is being supplied and when it is not. A sketch of one way to realize such a deformation-canceling model is shown below.
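  • As a minimal illustrative sketch (an assumption, not the publication's design), such a deformation-canceling similar image inference model could be realized as a shared-weight embedding network trained contrastively, so that insertion-time and withdrawal-time views of the same site map to nearby vectors while different sites map apart. The architecture and loss below are stand-ins.

```python
# Sketch: embedding network plus contrastive loss over insertion/withdrawal pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiteEmbedder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim))

    def forward(self, x):
        # L2-normalized embedding, so distances reflect appearance only
        return F.normalize(self.features(x), dim=1)

def contrastive_loss(z_insert, z_withdraw, same_site, margin=0.5):
    """same_site: 1.0 for views of the same site under different deformation
    (insertion vs. withdrawal, air on vs. off), 0.0 for different sites."""
    d2 = (z_insert - z_withdraw).pow(2).sum(dim=1)
    return (same_site * d2 +
            (1.0 - same_site) * F.relu(margin - d2.sqrt()).pow(2)).mean()
```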
  • The similar image inference model also infers changes in the continuous images of the observation target organ obtained during examination with the first endoscope (see, for example, S17 in FIG. 8 and FIG. 11).
  • In this embodiment, the organ to be observed is the large intestine or the small intestine; that is, the organ to be observed is not limited to the large intestine and may be the small intestine.
  • The IC confirmation unit 37 inputs, outputs, and records information for effectively utilizing informed consent (IC), that is, "agreement after receiving an explanation and understanding it".
  • Informed consent here refers to the doctor having fully explained the disease, the condition, what is happening in the body, the contents of the examination and treatment, and the medicine prescribed. The IC confirmation unit 37 can record the patient's understanding, consent, reaction, and response to this explanation, leave evidence showing consent, store confirmed documents, and input and output records such as consent records.
  • The endoscopes 10A and 10B may be provided with voice reproduction and voice recognition functions so as to cooperate with these functions. This embodiment is characterized in that endoscope information is handed over, and efficiency can be improved if this consent information can also be handed over.
  • The IC confirmation unit 37 can acquire and manage the consent required when using an endoscopic image as teacher data, and if the same consent can be obtained for the next examination, the images can also be used for research on the progress of specific cases.
  • The second endoscope 10B shown in FIG. 1B is the endoscope used for the second and subsequent examinations of a subject (patient) who has undergone an examination using the endoscope 10A.
  • The second endoscope 10B may be the same model as the endoscope 10A, or may even be the same device as the endoscope 10A, but is shown in this embodiment as an endoscope of a different model.
  • The auxiliary device 30 outputs the reexamination auxiliary image group P4 to the second endoscope 10B.
  • The reexamination auxiliary image group P4 is a time-series set of images, from the image P41 at the position immediately after insertion into the anus to the image P43 corresponding to the position of the site (affected area) Re, used when reexamination is performed with the second endoscope 10B. The reexamination auxiliary image group P4 may be created based on the images P11 to P13 among the images P1 acquired in the first examination.
  • The image P43 in the reexamination auxiliary image group P4 includes the target information Ita, which is the inference result of the inference engine 36.
  • The second endoscope 10B has a control section 11B, an imaging section 12B, a light source section 13B, a display section 14B, an ID management section 15B, a recording section 16B, an operation section 17B, and an inference engine 19B. These are the same as the control section 11A, imaging section 12A, light source section 13A, display section 14A, ID management section 15A, recording section 16A, operation section 17A, and inference engine 19A of the endoscope 10A; only the additional configurations and functions of the second endoscope 10B are described below, and detailed descriptions of the common parts are omitted.
  • The control unit 11B is composed of one or more processors having a processing device such as a CPU (Central Processing Unit) and a memory storing a program (the program may be stored in the recording unit 16B); it executes the program and controls each part in the second endoscope 10B.
  • The control unit 11B performs various controls when the second endoscope 10B reexamines a subject (patient).
  • The control unit 11B uses the reexamination image acquired by the imaging unit 12B and the reexamination auxiliary image group P4 output from the auxiliary device 30 to create an operation guide for reaching the site (affected area) Re and/or to determine whether or not the tip is near the site (affected area) Re. To create the operation guide and to determine whether or not the distal end of the second endoscope 10B is in the vicinity of the site (affected area) Re, inference may be performed by the inference engine 19B in which an inference model is set, or similar image determination may be performed by a similar image determination unit 22B, which will be described later.
  • The operation guide created by the control unit 11B may be displayed on the display unit 14B, and the display unit 14B may display that the distal end of the endoscope is in the vicinity of the site (affected area) Re.
  • In order to re-observe the position of the affected area when the specific subject is reexamined with the second endoscope, the control unit 11B functions as a re-insertion auxiliary information generating section that generates insertion auxiliary information for guiding to the reconfirmation position according to the result obtained by inputting the reexamination image to the similar image inference unit (for example, the inference engine 19B) (see, for example, S15 in FIG. 4).
  • An inference model is set in the inference engine 19B.
  • The inference engine 19B may be configured by hardware, by software (a program), or by a combination of hardware and software.
  • The inference model is one created by the inference engine 36 in the auxiliary device 30, and is created by learning using images obtained when the endoscope is inserted into and withdrawn from the body (for example, images P2 and P3). That is, the inference model is created using affected area images obtained by a plurality of different observation methods (for example, observation during insertion and observation during withdrawal).
  • The inference model set in the inference engine 19B functions as a similar image inference model learned, based on the affected area image obtained by the examination of the specific subject with the first endoscope, from the same or similar affected area images obtained by a plurality of different observation methods.
  • The inference engine 19B functions as a similar image inference unit having the similar image inference model described above.
  • The inference engine 19B also functions as a similar image inference unit having an image conversion inference model trained with the same or similar affected area images obtained by a plurality of different observation methods.
  • The second endoscope 10B further has a signal output section 21B and a similar image determination section 22B.
  • When the distal end reaches the vicinity of the site (affected area) Re, the signal output section 21B outputs a signal indicating that fact. For example, by emitting light from the light source unit 13B, the emitted light can be seen from outside the wall of the gastrointestinal tract, thereby informing the doctor or the like of the position (for example, in the direction of arrow H in FIG. 10(b)).
  • The similar image determination unit 22B compares the image data acquired by the imaging unit 12B with the reexamination auxiliary image group P4 to determine the degree of similarity. Since there are various methods for determining the degree of similarity between images, a method suitable for the present embodiment may be selected from among them as appropriate.
  • The similar image determination unit 22B first determines whether or not there is an image similar to the image P41. After an image similar to the image P41 appears, the similar image determination unit 22B determines whether or not there is an image similar to the image P43 that includes the target information Ita. A sketch of this two-stage matching is shown below.
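  • As a minimal illustrative sketch (an assumption, not the publication's implementation), the two-stage matching of the similar image determination unit 22B can be written as follows. `similarity` stands for any image similarity measure (a histogram comparison, an embedding distance, or the inference model above), and the threshold is an arbitrary example value.

```python
# Sketch: two-stage matching against the reexamination auxiliary image group P4.
def guide_reexamination(live_frames, image_p41, image_p43, similarity, thresh=0.8):
    stage = "looking_for_P41"
    for frame in live_frames:                     # frames from imaging unit 12B
        if stage == "looking_for_P41" and similarity(frame, image_p41) > thresh:
            stage = "looking_for_P43"
            yield "entrance position confirmed"
        elif stage == "looking_for_P43" and similarity(frame, image_p43) > thresh:
            yield "near site (affected area) Re"  # shown on display unit 14B
            break
```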
  • When an image similar to the image P43 is found, the site (affected area) Re is present near this position, so the display unit 14B displays that the tip is near the site (affected area) Re.
  • The doctor can then find the site (affected area) Re by carefully examining this vicinity. If it cannot be found immediately, air may be supplied, or the endoscope may be pulled out slightly to create a space for observation. If the site (affected area) Re is found, the progress of the affected area and the like can be observed; depending on the condition of the affected area, treatment such as surgery may be required.
  • Alternatively, an inference model for similar image determination may be set in the inference engine 19B, and the determination may be performed by the inference engine 19B.
  • In that case, the inference engine 19B functions as a similar image estimation unit having a similarity estimation model for estimating the similarity of images based on endoscopic images.
  • The control unit 11B, the similar image determination unit 22B, or the inference engine 19B functions as a similar image determination section that, when the specific subject is reexamined using the second endoscope, inputs the image of the lesion found in the first endoscopy and the endoscopic image of the second endoscopic examination to the similar image estimation unit to estimate the similarity of the images, and determines whether or not the similarity is higher than a predetermined level.
  • The similar image inference model described above makes inferences by canceling changes in the image of the affected area due to deformation of the large intestine.
  • As the similar image inference model, an inference model is used that has been learned from the correspondence between images of the observation target organ deformed by any of insertion and withdrawal of the endoscope, air supply, water supply, suction, changes in operation speed, changes in posture, and the like. As described above, affected areas (sites) such as tumors and polyps are deformed by insertion and withdrawal of the colonoscope into and out of the large intestine, air supply, water supply, suction, operation speed changes, and posture changes.
  • By learning from pairs of images during deformation and non-deformation, such as during insertion and during withdrawal, it is possible to generate an inference model that removes (cancels) this deformation.
  • By learning with teacher data in which the image of the specific object photographed by the first observation method is associated, as annotation data, with the image of the specific object photographed by a second observation method different from the first observation method, an inference model can be made that can infer the object image photographed by the second observation method when the object image photographed by the first observation method is input.
  • By inputting an image into this inference model and performing inference, it is possible to transform (improve) the input image into an image similar to one obtained when observation was easy, even in a situation where observation is difficult.
  • By displaying this converted image, visibility can be improved, and by using the improved image data, the performance of functions for other purposes (for example, guides) can also be improved.
  • Here, the first observation method and the second observation method are, for example, an observation method by photographing when the imaging unit at the tip of the endoscope is moved in the direction of insertion into the subject, and an observation method by photographing when the imaging unit at the tip of the endoscope is moved in the withdrawal direction.
  • The first and second observation methods may also be observation methods by photographing under different control of bending, insertion, or withdrawal of the endoscope when the imaging unit at the tip of the endoscope is moved in the direction of insertion into or withdrawal from the object to be inspected.
  • The configuration of the inference engine 19B is similar to that of the inference engine 19A, but may include the inference model for detection of the site (affected area) Re generated in the auxiliary device 30.
  • When the image data acquired by the imaging unit 12B is input to the inference engine 19B, it is inferred whether or not the data is a frame image corresponding to the target information Ita.
  • When this frame image is found, a message to that effect is output from the output layer, and the fact that the tip is near the site (affected area) Re is displayed on the display section 14B.
  • In this embodiment, the inference model is generated in the inference engine 36 in the auxiliary device 30, but it may be generated in a learning device outside the auxiliary device 30; alternatively, a learning device for generating the inference model may be provided in the auxiliary device 30 and the inference model generated in that learning device.
  • First, inspection images P2 and P3 for inference model generation are collected. Although only two inspection images are shown in FIGS. 1A and 3A, a large number of inspection images are actually collected.
  • The images P2 and P3 for inference model generation are obtained by inserting the endoscope through the anus all the way to the back and then pulling it out, that is, on both the outward and return trips (see FIG. 2(a), P21 to P27 and P31 to P37).
  • An expert such as a doctor annotates the pairs of images having the site (affected area) Re (P23 and P26, P33 and P36) among these examination images to create teacher data.
  • The created teacher data is input to the input layer of the inference engine (learning device) 36, and the intermediate layer is weighted so that the images in which the site (affected area) Re is present when the endoscope is inserted to the back (for example, images P23 and P33) can be output from the output layer as the target information Ita. With this target information Ita, it is possible to find the frame relating to a problematic site (affected area) in the images taken while moving inward.
  • In this way, a neural network with weighted intermediate layers is generated as the inference model. A sketch of such supervised learning is shown below.
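  • As a minimal illustrative sketch (an assumption, not the publication's training procedure), the learning described above can be written as an ordinary supervised loop over the annotated pairs, in which backpropagation adjusts the intermediate-layer weights. The model and optimizer are stand-ins.

```python
# Sketch: supervised training so that insertion-time frames showing the site
# Re (e.g., P23, P33) are recognized as target information Ita.
import torch
import torch.nn as nn

def train_target_detector(model, pairs, epochs=10, lr=1e-4):
    """pairs: iterable of (insertion_frame_tensor, contains_site_label)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for frame, label in pairs:
            opt.zero_grad()
            logit = model(frame.unsqueeze(0)).squeeze()
            loss = loss_fn(logit, label)
            loss.backward()   # weights the intermediate layers
            opt.step()
    return model
```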
  • In this embodiment, the first observation method is observation by photographing when the imaging unit at the distal end of the endoscope is moved in the direction of insertion into the subject, and the second observation method is observation by photographing when the imaging unit at the distal end of the endoscope is moved in the withdrawal direction (the first and second may be reversed).
  • By learning with teacher data in which the specific object image photographed by the first observation method is associated, as annotation data, with the image of the specific object photographed by the different second observation method, an inference model capable of inferring the object image photographed by the second observation method can be generated.
  • This inference model can infer an image in the withdrawal direction based on an image in the insertion direction, or infer an image in the insertion direction based on an image in the withdrawal direction.
• Using the similar image inference model created by the inference model creation method described above, an image obtained by one observation method (for example, image acquisition in the withdrawal direction) can be inferred from an image obtained by the other (for example, the insertion-direction image can be inferred as the target image).
• That is, from the image at the time of insertion, it is possible to infer how the same scene should appear at the time of withdrawal. Therefore, as described above, when a specific subject is examined with the first endoscope, the specific object image obtained by imaging the specific object at withdrawal, for example, may be used as the target image to guide the operator to the location where the object will be found. In other words, when the specific subject is reexamined using the second endoscope, guidance can be provided so that the position of the object (affected area) can be observed again simply by inserting the second endoscope into the body.
• In this case, the image at the time of reexamination is input to the similar image inference model created by the inference model creation method described above to obtain an inferred image; it suffices to compare this inferred image with the target and generate insertion auxiliary information for guiding to the reconfirmation position.
• An inference model may thus be created using images obtained by an observation method in which the imaging unit at the distal end of the endoscope is moved in the insertion direction (for example, the first observation method) and images obtained by an observation method in which it is moved in the withdrawal direction (for example, the second observation method).
• Here, "observation method by photographing" means, for example, obtaining image data with the imaging unit, displaying an image on the display unit based on that image data, and having a doctor or the like observe the image.
• A step is provided for inferring, based on the time-series change of a time-series image frame group, the time-series change of the image frame group as it would appear when photographed by the second observation method. Further, a step is provided for outputting an endoscope operation guide, up to the photographing of the object image, so that the image change during acquisition conforms to the reference image change that would be obtained by the second observation method.
• This realizes an endoscope operation guide method that enables a simple examination following on from the previous examination.
• Operation information Iop from the operation unit may also be collected when acquiring the inspection images P2 and P3.
• This information may be collected in association with each piece of image data, or date and time information may be recorded in both the image data and the operation information Iop so that they can be matched later.
• In this way, an inference model for operation guidance can be generated alongside the image inference model.
• Deep learning is a multilayered form of "machine learning" using neural networks.
• A typical example is the "forward propagation neural network," which sends information from front to back to make a decision.
• In its simplest form, a forward propagation neural network has three layers: an input layer composed of N1 neurons, an intermediate layer composed of N2 neurons given by parameters, and an output layer composed of N3 neurons corresponding to the number of classes to be discriminated.
• The neurons of the input and intermediate layers, and of the intermediate and output layers, are connected by connection weights, and bias values are added to the intermediate and output layers, so that logic gates can easily be formed.
• A network may have three layers for simple discrimination, but by increasing the number of intermediate layers it becomes possible to learn how multiple feature values are combined in the process of machine learning; in recent years, networks of 9 to 152 layers have become practical from the viewpoint of learning time, judgment accuracy, and energy consumption. A minimal sketch of the simple three-layer case follows.
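• As an illustration only, the three-layer network just described can be written out directly; the sizes N1, N2, N3, the ReLU activation, and the random weights below are assumptions for demonstration.

```python
# Minimal NumPy sketch of a three-layer forward propagation network:
# input layer (N1), intermediate layer (N2), output layer (N3 classes).
import numpy as np

N1, N2, N3 = 4, 8, 2  # neuron counts; illustrative values

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(N1, N2)), np.zeros(N2)  # input->intermediate weights, bias
W2, b2 = rng.normal(size=(N2, N3)), np.zeros(N3)  # intermediate->output weights, bias

def forward(x):
    hidden = np.maximum(0.0, x @ W1 + b1)  # connection weights plus bias, ReLU
    logits = hidden @ W2 + b2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                 # probability over the N3 classes

print(forward(np.array([0.2, 0.5, 0.1, 0.9])))
```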
• A process called "convolution," which compresses the feature amounts of an image, may also be performed, using a "convolutional neural network," which operates with minimal processing and is strong at pattern recognition.
• Alternatively, a "recurrent neural network" (fully connected recurrent neural network), in which information flows in both directions, may be used; it can handle more complicated information and supports analysis of information whose meaning changes depending on order.
• To implement these techniques, dedicated hardware such as an NPU (neural network processing unit) may be used.
• Machine learning methods other than neural networks may also be used, such as support vector machines and support vector regression.
• Here, learning involves calculating classifier weights, filter coefficients, and offsets, and there is also a method that uses logistic regression processing. For a machine to judge something, it must first be taught how to judge.
• In this embodiment, a method of deriving image determinations by machine learning is adopted, but a rule-based method that applies rules acquired by humans through empirical rules and heuristics may also be used.
• Next, the generated inference model is set in the inference engine 36 as shown in FIG. 3(b).
• The inference engine 36 here may be different from the inference engine used to create the inference model.
• An image P5 at the time of reexamination or surgery is input to the input layer of the inference engine 36.
• The inference engine 36 searches by inference for an image showing the site (affected part) Re, and outputs the site information Iout; this information Iout indicates the target position for reexamination or surgery.
• In addition, a reexamination auxiliary image group P4, including target information Ita regarding the position corresponding to the site (affected part) Re, is created. Therefore, when an endoscope is used a second time on the same patient, the site (affected part) Re can easily be found while moving the endoscope toward the back by using the reexamination auxiliary image group P4, as sketched below.
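• How each live frame would be checked against the target is not spelled out in code in the patent; the following hedged sketch assumes a model that emits one logit per frame, with the threshold and the tiny stand-in scorer being hypothetical.

```python
# Hedged sketch: score each reexamination frame and flag when the site (affected
# area) Re appears close. The scorer, threshold, and tensors are stand-ins.
import torch

def guide_reexamination(model, frames, threshold=0.8):
    """Yield (frame_index, score, near_site) for each incoming frame."""
    model.eval()
    with torch.no_grad():
        for i, frame in enumerate(frames):
            score = torch.sigmoid(model(frame.unsqueeze(0))).item()
            yield i, score, score >= threshold  # True -> show "site Re is near"

scorer = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1))
frames = torch.rand(5, 3, 64, 64)  # stand-in live frames
for idx, score, near in guide_reexamination(scorer, frames):
    print(idx, round(score, 3), "site Re is near" if near else "")
```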
• An examination image of another person who underwent an examination in the past, that is, an examination image obtained when the colonoscope was inserted all the way, may also be used. By creating an inference model for inferring the position of the site (affected part) Re from such images, the position of the site (affected part) Re can be inferred from the current subject's (patient's) inspection image at insertion in the same way as from the image at withdrawal.
• That is, by creating an inference model that infers withdrawal-direction images from insertion-direction images using the inspection images of other people who underwent examinations in the past, it becomes possible to infer the image at withdrawal of a different person (the person being examined this time) simply by inputting that person's image at insertion into the inference model. An inference model may also be created by transfer learning using the test images of other people examined in the past. By inputting the first insertion image, it is possible to discover where the site (affected part) is.
• This flow is realized by the control units 11A and 11B in the endoscopes 10A and 10B controlling each unit in the endoscopes based on the programs stored in the recording units 16A and 16B.
  • This flow includes both the first examination and the second and subsequent examinations for the same subject (patient).
• The endoscope 10A and the second endoscope 10B can be connected through a communication unit to the auxiliary device 30 provided in an in-hospital system, server, or the like. In this step, the endoscope 10A or the second endoscope 10B is set to communicate with the auxiliary device 30 so that they can cooperate and share information.
• Next, a doctor inputs information on the person to be examined (examinee/patient) with the endoscope 10A or the second endoscope 10B, such as name, ID, and examination type, using the operation unit 17B or the like.
• Next, preparations and settings for the examination are made (S5).
• In this step, a doctor or the like prepares and configures the examination according to the examination type; for example, preparations such as readying the endoscope for the examination and anesthetizing the subject (patient) are performed.
• This handover information makes use of the fact that the site (affected area) Re can easily be found if the large intestine is observed while the endoscope is withdrawn after having been inserted all the way in the first examination; the position of the site (affected area) Re found during that observation can then be used to find the same position when the endoscope is next inserted toward the deep part of the large intestine.
• Based on this handover information, the auxiliary device 30 creates the reexamination auxiliary image group P4. In step S7, it is determined, based on the ID information of the subject (patient), whether the subject has already undergone the first examination and whether handover information is recorded in the recording unit 35.
• At least part of the continuous images acquired during the examination performed at the first timing, which includes the position change operation of the endoscope sensor unit up to the specific target, is acquired as guide information for generating a guide for changing the position of the endoscope during an examination at a second timing after the first timing, which likewise includes changing the position of the endoscope sensor unit up to the specific object.
• That is, at least part of the continuous images acquired at the first timing is stored as handover information and used as guide information at the second timing.
• The guide information is determined based on position change information obtained by cross-referencing the image obtained during insertion of the endoscope at the first timing (in which, for example, the movement along the tube and changes in direction are recorded) and the image obtained during withdrawal (in which, for example, changes in state and orientation while being pulled out along the tube are recorded). By making a specific object detected in one image detectable in the other, the specific object can easily be detected during either insertion or withdrawal in the endoscopic examination at the second timing.
• The information for presenting such a guide becomes the handover information, which can take various forms according to the specifications of the guide.
• For example, target image information (an image conveying, in the manner of face detection, that the specific object is here), or navigation information from insertion up to the specific site, may serve as the handover information.
• The latter may simply be information on how far the endoscope has been inserted, or information that can guide how to bend the endoscope during insertion and at what speed to insert it. In either case, the timing may be matched against the first timing.
• This handover information may be processed into a form that can present the guide as is, or only the information on which the guide is based may be output, leaving generation of the guide itself to the device used at the second timing.
• The latter type of handover information is preferable when the guide is provided by comparison with the operation at the second timing.
• If the result of determination in step S7 is that handover information exists, the handover information is acquired and used as a reference when guiding (S9).
• In this step, the handover information recorded in the recording unit 35 is read out so that it can be used when performing guidance during the examination (see S15).
• If the handover information has been acquired in step S9, or if the result of determination in step S7 is that there is no handover information, the examination is then started, images are recorded, timekeeping is started, and information is acquired (S11).
• In this step, the doctor starts the endoscopy, and the image data acquired by the imaging units 12A and 12B is recorded in the recording units 16A and 16B. Timekeeping is also started so that time information after the start of the examination can be acquired, and the acquired time information is associated with the image data. Furthermore, the operation information Iop generated when the doctor operates the operation units 17A and 17B, and other information related to the examination, is acquired.
• The operation information Iop may be associated with the acquired images or with the timekeeping information; an illustrative record layout is sketched below.
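• The patent leaves the record format open; one illustrative layout, with assumed field names, that ties each frame to timekeeping information, operation information Iop, and later event indices might look like this:

```python
# Illustrative record layout (field names are assumptions, not the patent's).
from dataclasses import dataclass, field

@dataclass
class FrameRecord:
    frame_index: int
    elapsed_ms: int               # timekeeping started at examination start
    image_path: str               # image data kept in recording unit 16A/16B
    operation: str | None = None  # e.g. "angle_up", "air_supply" (Iop), if any
    events: list[str] = field(default_factory=list)  # important events (see S17)

records = [
    FrameRecord(0, 0, "exam/p51a.png", operation="insert"),
    FrameRecord(1, 420, "exam/p52a.png", operation="angle_left"),
]
```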
• In this step, a diagnostic guide and diagnostic assistance determination are performed while the doctor makes an overall diagnosis using the endoscopes 10A and 10B.
• The operation method of the endoscopes 10A and 10B may also be displayed.
  • Inferences using the inference engines 19A and 19B may be used to perform diagnostic guidance and diagnostic assistance determination.
• This data is recorded in the recording units 16A and 16B.
• In this step, the similar image determination unit 22B determines whether the current image is similar to the images leading up to the site (affected area) Re provided as handover information; that is, it determines whether the image showing the site (affected part) Re and the image acquired by the imaging unit 12B are similar. Further, when operation information from the operation unit 17A is included in the handover information, a guide display may be performed based on that operation information, and it may be determined whether the endoscope is in the vicinity of the site (affected area) Re.
• The guide may indicate the operation for obtaining the next image, may prompt a search for an image similar to a given one, or may include both.
• For example, the corresponding operation member of the operation unit 17B may be shown on the display unit 14B, the direction of movement illustrated, and how to hold the endoscope and the state of insertion shown by illustrations and animations; concrete information on how to handle the endoscope may be displayed or given by voice.
• The inference engine 19B may also use the inference model to provide guidance based on the handover information.
• In this case, the site (affected part) Re is found by inference; that is, while moving the second endoscope 10B toward the back of the large intestine, the doctor can find the site (affected area) Re while watching the inference-based guide.
• Next, the image and timing are recorded (S17).
• In this step, it is determined whether the doctor performed a characteristic action or operation, that is, whether an important event occurred; if it is an important event, the image and timing (timekeeping information) are recorded in the recording units 16A and 16B.
• Important events include, for example, operations such as air supply, suction, and water injection.
• When observation is performed carefully and repeatedly while moving the distal end of the endoscope, such as when an affected area like a tumor is found, this too can be regarded as an important event. By assigning the important event as an index, it becomes a cue when searching for the image later.
• Whether an event is important may be determined based on the operation states of the operation units 17A and 17B, the images acquired from the imaging units 12A and 12B, and the like. Events such as whether the endoscope was fully inserted or being withdrawn are also recorded (see S63 and S67 in FIG. 6A). A sketch of such event tagging follows.
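• One way such tagging could work, assuming operation names and a simple rule set that are purely illustrative, is sketched here:

```python
# Hedged sketch of important-event tagging: operations detected from the
# operation unit state are attached to the frame as an index for later search.
IMPORTANT_OPERATIONS = {"air_supply", "suction", "water_injection"}

def tag_events(records):
    """Attach an event index to frames whose operation marks an important event."""
    for rec in records:
        if rec.get("operation") in IMPORTANT_OPERATIONS:
            rec.setdefault("events", []).append("op:" + rec["operation"])
    return records

frames = [{"frame_index": 7, "operation": "air_supply", "phase": "insertion"},
          {"frame_index": 8, "operation": None, "phase": "withdrawal"}]
print(tag_events(frames))  # frame 7 gains the event index "op:air_supply"
```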
• Next, it is determined whether or not the examination has ended (S23).
• When ending the examination, an end operation is performed on the operation units 17A and 17B of the endoscopes 10A and 10B.
• In this step, the determination is made based on whether or not this end operation has been performed.
• If the examination has not ended, the process returns to step S13 and the examination continues.
• If the result of determination in step S23 is that the examination has ended, images and other information are transferred (S25).
• In this step, the data recorded in step S13, the images and timing recorded at important events in step S17, and other information are transferred to the recording unit 35 in the auxiliary device 30.
• When this transfer process ends, this flow ends.
• In this way, during the first examination, images are recorded in the recording unit 16A in the endoscope 10A (see S11 and S17).
• The recorded images are transferred to the recording unit 35 in the auxiliary device 30 after the examination is completed.
• At the second and subsequent examinations, this information is transferred to the second endoscope 10B as handover information (see S7 and S9), which the second endoscope 10B receives.
• The information is used to guide to the position of the site (affected area) Re (see S15); the doctor can therefore easily find the position while inserting the endoscope into the large intestine and moving it toward the back.
• At the first timing, an examination including a position change operation of the endoscope sensor unit up to the specific object is performed, and at least part of the continuous images during this examination is acquired.
• By cross-referencing the image obtained when the endoscope is inserted at the first timing (for example, an image recording the progress along the tube and changes in direction) and the image obtained when it is withdrawn (for example, images recording changes in state and orientation while being pulled out along the tube), a specific object detected in one of the images is made detectable in the other.
• In the endoscopy at the second timing, it then becomes possible to easily detect the specific object whether the endoscope is being inserted or withdrawn. Images before and after detection of the specific object are assumed to form part of the continuous images, and the withdrawal images record the process from the position of the specific object until the endoscope is pulled out. Therefore, by associating the information on these image changes with the images at insertion, the image change from insertion up to the specific object can be judged.
• In step S11, when the examination is started, the moving images (continuous images) during the examination are recorded. At least part of these continuous images (not limited to raw data; they may be processed) is recorded as handover information.
• This handover information is used as guide information at the next and subsequent examinations (the second timing). That is, if handover information is recorded, it is acquired in steps S7 and S9, and guidance is performed using it in step S15.
• Operation information may also be acquired and recorded when the continuous images are acquired, and used as handover information.
• The handover information may further include the confirmation result of informed consent.
• Next, the operation of the auxiliary device 30 provided in the hospital system, a server, the cloud, or the like will be described.
  • This flow is realized by the control unit 31 provided in the auxiliary device 30 controlling each unit in the auxiliary device 30 based on a program stored in the recording unit 35B or the like.
• First, patient information is input (S41).
• The patient information is entered when the patient is registered in the hospital system.
• As patient information, for example, the name, sex, age, patient ID information, medical examination/examination information, and other various information related to the patient are input. If it is not the first visit, the current medical examination/examination information may be added to the past history after the examination is completed.
• Next, the devices are coordinated so that information can be shared, and recording is performed (S43).
• In this step, the various devices used in the hospital (which may comprise a plurality of hospitals) are linked with the auxiliary device 30; that is, communication is performed between each device and the auxiliary device 30 so that information can be exchanged and shared, and the information shared between each device and the auxiliary device 30 can be recorded.
• Next, it is determined whether the doctor's findings or the like have been input (S45).
• During an examination, the doctor first reviews the history of past examinations and care described in the medical record, asks the subject (patient) about symptoms, and then enters the findings into the electronic medical record.
• In this step, the determination may be made based on voice input, image input, or the like connected to the hospital system; when data is entered into the electronic medical record, the determination may be based on whether an input operation has been performed on a PC terminal or the like.
• If the result of determination in step S45 is that there is an input such as the doctor's findings, the input is reflected in the chart (S49).
• In this step, the doctor's findings and the like are reflected in the electronic chart in the hospital system (auxiliary device 30).
• A handwritten document may be read by a scanner or the like, or photographed, and the input contents thereby reflected in the electronic medical chart.
• When the doctor inputs the information on a PC terminal, it is stored in the electronic chart of the hospital system.
• Next, IC (informed consent), treatment, prescription, and the like are performed.
• In the IC (informed consent), the doctor fully explains the contents of the examination and treatment and the medicines to be prescribed, so that the patient or subject can understand and consent to the treatment and examination.
• The doctor then performs treatment, prescription, and the like.
• An examination such as an endoscopy may also be performed at this time.
• In the IC, in order to use the images of patients and subjects for AI development and the like, the use of the images for purposes other than medical treatment is explained to the patients and subjects, and their consent is obtained.
• Next, handover information is set from the findings (S51).
• In this step, the handover information is set based on the doctor's findings, treatment, prescription, and the like.
• In the example described above, the handover information includes the fact that an examination using a colonoscope is to be performed.
• Informed consent (IC) may be obtained from the patient/examinee when setting the handover information, and the necessary IC may be stored in step S51 to simplify the process.
• Next, it is determined whether the medical examination has ended (S53).
• When the medical examination/examination ends, the doctor inputs its completion to the hospital system, and this step makes the determination based on that input.
• If the result of determination in step S53 is that the medical examination has ended, accounting, reservation, prescription guidance, and the like are performed (S55).
• In this step, accounting processing (payment of expenses) and reservation processing are performed, and if the patient needs to take medicine, a prescription is issued.
• A consent document may be sent to a smartphone, identity authentication performed, and the obtained consent electronically recorded; consent may also be obtained when the accounting terminal is operated at the time of payment.
• Next, it is determined whether there is an inference model creation request or an inference request (S57).
• In this step, it is determined whether or not to request the learning device to create an inference model such as that described with reference to FIGS. 2 and 3, that is, an inference model capable of providing operation guidance so that the site (affected area) Re can be located even while the endoscope is being inserted.
• This learning device may be provided in the auxiliary device 30 in the in-hospital system or server, or may be a separate learning device outside the in-hospital system or server in which the auxiliary device 30 is provided.
• If creation is needed, the learning device is requested to create the inference model.
• The inference model may be created in the inference engine 36 within the auxiliary device 30, or its creation may be requested from a learning device provided outside the auxiliary device 30.
• In step S57, it is also determined whether or not to make an inference request.
• For the case where it is determined in step S53 that the medical examination has not been completed and the endoscopy is continuing, it is determined whether or not to perform inference for operation guidance. If, as a result of these determinations, there is neither an inference model creation request nor an inference request, the process returns to step S41.
• If the result of determination in step S57 is that there is an inference model creation request or an inference request, an inference model is created or inference is performed (S59).
• To create an inference model, a learning device such as the inference engine 36 in the auxiliary device 30 creates the model using teacher data created based on the image data recorded in the recording unit 35; if the image data is insufficient, image data may be collected from another server or the like.
• To perform inference, the inference engine 36 in the auxiliary device 30 infers the target information Ita using the image P1 input to the input unit 32. Inference for operation guidance is performed by the inference engine 19B in the second endoscope 10B; if the second endoscope 10B has no inference engine, the auxiliary device 30 may be requested to perform the inference.
• The learning device such as the inference engine 36 is not limited to the inference model for inferring the target information Ita described above; it may also generate an inference model for determining similarity to the reexamination auxiliary image group P4, or an inference model that can infer the position of the site (affected area) Re.
• The inference model created here is transmitted to the second endoscope 10B through the communication unit 34. The detailed operation of inference model creation and inference will be described later with reference to FIG. 6. Once the inference model has been created or the inference performed, the process returns to step S41.
• In this way, handover information is set based on the doctor's findings (see S51).
• Then, an inference model used for operation guidance, as described with reference to FIG. 2B, is created, and the target information Ita is inferred (S59).
• Next, the detailed operation of step S59 (see FIG. 5) will be described.
• This flow is realized by the control unit 31 in the auxiliary device 30, provided in the hospital system, server, or the like, controlling each part of the auxiliary device 30 based on the program stored in the recording unit 35B or the like.
• First, it is determined whether there is an image frame with which an event is recorded in association (S61; see S17 in FIG. 4).
• An event is, for example, the finding of a site (affected part) Re, or the performance of an operation with the endoscope such as air supply or water injection.
• As described above, the site (affected area) Re can easily be found when the endoscope is withdrawn; in such a case, too, the event is associated with the image.
• Conversely, an event such as finding the site (affected part) Re may occur during insertion, and an image corresponding to this event may then be found during withdrawal.
• Events may also occur at times other than insertion and withdrawal while the endoscope is in use; in these cases as well, the event is associated with the image, and the image frame with which an event is associated can be located so that the corresponding image can be inferred. When an event is associated, whether the endoscope was being inserted in the depth direction or pulled out from the depth direction is also recorded.
• If the result of determination in step S61 is that there is an image frame with an event, it is determined whether the event was detected at the time of withdrawal (S63).
• In this step, it is determined from the event information attached to the image frame whether the event was detected in an image frame captured while the endoscope was being pulled out after having been inserted in the depth direction.
• If the result of determination in step S63 is that the event occurred at withdrawal, the corresponding image frame is determined from the insertion process images, annotated, and converted into teacher data (S65). Since an event indicating the site occurred when the endoscope was pulled out, the image corresponding to the image frame in which the event (for example, a site (affected area) Re such as a tumor) was recorded is searched for among the images from the insertion process. Since the image frames are recorded in the recording unit 35, the control unit 31 searches the recorded images for the corresponding frame. As described above, an event is easy to find at withdrawal but may be difficult to find at insertion, and the aim is to create an inference model for discovering the event even in this difficult situation. Therefore, if a corresponding image frame is found among the insertion process images, it is annotated to indicate that it corresponds to the event, and it is used as teacher data.
• If the result of determination in step S63 is that the event was not detected at withdrawal, it is next determined whether the event was detected at insertion (S67).
• In this step, it is determined from the event information attached to the image frame whether the image is one associated with an event among the image frames acquired at insertion, that is, while the endoscope was being inserted in the depth direction.
• If the result of determination in step S67 is that an event was detected at insertion, the corresponding image frame is determined from the withdrawal images, annotated, and converted into teacher data (S69). Since an event indicating the site or the like occurred in the images at insertion, the image corresponding to the image frame in which the event (for example, discovery of a site (affected area) Re such as a tumor) was recorded is searched for among the images from the withdrawal process. Since the image frames are recorded in the recording unit 35, the control unit 31 searches the recorded images for the corresponding frame. This case is the opposite of steps S63 and S65. If a corresponding image frame is found among the withdrawal process images, it is annotated to indicate that it corresponds to the event, and it is used as teacher data.
• If the result of determination in step S67 is that no event was detected at insertion either, another frame in which a similar event should have been recorded is turned into teacher data (S71).
• In this step, teacher data is created for an inference model that finds frames in which a similar event (for example, discovery of a site (affected area) Re such as a tumor) should have been recorded: an image similar to the image to which the event is attached is retrieved, and that image is converted into teacher data. A sketch of the corresponding-frame search used in these steps follows.
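• The matching criterion is not specified in the patent; the following sketch uses mean absolute pixel difference purely as a stand-in for whatever similarity measure is actually employed.

```python
# Hedged sketch of the corresponding-frame search (S65/S69/S71): an event frame
# found in one direction is matched against frames recorded in the other
# direction, and the best match is annotated as teacher data.
import numpy as np

def find_corresponding_frame(event_frame, other_direction_frames):
    """Return (index, frame) of the most similar frame in the other direction."""
    diffs = [np.abs(event_frame - f).mean() for f in other_direction_frames]
    best = int(np.argmin(diffs))
    return best, other_direction_frames[best]

withdrawal_event = np.random.rand(64, 64, 3)             # frame where Re was found
insertion_frames = [np.random.rand(64, 64, 3) for _ in range(30)]
idx, match = find_corresponding_frame(withdrawal_event, insertion_frames)
teacher_sample = {"input": match, "annotation": "site_Re", "source_index": idx}
```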
• Next, an inference model capable of inferring an "event" is created (S73).
• In this step, the inference engine 36 generates an inference model using the teacher data created in steps S65, S69, and S71.
• The general method of generating an inference model is as described above; specifically, for example, when an image at the time of insertion of the endoscope is input, this inference model outputs the image corresponding to the site (affected part) Re that was found at withdrawal.
• After the inference model is generated in step S73, it is next determined whether its reliability is higher than a predetermined value, that is, whether the reliability is OK (S75). This determination is made by inputting images whose inference results are known in advance and checking whether the rate of agreement with those known results exceeds the predetermined value.
• If the result of determination in step S75 is that the reliability is not OK, the teacher data is reselected (S77). A low reliability means the inference model generated in step S73 is incomplete, so the population of teacher data is changed to raise the reliability; for example, teacher data that lowers reliability may be removed and other teacher data added. Besides changing the teacher data, processing such as changing the arrangement and degree of coupling of the intermediate layers of the neural network may also be performed. After the teacher data and the like have been reselected, the process returns to step S63 and generation of the inference model continues. A sketch of this train-evaluate-reselect loop is given below.
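• The loop structure of S73 to S77 can be expressed abstractly as below; the train/evaluate/reselect callables and the 0.9 threshold are placeholders, since the patent fixes neither.

```python
# Hedged sketch of the S73-S77 loop: generate a model, check its reliability on
# images whose answers are known in advance, and reselect teacher data until the
# match rate exceeds a predetermined value.
def build_inference_model(teacher_data, known_inputs, known_answers,
                          train, evaluate, reselect,
                          required_rate=0.9, max_rounds=10):
    for _ in range(max_rounds):
        model = train(teacher_data)                          # S73: generate model
        rate = evaluate(model, known_inputs, known_answers)  # S75: reliability check
        if rate >= required_rate:
            return model                                     # reliability OK -> S79
        teacher_data = reselect(teacher_data, model)         # S77: reselect data
    raise RuntimeError("reliability did not reach the required rate")
```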
• If the result of determination in step S75 is that the reliability is OK, the inference model is transmitted (S79).
• In this step, the inference model is transmitted to the inference engines 19A and 19B of endoscopes such as the endoscope 10A and the second endoscope 10B.
• The inference engines 19A and 19B in which the inference model has been set can easily find the position corresponding to the site (affected area) Re and the like when the images acquired by the imaging units 12A and 12B are input.
• Alternatively, the auxiliary device 30 may retain this inference model and perform inference on images input through the communication unit or the like. After the inference model or the like has been transmitted, the process returns to step S63.
• Next, it is determined whether or not to perform inference (S81). For example, when the endoscope 10A acquires the image P1 and the input unit 32 inputs it, the target information Ita is inferred and output using the image P1; using this target information Ita, the reexamination auxiliary image group P4 is created so that the second endoscope 10B can perform an examination. In this step, it is determined whether the inference engine 36 of the auxiliary device 30 should perform inference such as inferring the target information Ita.
• If the result of determination in step S81 is that inference is to be performed, an image is acquired and the inference result is output (S83).
• The inference is generally as described above: the acquired image is input to the input layer of the inference engine 36.
• The inference engine 36 then outputs the inference result from its output layer.
• Next, the inference result is transmitted (S85).
• In this step, the inference auxiliary image group P4, including the inference result (target information Ita) output from the output layer of the inference engine 36, is transmitted to the second endoscope 10B.
• The second endoscope 10B uses this inference auxiliary image group P4 to find the site (affected area) Re.
• When the inference result has been transmitted in step S85, or if the result of determination in step S81 is that inference is not to be performed, the process returns to step S61.
• In this way, the image P1 acquired by the endoscope 10A is output to the inference engine 36 for inference, making it possible to easily find the site (affected area) Re, which is otherwise difficult to locate during insertion.
• As described above, at the first timing an examination including a position change operation of the endoscope sensor unit up to a specific object is performed, and this flow has an acquisition step (see, for example, S11 in FIG. 4) of acquiring at least part of the continuous images during this examination as guide information for generating a guide for changing the position of the endoscope in an examination performed at a second timing after the first timing, and a guide information determination step (see, for example, S15 in FIG. 4) of determining the guide information based on at least part of the continuous images acquired at the first timing.
• That is, continuous images are acquired while the endoscope sensor unit, including the image pickup device and the like, is inserted into the body and its position is changed. Based on the acquired continuous images, guide information for changing the position of the endoscope can be generated so that the target site can be reached at the next examination (the second timing).
• The above-described guide information is information that enables the specific target to be identified so that, when the endoscope sensor unit is inserted at the second timing, it can be moved to the position where an image corresponding to the specific target can be acquired.
• In other words, it is information that makes it possible to return, at the second and subsequent examinations (the second timing), to the position where the image of the specific object reached during the first examination was acquired.
• The information that enables the specific object to be identified is information for similar image determination inferred using AI (see, for example, image P1 in FIG. 1A).
• That is, the image P1 acquired by the imaging unit 12A is used in inference for similar image determination.
• The guide information described above is determined based on endoscope position change information, which is obtained by cross-referencing the endoscope image information in the insertion direction and in the withdrawal direction obtained at the first timing.
• That is, the position corresponding to the site (affected area) Re that can be found at withdrawal is obtained from the images at insertion by cross-referencing (see, for example, images P13 and P17 in FIG. 1A, and FIGS. 2(a) and 2(b)).
• Handover information based on at least part of the continuous images acquired at the first timing is recorded (see, for example, S11 in FIG. 4), and this handover information includes confirmation results regarding informed consent (see, for example, S49 in FIG. 5).
• As described above, the flow of FIGS. 4 to 6 has an acquisition step for acquiring, during an endoscopy, information that will serve as a guide when the endoscope is next inserted. That is, during the first endoscopic examination, images from insertion to withdrawal are recorded (see, for example, image P1 in FIG. 1A and S11 and S17 in FIG. 4). These images are used when generating an inference model for creating guide information leading to the site (affected area) Re (see S59 in FIG. 5 and FIG. 6).
• The acquisition step described above further acquires operation information regarding the operation of the endoscope during the endoscopy of the specific subject (see image P5 in FIG. 7A and the operation information Iop, described later). The acquisition step also acquires operation information in the first observation method (for example, when inserting the endoscope) and in the second observation method (for example, when withdrawing the endoscope) during the endoscopy.
• There is also an auxiliary information acquisition step that uses at least part of the continuous images and/or the operation information to find the position of the affected part discovered while performing either the first or the second observation method. This auxiliary information acquisition step uses the affected part images and/or operation information acquired by the first and second observation methods during the endoscopy to output, by inference, auxiliary information for finding the affected part position (see, for example, the inference engine 36 in FIG. 1A and S59 in FIG. 5).
• There is further a determination step of determining the position, within the video, of the affected part position frame in which the affected part requiring treatment is photographed (see, for example, the reexamination auxiliary image group P4 in FIG. 1B and S15 in FIG. 4).
• The reinsertion auxiliary information generation step further has a similar image inference step for inferring similar images (see, for example, the reexamination auxiliary image group P4 in FIG. 1B, the inference engine 19B, and S15 in FIG. 4).
• This similar image inference step infers changes in the diseased part image due to deformation of the observed organ (see, for example, the reexamination auxiliary image group P4 in FIG. 1B, the inference engine 19B, and S15 in FIG. 4).
• In the similar image inference step, an inference model is generated that has learned the correspondence between images of the observed organ as deformed by insertion and withdrawal of the endoscope, air supply, water supply, suction, changes in operation speed, and changes in posture (see, for example, the reexamination auxiliary image group P4 in FIG. 1B, the inference engine 19B, S15 in FIG. 4, S59 in FIG. 5, and FIG. 6).
• There is also a step of annotating the characteristics of the specific object image in each of the images obtained by the plurality of observation methods (see, for example, S65, S69, and S71 in FIG. 6A), and a step of learning so that an image corresponding to one annotation site is input and an image corresponding to the other annotation site is output (see, for example, FIG. 3A and S73 in FIG. 6A).
• As described above, the endoscope system according to the first embodiment has a similar image inference unit (for example, the inference engine 19B) that generates a plurality of different images based on an affected area image of the affected area obtained by an examination of a specific subject using the first endoscope.
• The inference model set in the inference engine 19B is generated by learning using affected area images (for example, affected area images P23 and P26 in image P2, and affected area images P33 and P36 in image P3).
• The inference model may be generated by learning using affected area images of the specific subject only, or using affected area images of subjects other than the specific subject as well; in the latter case, a larger number of lesion images can be collected.
• The endoscope system according to the first embodiment also has a reinsertion auxiliary information generation unit (for example, the control unit 11B) that generates insertion auxiliary information guiding to the reconfirmation position according to the result obtained by inputting images to the similar image inference unit (for example, the inference engine 19B).
• Next, a second embodiment of the present invention will be described using FIGS. 7A to 11.
• In the endoscope system according to the first embodiment, attention was paid to the fact that, when an endoscope is inserted into the body for observation, the site (affected part) Re is easy to find while the endoscope is being pulled out from the back but difficult to find while it is being moved toward the back, and the site was made easy to find even during movement toward the back. For this reason, in the first embodiment, images are recorded both during insertion toward the back and during withdrawal from the back, and target information Ita for finding the position corresponding to the site (affected part) Re during insertion toward the back can be output.
• In contrast, in the second embodiment, operation information is recorded in addition to the images when the endoscope is inserted toward the back, and the endoscope system outputs operation information Iopf for finding the position corresponding to the site (affected area) Re when the endoscope is inserted toward the back.
• In the second embodiment as well, information that serves as a guide for the next insertion of the endoscope is acquired during the endoscopy.
• This information may not actually be used in the examination at that time; rather, it is acquired during the endoscopic examination in anticipation of the next one. That is, there is an acquisition step for acquiring information that will serve as a guide at the next insertion.
• In conventional endoscopy, it was sufficient to observe the target site (affected area) Re, and images acquired on the way to that site, if any, were not used; in this embodiment, however, such information is acquired and recorded as a guide for the next examination.
• This endoscope system comprises an endoscope 10A, an auxiliary device 30 provided in an in-hospital system, server, or the like, and a second endoscope 10B.
• In this embodiment as well, the endoscope 10A and the second endoscope 10B are colonoscopes, and a case where a subject (patient) undergoes colonoscopy is taken as an example.
• The endoscope 10A is described as the endoscope used for the subject's (patient's) first examination, and the second endoscope 10B as the endoscope used for the second and subsequent examinations.
• The endoscope 10A and the second endoscope 10B may be of the same model, or may even be the same device, but they are described here as different models.
• The endoscope 10A is used when a doctor observes the inside of the large intestine and performs treatment, surgery, and the like, as in the first embodiment.
• This endoscope 10A has a control section 11A, an imaging section 12A, a light source section 13A, a display section 14A, an ID management section 15A, a recording section 16A, an operation section 17A, and an inference engine 19A; since this configuration is the same as in the first embodiment shown in FIG. 1A, detailed description is omitted.
• In addition to the above sections, the endoscope 10A has an operation determination section 18A.
• The operation determination unit 18A determines how the endoscope 10A is being moved by the doctor's operation of the operation unit 17A, and acquires the operation information Iop.
• An endoscope is provided with an angle knob for bending the distal end portion, and operation members such as a suction button and an air/water supply button.
• The doctor operates the angle knob and the like to move the endoscope toward the back or in the withdrawal direction.
• The operation determination unit 18A determines the state of these operations of the operation unit 17A by the doctor, and acquires the operation information Iop.
• The image P5 is image data acquired by the imaging unit 12A and is transmitted to the input unit 32 of the auxiliary device 30.
• Although the image at withdrawal was also included in FIG. 3, the illustration of the image at withdrawal (pulling out) is omitted here in order to explain information transmission using the information at insertion. This is because the present embodiment not only solves the problem of image acquisition arising from the difference between insertion and withdrawal of the endoscope, but can also support a wide variety of observation methods.
• The image P5 is also time-series data; the image P51 is an image acquired immediately after the endoscope 10A is inserted through the anus, and the image P53 is an image captured by the endoscope 10A thereafter.
• Images P51a to P54a and operation information Iop1 to Iop3 are examples of the images and operation information transmitted from the endoscope 10A to the input unit 32; images P51a to P54a are time-series images acquired as the endoscope 10A moves inward.
• The black part near the center of each image is the cavity of the digestive tract, while the white parts around it are the muscle and folds of the digestive tract wall, which reflect the illumination light and appear white; the black portions beyond the folds are the shadows of the folds.
• The doctor sees the image P51a and operates the operation section 17A so as to move the tip of the endoscope 10A toward the black portion near the center.
• At this time, the operation determination unit 18A determines the doctor's operation state and acquires the operation information Iop1.
• Similarly, the doctor operates the endoscope 10A to move its tip toward the black portion near the center, and the operation information Iop2 at this time is acquired.
• When the image P53a is acquired, the operation information Iop3 for the corresponding operation is acquired, and the image P54a is obtained as a result of that operation.
• In this way, the doctor advances the distal end of the endoscope 10A through the cavity of the gastrointestinal tract, and the operation information Iop1 to Iop3 and the images P51a to P54a at this time are acquired and transmitted to the input unit 32 of the auxiliary device 30, as sketched below.
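• The transmission format is not given in the patent; an illustrative pairing of the frames with the operations performed between them, assuming shared ordering or timestamps, could be:

```python
# Illustrative payload pairing images P51a-P54a with operation information
# Iop1-Iop3 before transmission to the input unit 32 (format is an assumption).
images = ["P51a", "P52a", "P53a", "P54a"]  # frames acquired while advancing
operations = ["Iop1", "Iop2", "Iop3"]      # operation between frame i and i+1

payload = [{"before": images[i], "operation": operations[i], "after": images[i + 1]}
           for i in range(len(operations))]
print(payload)  # [{'before': 'P51a', 'operation': 'Iop1', 'after': 'P52a'}, ...]
```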
• The auxiliary device 30 has a control unit 31, an input unit 32, an ID management unit 33, a communication unit 34, a recording unit 35, an inference engine 36, and an IC confirmation unit 37. Since the configuration of each of these units is the same as in FIG. 1A according to the first embodiment, detailed description is omitted; however, as described above, the input unit 32 inputs the operation information Iop in addition to the images, and the inference engine 36 also uses the operation information Iop when performing inference.
• In this embodiment, the inference model set in the inference engine 36 outputs the operation information Iopf regarding the position of the site (affected area) Re when images and operation information are input. That is, the inference engine 36 outputs the operation information Iopf so that the doctor can easily find the site (affected area) Re when operating the endoscope. Although not shown, the inference engine 36 may also output the target information Ita as in FIG. 1A.
• The auxiliary device 30 outputs the reexamination auxiliary image group P6 to the second endoscope 10B.
• This image group is created based on the image P5 acquired during the examination using the endoscope 10A. The reexamination auxiliary image group P6 is a time-series image sequence, for use when reexamination is performed with the second endoscope 10B, extending from the image P61 at the position immediately after insertion through the anus to the image P63 corresponding to the position of the site (affected part) Re.
• The image P63 in the reexamination auxiliary image group P6 corresponds to the target information Ita, which is the inference result of the inference engine 36.
• In addition, the operation information Iopf for the second endoscope 10B can be displayed together with each of the time-series images P61 to P63 in the reexamination auxiliary image group.
• That is, the reexamination auxiliary image group P6 includes information on the operations to be performed until the site (affected area) Re is reached.
• The second endoscope 10B has a control section 11B, an imaging section 12B, a light source section 13B, an ID management section 15B, a recording section 16B, an inference engine 19B, a signal output section 21B, and a similar image determination section 22B; since the configuration of each of these units is the same as in FIG. 1A according to the first embodiment, detailed description is omitted.
• In addition, the second endoscope 10B has an operation reflection control section 23B.
• The operation reflection control unit 23B mechanically and automatically operates the second endoscope 10B, for example like a robot endoscope requiring no doctor's operation, and controls the operation of the second endoscope 10B so as to reach the target site (affected area) Re. For this purpose, the operation reflection control section 23B has a bending control section 23aB and an insertion control section 23bB.
• The bending control section 23aB controls bending of the distal end portion of the second endoscope 10B.
• In general, the tip of an endoscope is redirected by pulling or extending one of a plurality of wires. The bending control unit 23aB therefore controls the actuators connected to each wire (see FIG. 10B) according to the operation information Iopf output together with the reexamination auxiliary image group P6, thereby controlling the bending direction of the distal end portion.
• The insertion control section 23bB has a drive source and a drive control circuit, and moves the second endoscope 10B in the depth direction inside the body.
• In general, an endoscope is inserted and withdrawn by the doctor pushing it into and pulling it out of the body. The insertion control unit 23bB therefore controls the lever or the like that moves the second endoscope 10B forward and backward (see FIG. 10A) according to the operation information Iopf output together with the reexamination auxiliary image group P6, thereby controlling insertion and withdrawal of the distal end portion. A sketch of such operation reflection control is given below.
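• How Iopf commands would be dispatched to the two control sections is implementation-dependent; the command names and actuator interfaces in this sketch are assumptions.

```python
# Hedged sketch of operation reflection control: operation information Iopf
# output with the reexamination auxiliary image group P6 is routed either to the
# bending control (23aB, wire actuators) or the insertion control (23bB, drive).
class OperationReflectionControl:
    def __init__(self, bend_actuator, insert_drive):
        self.bend_actuator = bend_actuator  # 23aB: pulls/extends bending wires
        self.insert_drive = insert_drive    # 23bB: advances/retracts the scope

    def apply(self, iopf_command):
        kind, value = iopf_command          # e.g. ("bend", "up") or ("advance", 5)
        if kind == "bend":
            self.bend_actuator(value)       # drive wire actuators (see FIG. 10B)
        elif kind in ("advance", "retract"):
            sign = 1 if kind == "advance" else -1
            self.insert_drive(sign * value) # drive forward/backward (see FIG. 10A)

ctrl = OperationReflectionControl(print, print)  # stand-in actuators
for cmd in [("bend", "up"), ("advance", 5)]:
    ctrl.apply(cmd)
```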
• This flow is realized by the control units 11A and 11B in the endoscope 10A and the second endoscope 10B controlling each part of the endoscopes based on the programs stored in the recording units 16A and 16B.
• This flow also includes both the first examination and the second and subsequent examinations for the same subject (patient).
• Steps S1 to S17 and S23 to S25 execute the same processes as steps S1 to S17 and S23 to S25 of the flowchart shown in FIG. 4 according to the first embodiment.
• In step S11 of FIG. 4, images are recorded when the examination is started.
• In this embodiment, the operation information Iop1 to Iop3 at this time is recorded together with the images.
• The operation information Iop1 and the like are transmitted to the auxiliary device 30 and recorded in the recording unit 35.
• In step S11, when the examination starts, the imaging units 12A and 12B output the continuous images shown as image P5 (P51a to P54a) in FIG. 7A.
• FIG. 11 shows the continuous images at this time, which serve as handover information that can track changes in the operation state before and after; the image changes in FIG. 11 will be described later.
• In step S15, guidance is provided using the handover information.
• In this embodiment, guidance is provided using continuous images that can track changes in the operating state before and after. Details will be described with reference to FIG. 11; the continuous images are traceable, and by adding operation information, a guide display is produced so that the target site (affected area) Re can be reached.
• The first observation method and the second observation method according to the present embodiment correspond to observation methods in which imaging is performed under different control of bending, insertion, or withdrawal of the endoscope.
• Teacher data is created using, as annotation data, images of the specific object photographed by a second observation method (for example, a specific insertion method or withdrawal direction) different from the first observation method, and learning is performed using this teacher data to generate an inference model. If this inference model can infer the image of the object as photographed by the second observation method when an image of the object photographed by the first observation method is input, it becomes possible to guide what kind of image should be obtained next.
• When the image, timing, and the like have been recorded at the time of an important event in step S17, it is next determined whether there is information to be handed over (S19).
• In step S7, it was determined whether there was information handed over from the past; here, it is determined whether there is information that will be useful in subsequent examinations. For example, when the next examination by a robot endoscope is scheduled, the control units 11A and 11B determine whether there is information to be handed over to the robot endoscope so that it can automatically move in the direction of the site (affected area) Re.
  • in step S19, if there is information to be handed over but the change in insertion speed, the image change record, or the like is inappropriate, redo guidance is performed (S21).
  • when the robot endoscope automatically moves to the site (affected part) Re, the change must be one that the mechanism in the robot endoscope can follow.
  • if the process in step S21 is executed, or if the result of determination in step S19 is that there is no information to hand over, the process proceeds to step S23.
  • steps S23 and S25 are the same as S23 and S25 in FIG. 4, so detailed description thereof will be omitted.
  • by using the handover information, it becomes possible to observe the site (affected part) Re again. Also, when performing surgery on a site (affected area) Re in the gastrointestinal tract, if a doctor finds the site (affected area) Re by conducting an endoscopy in advance, the tip of the robot endoscope can be automatically moved to the target site (affected part) Re during the actual surgery, and the doctor does not have to handle both endoscope operation and surgery.
  • an examination including a position change operation of the endoscope sensor unit up to the specific object is performed, and at least a part of the continuous images during this examination is captured.
  • the change in the image obtained when the endoscope is inserted at the first timing (for example, the progress along the tube and the change in direction) is recorded. Since the information on the operation when the tip of the endoscope is bent or further pushed is also recorded as data, as described with reference to FIG. 7A, this information is effectively utilized.
  • the above-mentioned guide information guides the endoscope sensor section, when it is inserted at the second timing, to a position where an image corresponding to the specific target can be acquired.
  • information that enables identification of a specific object is information for similarity determination of changes in images inferred using AI. That is, since the image changes according to the doctor's operation until a specific position is reached, the inference is made using this changing image. Further, the guide information is determined based on endoscope position change information, which is obtained by cross-referencing the endoscope image change obtained at the second timing with the endoscope operation information.
  • this handover information is used as guide information in the next and subsequent examinations (the second timing). That is, if handover information is recorded, it is acquired in steps S7 and S9, and guidance is performed using it in step S15. In this guidance, if the operation is guided so that the continuous image change at the second timing becomes similar to the continuous image change at the first timing, the target site (affected area) Re can be reached. Operation information may also be acquired and recorded at the time of acquiring the continuous images and used as part of the handover information.
  • a guide is presented with reference to the operation information at that time. For example, in the examination at the first timing, a certain image was followed by a certain other image and the site such as the affected area was thereby reached; an operation guide indicating that the same operation as at that time should be performed may therefore be displayed, or voice guidance may be provided. That is, when a certain image is obtained, guiding the doctor to perform the operations that produced the corresponding next image in the examination at the first timing makes it possible to easily detect the specific object; a minimal sketch of such guidance follows.
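A minimal sketch of that guidance step, assuming grayscale frames held as numpy arrays. The recorded sequence, the operation labels, and the similarity measure are all illustrative assumptions; the patent leaves open how similarity between image changes is computed.

```python
import numpy as np

def frame_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized frames."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

# Recorded at the first timing: each frame plus the operation performed next
recorded_frames = [np.random.rand(64, 64) for _ in range(4)]
recorded_ops = ["push forward", "bend up", "push forward", "stop: site Re"]

def guide(live_frame: np.ndarray) -> str:
    """Suggest the operation recorded for the most similar first-timing frame."""
    scores = [frame_similarity(live_frame, f) for f in recorded_frames]
    return recorded_ops[int(np.argmax(scores))]

print(guide(np.random.rand(64, 64)))  # e.g. shown on the display or spoken
```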
  • this operation is realized by the control unit 11B in the second endoscope 10B, which functions as a robot endoscope, controlling each unit in the second endoscope 10B based on a program stored in the recording unit 16B or the like. This flow shows the operation when handover information was recorded in the first endoscopic examination for the same subject (patient) and the second and subsequent endoscopic examinations are performed.
  • steps S1 to S7, S9, and S11 execute the same processes as steps S1 to S7, S9, and S11 in the flow described above, so mainly the differences will be explained.
  • if the result of determination in step S7 is that there is handover information, the handover information is acquired and guidance is provided by referring to it (S9). However, even if there is no handover information, the device may still be used as an ordinary endoscope, so in that case operation follows manual operation or a general-purpose program (S10). Since there is no handover information in this case, the doctor either operates the endoscope manually without the robot traveling by itself, or a general-purpose program that can operate without handover information is followed.
  • when the processing in step S9 or S10 is executed, the examination is started, and the image, timing, and operation information are recorded and acquired (S11), as in the flow described above.
  • from step S31, the handover image is used. The handover image is an image to be referred to in order to reach the site (affected part) Re, similar to the reexamination auxiliary image P6 (see FIG. 7B) described above; by moving the tip of the endoscope 10B in accordance with it, the site (affected area) Re can be reached.
  • the similar image determination unit 22B determines whether or not the image is similar to the handover image.
  • if the result of determination in step S31 is that the image is not similar to the handover image, rotation, trimming, and other image processing are performed (S33).
  • the image captured by the imaging unit is not always oriented in a fixed direction, so the image as handover information may not be in the same orientation as the image currently acquired by the imaging unit 12B.
  • the angle of view of the image as handover information may not be the same as the angle of view of the image currently acquired by the imaging unit 12B.
  • the image quality, such as the brightness of the image, may also be different. Therefore, rotation, trimming, and other image processing are performed on the handover image or the currently acquired image so that both become images under the same conditions; a minimal normalization sketch follows.
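A minimal sketch of that normalization, assuming 8-bit grayscale numpy frames and OpenCV. The rotation angle, crop box, and target brightness are hypothetical parameters; the patent only states that rotation, trimming, and brightness correction bring both images to the same conditions.

```python
import cv2
import numpy as np

def normalize(img: np.ndarray, angle_deg: float,
              crop: tuple, target_mean: float) -> np.ndarray:
    h, w = img.shape[:2]
    # rotate about the image center
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    img = cv2.warpAffine(img, m, (w, h))
    # trim to the shared field of view
    x, y, cw, ch = crop
    img = img[y:y + ch, x:x + cw]
    # brightness (exposure) correction toward a common mean
    scaled = img.astype(np.float32) * (target_mean / (img.mean() + 1e-8))
    return np.clip(scaled, 0, 255).astype(np.uint8)

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
aligned = normalize(frame, angle_deg=12.0, crop=(80, 60, 480, 360),
                    target_mean=128.0)
```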
  • if the process in step S33 is performed, or if the result of determination in step S31 is that the image is similar to the handover image, the tip is then inserted while being controlled so that the image change is the same as that of the previous insertion (S35). If the endoscope is moved in the same manner as when it was inserted into the body last time, the site (affected part) Re is reached, so the operations are controlled accordingly.
  • in step S37, the similar image determination unit 22B refers to the image of the site (affected area) Re and determines whether or not the current image is similar to it; if they are similar, it is determined that the affected area has been reached. In addition to the similarity determination by the similar image determination unit 22B, whether or not the affected-area image has been reached may be determined by inference by the inference engine 19B. If it is determined that the affected area has not been reached, the process returns to step S31.
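A minimal sketch of the arrival check, assuming grayscale numpy frames scaled to [0, 1] and using SSIM from scikit-image as one plausible similarity measure; the patent leaves the metric open, and the 0.7 threshold is an assumption.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def reached_site(live: np.ndarray, site_image: np.ndarray,
                 thresh: float = 0.7) -> bool:
    """True if the live frame is similar enough to the recorded site image."""
    score = ssim(live, site_image, data_range=1.0)  # frames scaled to [0, 1]
    return score >= thresh

site = np.random.rand(64, 64)
print(reached_site(np.random.rand(64, 64), site))
```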
  • if the result of determination in step S37 is that the affected area has been reached, light irradiation, range switching, etc. are performed (S39).
  • the signal output unit 21B causes the light source unit 13B arranged at the distal end of the endoscope to emit light so that the position can be visually recognized even from outside the gastrointestinal tract. If the position of the site (affected part) Re is known, the doctor can operate at that position.
  • the robot endoscope can automatically move to the site (affected part) Re using the handover information (see S31 to S37). Therefore, when a doctor performs an operation, there is no need to manually guide the endoscope to the position of the site (affected area) Re.
  • when the site (affected site) Re is reached, it can be visually recognized from the outside of the digestive organ by light irradiation or the like, so treatment can be performed on that position. Further, in the case of reexamination, the target site (affected area) Re can be easily reached without advanced operation techniques, and the doctor can concentrate on follow-up observation.
  • FIG. 10(a) shows how the distal end 10Aa of the endoscope 10A is inserted into the large intestine Co from the anus and moved inward.
  • This endoscope 10A is composed of a distal end portion 10Aa, a bending portion 10Ab, a flexible portion 10Ac, and an operating portion 17A in order from the distal end side.
  • the distal end portion 10Aa is provided with an imaging portion 12A, which acquires an image and outputs it to the display portion 14A, the input portion 32 of the auxiliary device 30, and the like.
  • the bending portion 10Ab is connected by a plurality of wires to a plurality of levers 17Aa of the operating portion 17A.
  • the doctor can freely change the orientation of the distal end portion 10Aa by bending the bending portion 10Ab by operating the lever 17Aa or the like.
  • the bending portion 10Ab and the operating portion 17A are connected by the flexible portion 10Ac.
  • Images P71 to P73 are time-series images acquired when the endoscope 10A moves toward the back of the large intestine Co.
  • the black part near the center of each image indicates the cavity in the gastrointestinal tract, and the surrounding white parts are the muscles and folds of the gastrointestinal tract wall, which reflect the illumination light and appear white. The black portions outside the folds of the muscle and the gastrointestinal tract wall are the shadow portions of the folds.
  • the doctor sees the image P71 and operates the lever 17Aa of the operation section 17A so as to move the distal end portion 10Aa of the endoscope 10A in the direction of the black portion near the center. Similarly, as described with reference to FIG. 7A, while viewing images P72 and P73, the doctor operates the lever 17Aa of the operation unit 17A and the like to move the distal end portion 10Aa of the endoscope 10A toward the site (affected area) Re. At this time, the operation determination unit 18A determines the doctor's operation state, acquires the operation information Iop, and records it.
  • the doctor advances the distal end portion 10Aa of the endoscope 10A into the cavity of the gastrointestinal tract, and the operation information Iop and the images P71 to P73 at this time are acquired, transmitted to the input unit 32 of the auxiliary device 30, and recorded in the recording unit 35.
  • FIG. 10(b) shows how the second endoscope 10B functioning as a robot endoscope is inserted into the patient's large intestine and automatically moved to the site (affected site) Re.
  • the second endoscope 10B also includes, in order from the distal end side, a distal end portion 10Ba, a bending portion 10Bb, a flexible portion 10Bc, and an operating portion 17B.
  • the distal end portion 10Ba is provided with an imaging portion 12B, which acquires an image and outputs it to the display portion 14B, the input portion 32 of the auxiliary device 30, and the like.
  • the bending portion 10Bb is connected to a plurality of levers 17Ba of the operation portion 17B by a plurality of wires. Each of these wires is provided with an actuator in the operation section 17B, and the bend control section 23aB in the operation reflection control section 23B controls each actuator to bend the bending section 10Bb, so that the orientation of the distal end portion 10Ba can be freely changed.
  • the insertion control unit 23bB also moves the second endoscope 10B in the direction of arrow G, that is, in the direction of the back of the large intestine Co.
  • the control unit 11B or the operation reflection control unit 23B controls so that the distal end portion 10Ba of the second endoscope 10B can automatically move to the vicinity of the site (affected site) Re.
  • the images and operation information recorded while the doctor moved the endoscope to the vicinity of the site (affected area) Re are stored in the recording unit 35 of the auxiliary device 30. Therefore, the control section 11B or the operation reflection control section 23B drives the actuators in the operation reflection control section 23B based on this recorded information.
  • in this drive control of the actuators, when image similarity determination and inference using the images are performed, if the image P80 acquired by the imaging unit 12B is rotated, rotation processing is performed so that it matches the images P71 to P73 and the like. If the angles of view do not match, trimming is performed, and if the luminance (exposure) is not uniform, brightness correction or the like is performed. In this way, image correction processing is performed so that the conditions are the same as those of the images obtained when the doctor found the site (affected area) Re. Images P81 to P83 are the images after correction.
  • the control unit 11B or the operation reflection control unit 23B controls the driving of the actuator or the like so that the distal end portion 10Ba of the second endoscope 10B moves to the site.
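Putting the pieces together, the sketch below shows the control flow only: capture a frame, check arrival against the recorded site image, and otherwise reproduce the operation recorded for the most similar first-examination frame. The capture and actuator functions are stubs, and the threshold and iteration bound are assumptions.

```python
import numpy as np

def capture_frame() -> np.ndarray:            # stub for the imaging unit 12B
    return np.random.rand(64, 64)

def drive_actuators(op: str) -> None:         # stub for the actuator drivers
    print("actuator command:", op)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

# Recorded during the first examination (hypothetical contents)
recorded = [np.random.rand(64, 64) for _ in range(3)]
ops = ["push", "bend left", "push"]
site_image = np.random.rand(64, 64)

for _ in range(10):                           # bounded loop instead of "until reached"
    frame = capture_frame()                   # would first be corrected as in S33
    if similarity(frame, site_image) > 0.6:   # arrival check (threshold assumed)
        print("site (affected area) Re reached")
        break
    best = int(np.argmax([similarity(frame, r) for r in recorded]))
    drive_actuators(ops[best])                # reproduce the recorded operation
```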
  • when the distal end portion 10Ba reaches the site (affected area) Re, the control unit 11B or the like displays a message to that effect on the display unit 14B.
  • when the doctor in charge of the reexamination sees this display, the progress of the site (affected area) Re can be observed.
  • the control unit 11B or the operation reflection control unit 23B controls the driving of the actuators, etc.
  • the signal output portion 21B causes the light source portion 13B provided at the distal end portion 10Ba to emit light.
  • the doctor can recognize that the direction of the arrow H is the target position of the operation when the light source unit 13B emits light. Therefore, the gastrointestinal tract such as the large intestine Co can be excised in the direction of the arrow J.
  • when the second endoscope 10B, which functions as a robot endoscope, is used for reexamination, it travels by itself to the target site (affected site) Re, thereby reducing the burden on the doctor in charge.
  • follow-up observation can be performed even if the doctor is not skilled in endoscope operation.
  • the surgeon in charge does not need to be involved in the operation of the endoscope, so the surgeon can concentrate on the surgery and reduce the burden.
  • images P91 to P94 are time-series images obtained as handover images by the imaging unit 12A. Exposure is performed during the H level of the graph drawn as the imaging timing, and image data is read out from the image sensor during the L level. The images P91 to P94 are acquired at timings T1 to T4.
  • the black part near the center of each image indicates the cavity in the digestive tract, and the surrounding white parts A to D are muscles and folds of the digestive tract wall, which reflect the illumination light and appear white. The black portions outside the folds of the muscle and the digestive tract wall are the shadow portions of the folds.
  • the doctor sees the image P91 and operates the operation section 17A so as to move the tip of the endoscope 10A toward the black portion near the center in the same manner as described in FIG. 7A.
  • the control unit 11A determines, based on the images P91 and P92, whether or not it is possible to track how the operation was performed. If the change between the two images P91 and P92 is inappropriate because the insertion speed is too fast, or if an abruptly changing operation is performed, a guide such as one prompting the operation to be redone is displayed (see S21 in FIG. 8). In this case, the redo guide may be presented visually on the display unit 14A or audibly.
  • the handover image is acquired once every two cycles of the imaging timing, but this timing does not have to be a constant cycle, and may be determined according to changes in the image.
  • the timing may be any timing at which the positional relationship between the images P91 and P92 can be traced through the same portions (for example, the muscles and folds A to D of the gastrointestinal tract wall); one way of choosing such timing is sketched below.
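A minimal sketch of choosing such timing adaptively, assuming grayscale numpy frames. The drift metric and the threshold are assumptions; the idea is simply to record a new handover frame once the scene has changed enough to be informative but while the same portions can still be traced.

```python
import numpy as np

def drift(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute difference as a crude measure of scene change."""
    return float(np.abs(a - b).mean())

def select_handover_frames(frames, thresh=0.1):
    kept = [frames[0]]
    for f in frames[1:]:
        if drift(kept[-1], f) >= thresh:  # record before traceability is lost
            kept.append(f)
    return kept

frames = [np.random.rand(64, 64) for _ in range(20)]
print(len(select_handover_frames(frames)))
```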
  • the time-series images record, as time-dependent information, what insertions and withdrawals were performed, in what order, and with what operations. Therefore, if information on how the operation was performed is added to the images as necessary, it can be used as a reference when inserting or observing with an endoscope in which the same operations are performed.
  • the speed of change in the image obtained here will also serve as guide information.
  • if the image changes slowly, the doctor is operating carefully, and if the image changes quickly, the doctor is trying to operate quickly; it is meaningful to refer to these speeds. Not only the speed of inserting and removing the endoscope but also the observation time, observation angle, distance, and the like are helpful, and these items of information may also be included in the handover information; a minimal sketch of estimating the change speed follows.
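A minimal sketch of quantifying that speed, assuming timestamped grayscale frames; per-second frame drift is one plausible proxy for "careful" versus "quick" operation and could be stored with the other handover items.

```python
import numpy as np

def change_speed(frames, timestamps):
    """Mean absolute frame difference per second between consecutive frames."""
    speeds = []
    for a, b, t0, t1 in zip(frames, frames[1:], timestamps, timestamps[1:]):
        speeds.append(float(np.abs(b - a).mean()) / max(t1 - t0, 1e-6))
    return speeds

frames = [np.random.rand(64, 64) for _ in range(5)]
ts = [0.0, 0.5, 1.0, 1.6, 2.0]
print(change_speed(frames, ts))  # slow change suggests careful operation
```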
  • the endoscope system includes: a similar image inference unit (e.g., the inference engine 36) having a similar image inference model learned using the same site image or similar site images obtained by a plurality of different observation methods, based on an affected-area image of the affected area obtained by an examination of a specific subject using a first endoscope; and a re-insertion auxiliary information generation unit (for example, the auxiliary device 30, the inference engine 36, etc.) that generates insertion auxiliary information for guiding to the reconfirmation position, according to a result obtained by inputting an image at the time of reexamination of the specific subject using a second endoscope into the similar image inference unit.
  • the endoscope information acquisition method has an acquisition step of acquiring, during the endoscopy, information that serves as a guide for the next insertion of the endoscope. That is, during the first endoscopic examination, images from the time of insertion to the time of withdrawal are recorded, and the recorded images are used as guide information when the endoscope is next inserted (for example, image P1 in FIG. 1A and S11 and S17 in FIG. 4).
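As a sketch of what such a record might hold, the structure below groups the items the description names: continuous images, their timings, and the operation information. The field names and layout are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class HandoverRecord:
    patient_id: str                                 # managed via the ID management units
    frames: list = field(default_factory=list)      # continuous images (e.g. P91 to P94)
    timestamps: list = field(default_factory=list)  # acquisition timings (e.g. T1 to T4)
    operations: list = field(default_factory=list)  # operation information (e.g. Iop1 to Iop3)

    def add(self, frame, t, op):
        self.frames.append(frame)
        self.timestamps.append(t)
        self.operations.append(op)

record = HandoverRecord(patient_id="subject-001")
record.add(frame=None, t=0.0, op="insert")          # filled in during the examination
```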
  • the auxiliary device 30 has been described as provided in a server or an in-hospital system.
  • services may be provided by multiple systems.
  • although two endoscopes, i.e., the endoscopes 10A and 10B, are shown as the endoscopes connected to the auxiliary device 30, this is merely an example, and a greater number of endoscopes may be connected.
  • although colonoscopy has been mainly described as the endoscopy, the present invention is not limited to this and can be applied to other endoscopic examinations such as examination of the duodenum.
  • not only endoscopic examinations but also clinical examinations require various preparations and pretreatments, and the present embodiment can be applied to such cases as well.
  • the present invention can be applied not only to colonoscopes but also to other endoscopes.
  • such endoscopes include, for example, a laryngoscope, bronchoscope, cystoscope, biliary scope, angioscope, upper gastrointestinal endoscope, duodenoscope, small intestine endoscope, capsule endoscope, thoracoscope, laparoscope, arthroscope, spinal endoscope, epidural endoscope, and the like.
  • the present invention is not limited to this example; it can also be applied to capsule endoscopes. Once an affected area is found, it can also be applied to guidance such as changing the imaging frame rate when approaching that area during reexamination. Since a capsule endoscope is a device that captures images while moving in a specific direction and confirms the situation, the present invention can also be applied to and utilized in systems and equipment that perform confirmation while moving, such as robots and in-vehicle equipment.
  • an example of using an image acquired using an imaging element of an endoscope has been described, but the present invention is not limited to this example, and for example, an image using ultrasound may be used.
  • ultrasound images may be used for diagnosis of lesions that cannot be observed with optical images of an endoscope, such as the pancreas, pancreatic duct, gallbladder, bile duct, and liver.
  • Ultrasound may be used to observe the properties of the biological tissue or material that is the object of observation.
  • the ultrasonic observation device can acquire information about the characteristics of the observation target by performing predetermined signal processing on the ultrasonic echo received from the ultrasonic transducer that transmits and receives ultrasonic waves.
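One common form of such predetermined signal processing is envelope detection of the echo line followed by log compression; the sketch below shows this with an assumed sampling rate and a synthetic echo, since the patent does not specify the processing.

```python
import numpy as np
from scipy.signal import hilbert

fs = 50e6                                    # sampling rate (assumed)
t = np.arange(2000) / fs
# synthetic RF line: a 5 MHz echo under a Gaussian window
rf_line = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 2e-5) ** 2) / 2e-11)

envelope = np.abs(hilbert(rf_line))          # echo amplitude envelope
b_mode = 20 * np.log10(envelope + 1e-12)     # log compression (dB) for display
```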
  • logic-based determination and inference-based determination have been described; either may be used. In addition, in the process of judgment, a hybrid judgment may be made that partially utilizes the merits of each.
  • controllers 11A, 11B, and 31 have been described as devices configured from CPUs, memories, and the like.
  • part or all of each part may be configured as a hardware circuit; a hardware configuration such as a gate circuit generated based on a program language described in Verilog, VHDL, or the like may be used, or a hardware configuration using software such as a DSP (Digital Signal Processor) may be used. Of course, these may be combined as appropriate.
  • the control units 11A, 11B, and 31 are not limited to CPUs and may be any elements that function as controllers.
  • each unit may be a processor configured as an electronic circuit, or may be each circuit unit in a processor configured with an integrated circuit such as an FPGA (Field Programmable Gate Array).
  • a processor composed of one or more CPUs may read and execute a computer program recorded on a recording medium, thereby executing the function of each unit.
  • the auxiliary device 30 has been described as having the control unit 31, the input unit 32, the ID management unit 33, the communication unit 34, the recording unit 35, and the inference engine 36. However, they do not need to be provided in an integrated device, and the above-described units may be distributed as long as they are connected by a communication network such as the Internet.
  • the endoscopes 10A and 10B include control units 11A and 11B, imaging units 12A and 12B, light source units 13A and 13B, display units 14A and 14B, ID management units 15A and 15B, recording units 16A and 16B, operation units 17A and 17B, operation determination section 18A, inference engines 19A and 19B, signal output section 21B, similar image determination section 22B, and operation reflection control section 23B.
  • these need not be provided in an integrated device, and each part may be distributed.
  • control described mainly in the flowcharts can often be set by a program, and may be stored in a recording medium or recording unit.
  • as for the method of recording in the recording medium or the recording unit, the program may be recorded at the time of product shipment, may be distributed on a recording medium, or may be downloaded via the Internet.
  • the present invention is not limited to the above-described embodiment as it is, and can be embodied by modifying the constituent elements without departing from the spirit of the present invention at the implementation stage.
  • various inventions can be formed by appropriate combinations of the plurality of constituent elements disclosed in the above embodiments. For example, some of the constituent elements shown in the embodiments may be deleted. Furthermore, constituent elements across different embodiments may be combined as appropriate.

Abstract

The present invention relates to an endoscope insertion guide device, an endoscope insertion guide method, an endoscope information acquisition method, a guide server device, and an image inference model learning method that make it possible to easily re-detect a position observed by an endoscope or the like. The present invention comprises: a similar image inference unit (an inference engine 36) having a similar image inference model learned using the same site image or similar site images obtained by a plurality of different observation methods, based on site images of a site obtained by examining a specific patient using a first endoscope; and a re-insertion auxiliary information generation unit (an auxiliary device 30, the inference engine 36, and the like) that generates insertion auxiliary information for guiding to a reconfirmation position according to a result obtained by inputting an image taken at the time of reexamination of the specific patient using a second endoscope into the similar image inference unit, in order to observe the position of the lesion again at the time of reexamination.
PCT/JP2021/043003 2021-11-24 2021-11-24 Dispositif de guidage d'insertion d'endoscope, procédé de guidage d'insertion d'endoscope, procédé d'acquisition d'informations d'endoscope, dispositif de serveur de guidage et procédé d'apprentissage de modèle d'inférence d'image WO2023095208A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/043003 WO2023095208A1 (fr) 2021-11-24 2021-11-24 Dispositif de guidage d'insertion d'endoscope, procédé de guidage d'insertion d'endoscope, procédé d'acquisition d'informations d'endoscope, dispositif de serveur de guidage et procédé d'apprentissage de modèle d'inférence d'image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/043003 WO2023095208A1 (fr) 2021-11-24 2021-11-24 Dispositif de guidage d'insertion d'endoscope, procédé de guidage d'insertion d'endoscope, procédé d'acquisition d'informations d'endoscope, dispositif de serveur de guidage et procédé d'apprentissage de modèle d'inférence d'image

Publications (1)

Publication Number Publication Date
WO2023095208A1 true WO2023095208A1 (fr) 2023-06-01

Family

ID=86539072

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/043003 WO2023095208A1 (fr) 2021-11-24 2021-11-24 Dispositif de guidage d'insertion d'endoscope, procédé de guidage d'insertion d'endoscope, procédé d'acquisition d'informations d'endoscope, dispositif de serveur de guidage et procédé d'apprentissage de modèle d'inférence d'image

Country Status (1)

Country Link
WO (1) WO2023095208A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024042895A1 (fr) * 2022-08-24 2024-02-29 富士フイルム株式会社 Dispositif de traitement d'images, endoscope, procédé de traitement d'images, et programme

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012170774A (ja) * 2011-02-24 2012-09-10 Fujifilm Corp 内視鏡システム
WO2018159347A1 (fr) * 2017-02-28 2018-09-07 富士フイルム株式会社 Dispositif de processeur, système d'endoscope, et procédé de fonctionnement d'un dispositif de processeur
WO2018180631A1 (fr) * 2017-03-30 2018-10-04 富士フイルム株式会社 Dispositif de traitement d'image médicale, système d'endoscope, et procédé d'exploitation d'un dispositif de traitement d'image médicale
WO2019106712A1 (fr) * 2017-11-28 2019-06-06 オリンパス株式会社 Dispositif de traitement d'images d'endoscope et procédé de traitement d'images d'endoscope
JP2019097661A (ja) * 2017-11-29 2019-06-24 水野 裕子 内視鏡ナビゲーション装置
WO2020017213A1 (fr) * 2018-07-20 2020-01-23 富士フイルム株式会社 Appareil de reconnaissance d'image d'endoscope, appareil d'apprentissage d'image d'endoscope, procédé d'apprentissage d'image d'endoscope et programme
WO2020039931A1 (fr) * 2018-08-20 2020-02-27 富士フイルム株式会社 Système endoscopique et système de traitement d'image médicale
US20210177524A1 (en) * 2019-12-12 2021-06-17 Koninklijke Philips N.V. Guided anatomical visualization for endoscopic procedures
WO2021229684A1 (fr) * 2020-05-12 2021-11-18 オリンパス株式会社 Système de traitement d'image, système d'endoscope, procédé de traitement d'image et procédé d'apprentissage

Similar Documents

Publication Publication Date Title
JP6641172B2 (ja) 内視鏡業務支援システム
US20180263568A1 (en) Systems and Methods for Clinical Image Classification
JP5368668B2 (ja) 医用画像表示装置、医用画像表示システム及び医用画像表示システムの作動方法
JP7127785B2 (ja) 情報処理システム、内視鏡システム、学習済みモデル、情報記憶媒体及び情報処理方法
Chadebecq et al. Computer vision in the surgical operating room
CN114332019B (zh) 内窥镜图像检测辅助系统、方法、介质和电子设备
JP2010517632A (ja) 内視鏡の継続的案内のためのシステム
JP2017534322A (ja) 膀胱の診断的マッピング方法及びシステム
WO2020242949A1 (fr) Systèmes et procédés de positionnement et de navigation par vidéo dans des interventions gastroentérologiques
JP2009022446A (ja) 医療における統合表示のためのシステム及び方法
CN111588464A (zh) 一种手术导航方法及系统
JP2012024518A (ja) 内視鏡観察を支援する装置および方法、並びに、プログラム
WO2021075418A1 (fr) Procédé de traitement d'image, procédé de génération de données d'apprentissage, procédé de génération de modèle entraîné, procédé de prédiction d'apparition de maladie, dispositif de traitement d'image, programme de traitement d'image et support d'enregistrement sur lequel un programme est enregistré
CN114913173B (zh) 内镜辅助检查系统、方法、装置及存储介质
JP5451718B2 (ja) 医用画像表示装置、医用画像表示システム及び医用画像表示システムの作動方法
US20220409030A1 (en) Processing device, endoscope system, and method for processing captured image
KR20220130855A (ko) 인공 지능 기반 대장 내시경 영상 진단 보조 시스템 및 방법
WO2023095208A1 (fr) Dispositif de guidage d'insertion d'endoscope, procédé de guidage d'insertion d'endoscope, procédé d'acquisition d'informations d'endoscope, dispositif de serveur de guidage et procédé d'apprentissage de modèle d'inférence d'image
JP6644530B2 (ja) 内視鏡業務支援システム
JP2023509075A (ja) 医療支援操作方法、デバイス及びコンピュータプログラム製品
US20220361739A1 (en) Image processing apparatus, image processing method, and endoscope apparatus
JP2018047067A (ja) 画像処理プログラム、画像処理方法および画像処理装置
JP7314394B2 (ja) 内視鏡検査支援装置、内視鏡検査支援方法、及び内視鏡検査支援プログラム
WO2023218523A1 (fr) Second système endoscopique, premier système endoscopique et procédé d'inspection endoscopique
US20220202284A1 (en) Endoscope processor, training device, information processing method, training method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21965582

Country of ref document: EP

Kind code of ref document: A1