WO2021054419A1 - System and method for processing endoscopic images - Google Patents

System and method for processing endoscopic images

Info

Publication number
WO2021054419A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
endoscope
model
acquired
endoscopic image
Prior art date
Application number
PCT/JP2020/035371
Other languages
English (en)
Japanese (ja)
Inventor
智大 下田
賢 植木
政至 藤井
敦朗 古賀
上原 一剛
Original Assignee
株式会社Micotoテクノロジー
国立大学法人鳥取大学
Priority date
Filing date
Publication date
Application filed by 株式会社Micotoテクノロジー and 国立大学法人鳥取大学
Publication of WO2021054419A1

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/045 - Control thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B23/00 - Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B23/28 - Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
    • G09B23/30 - Anatomical models

Definitions

  • The present invention relates to a technique for processing an image captured by an endoscope (referred to as an endoscopic image).
  • Endoscopy is widely used in the medical field, and many types of endoscopes are provided depending on the organ and purpose of use, such as upper gastrointestinal endoscopes, colonoscopes, bronchoscopes, thoracoscopes, vascular endoscopes, and capsule endoscopes. There are also endoscopes that not only allow viewing of images inside organs but also enable tissue collection, polyp resection, and the like.
  • For example, in Patent Document 1, an endoscopic diagnostic imaging support system has been proposed in which a group of learning images classified by pathological type is stored, and each pathological type is identified by matching feature amounts between the image of the identification target region of an acquired endoscopic image and the learning image group.
  • With such a system, the accuracy of finding lesions in endoscopic images can be improved. On the other hand, in order to capture appropriate endoscopic images, it is necessary to operate the endoscope properly. When a trainee tries to learn a technique for operating an endoscope properly, he or she must receive training under a supervising doctor, which causes problems such as an increase in the burden on the supervising doctor. There is also the problem that a doctor cannot judge by himself or herself whether his or her own endoscope operation is good or bad, or how proficient he or she is in endoscopic technique.
  • The present invention has been made in view of such circumstances, and provides a technique capable of supporting endoscope operation using endoscopic images.
  • One aspect of the present invention is an endoscopic image processing system that adopts the following configuration. That is, the endoscopic image processing system comprises: an image acquisition means for acquiring an endoscopic image captured by an endoscope in a luminal organ; a first model processing means for acquiring position information and orientation information indicating the position and direction of the endoscope that captured the endoscopic image, by giving the acquired endoscopic image to a first trained model; and a storage means for storing the acquired position information and orientation information in association with the endoscopic image. The first trained model is machine-learned based on a plurality of teacher data in which the correct position and orientation of the endoscope that captured a teacher endoscopic image are associated with that teacher endoscopic image.
  • The endoscopic image processing system may be one device or a plurality of devices. Further, the first trained model may be provided in the endoscopic image processing system or may be provided outside it.
  • The endoscopes that capture the endoscopic images processed by the endoscopic image processing system may be upper gastrointestinal endoscopes, colonoscopes, bronchoscopes, thoracoscopes, vascular endoscopes, capsule endoscopes, and the like, and are not limited.
  • The luminal organ shown in the endoscopic image may be an organ model that imitates an organ in a human body model, or may be an actual luminal organ of a living body.
  • Another aspect of the present invention is an endoscopic image processing method realized by a processor executing a control program stored in a memory. The endoscopic image processing method acquires an endoscopic image captured by an endoscope in a luminal organ, gives the acquired endoscopic image to a first trained model to acquire position information and orientation information indicating the position and direction of the endoscope that captured the endoscopic image, and stores the acquired position information and orientation information in the memory in association with the acquired endoscopic image. The first trained model is stored in the above memory or another memory, and is machine-learned based on a plurality of teacher data in which the correct position and orientation of the endoscope that captured a teacher endoscopic image are associated with that teacher endoscopic image.
  • FIG. 5 is a schematic cross-sectional view of a transverse slice of the stomach for explaining the region direction data used in the positioning AI model. FIG. 6 is a diagram for explaining the first region designation data used in the first guide AI model. FIG. 7 is a diagram for explaining the second region designation data inferred by the second guide AI model.
  • FIG. 1 is a diagram showing the appearance of a part of the endoscopic procedure trainer system according to the present embodiment.
  • FIG. 2 is a diagram conceptually showing a control configuration of the endoscopic procedure trainer system according to the present embodiment.
  • The endoscopic procedure trainer system (hereinafter referred to as this system) 1 according to the present embodiment is mainly composed of a human body model 3, an input/output panel 5, a control unit 10, and the like, and enables individual learning and individual training of endoscopic procedures. Specifically, in this system 1, a person to be trained (hereinafter referred to as a trainee) can teach himself or herself the endoscopic technique by referring to information output to the input/output panel 5 and the like while actually practicing the technique using the human body model 3.
  • Further, since this system 1 can instruct trainees such as residents in endoscopic procedures, the burden on the instructing doctor can be reduced.
  • The medical procedures for which this system 1 enables self-learning and self-training may include colonoscopy, enteroscopy, biliary and pancreatic endoscopy, endoscopic treatment techniques for these, and other intubation procedures.
  • The human body model 3 is a humanoid model operated by the trainee, and has a shape imitating the outer shape and internal organs of the human body.
  • In the present embodiment, the human body model 3 simulates the outer shape of the entire human body and internally simulates, as internal organs, the shapes of the oral cavity, nasal cavity, pharynx, larynx, trachea, esophagus, bronchus, stomach, and duodenum.
  • The human body model 3 is placed on a pedestal portion in a posture corresponding to the procedure to be trained.
  • For example, for some procedures the human body model 3 is placed on the pedestal in a lying position, and during training for endoscopic procedures it is placed on the pedestal in a sideways position, as shown in FIG. 1.
  • The outer surface of the human body model 3 is covered with a skin sheet that imitates the outer shape of the human body, and a wig is attached to the head.
  • The skin sheet is made of a flexible material such as silicone rubber.
  • The term "flexibility" as used herein means a property of being unlikely to break or be damaged even when bent, and may include one or both of stretchability and elasticity.
  • The internal structure inside the skin sheet of the human body model 3 is composed of an internal organ structure, a skeleton base portion, and the like.
  • The skeleton base portion is a group of components forming a skeleton that forms the basis of the shape of the human body model 3, and is formed of a material having strength and hardness that can withstand operation by the trainee, such as metal or synthetic resin.
  • For example, the skeleton base portion includes skeletal members corresponding to the skull, cervical spine, and the like.
  • The internal organ structure is a group of components having shapes imitating luminal organs, and is connected and fixed to the skeleton base portion at arbitrary positions and by an arbitrary method.
  • The internal organ structure includes an internal modeling portion (not shown).
  • The internal modeling portion includes models of the oral cavity, nasal cavity, pharynx, larynx, trachea, esophagus, stomach, and duodenum, and these models are formed of a material having flexibility close to that of a living body's luminal organs, such as silicone rubber.
  • The internal modeling portion is preferably integrally molded using a flexible material so as to eliminate joints as much as possible.
  • It is desirable that the skeleton base portion and the internal organ structure simulate the living body with high accuracy, but the present embodiment does not limit the specific shape, material, manufacturing method, and the like of the human body model 3.
  • Various known human body models can be used as the human body model 3.
  • For example, the human body model 3 may simulate the outer shape of the human body only for the upper body, and, as the internal modeling portion, other luminal organs may be simulated, such as the digestive tract including the large intestine, small intestine, and gallbladder, or the urinary system including the ureter, bladder, and urethra.
  • Object detection sensors for detecting the presence of the endoscope inserted into the internal modeling portion are provided at a plurality of predetermined portions of the internal modeling portion.
  • Each object detection sensor is provided at a position where it does not come into contact with the inserted endoscope (for example, outside the inner wall surface defining the lumen of the luminal organ model) so as not to give a sense of discomfort to the endoscope operator.
  • In the present embodiment, photoelectric sensors are used as the object detection sensors and are provided at the esophageal entrance, the gastroesophageal junction, and the descending part of the duodenum.
  • However, the object detection sensors may be omitted, may be provided at portions different from those of the present embodiment, and their number is not limited. Further, the object detection principle of the object detection sensors is not limited.
  • The control unit 10 is a component that controls this system 1, and may be a so-called computer such as a PC (Personal Computer), an embedded system, or a control board.
  • In the present embodiment, the control unit 10 is housed, together with the input/output panel 5, the speaker 6, and the like, in a device mounting table having a pedestal on which the human body model 3 is placed.
  • The control unit 10 has, as its hardware configuration, a processor 11, a memory 12, an input/output interface (I/F) unit 13, and the like.
  • The processor 11 may be one or more general-purpose CPUs or MPUs (Micro Processing Units), and may be replaced with, or used together with, an ASIC (Application Specific Integrated Circuit), a DSP (Digital Signal Processor), a GPU (Graphics Processing Unit), an FPGA (Field Programmable Gate Array), or the like.
  • The memory 12 includes a RAM (Random Access Memory) and a ROM (Read Only Memory), and may further include an auxiliary storage device (such as a hard disk).
  • A control program that realizes the various functions of this system 1 is stored in the memory 12.
  • The control program also includes AI (Artificial Intelligence) models.
  • An AI model can be described as a machine learning (ML) model or a trained model, and can also be described as a combination of a computer program and parameters, or as a combination of a plurality of functions and parameters.
  • The present embodiment does not limit the structure or the learning algorithm of the AI models, as long as they are AI models of supervised machine learning.
  • An AI model may be realized by a model called a neural network (NN) or a deep neural network (DNN), which has a structure in which a plurality of nodes are connected by edges in each of an input layer, an intermediate layer (hidden layer), and an output layer, and in which the value of each node is calculated by an activation function from the values of the other nodes connected to that node and the weights of the connecting edges. Further, in the present embodiment, since the AI models take endoscopic images as input, they may be realized by convolutional neural networks (CNNs).
  • When an AI model is constructed by neural networks, the term "AI model" in the present specification may refer to a single neural network, with the input layer, the intermediate layers, and the output layer regarded as one neural network unit, or may refer to a combination of a plurality of neural networks.
  • The input/output I/F unit 13 is a device that controls the input and output of signals to be processed by the processor 11 or that have been processed by the processor 11, and is connected to user interface devices such as the input/output panel 5 and the speaker 6, as well as to the sensor group 7, the endoscope 8, and the like.
  • The connection between the endoscope 8 or the sensor group 7 and the input/output I/F unit 13 is not limited to a wired connection and may be a wireless connection, as long as communication is possible.
  • The input/output I/F unit 13 may include a communication unit that communicates with another computer or device, and may be connected to a portable recording medium or the like.
  • The sensor group 7 is a plurality of various sensors provided inside or outside the human body model 3, and includes the above-mentioned object detection sensors.
  • The sensor group 7 may include sensors other than the object detection sensors, such as a pressure sensor provided at a predetermined portion of the internal modeling portion and an air pressure sensor for detecting the air pressure in the lumen of the internal modeling portion.
  • The input/output panel 5 is installed above the device mounting table, and includes a display device that displays a training menu, the operation mode of this system 1, implementation details, evaluation results, and the like, and an input device for operating the screens displayed on the display device.
  • In the present embodiment, the input/output panel 5 is realized as a touch panel in which the display device and the input device are integrated. The display contents of the input/output panel 5 will be described later.
  • In the present embodiment, the endoscope 8 connected to the input/output I/F unit 13 is an upper gastrointestinal endoscope.
  • The upper gastrointestinal endoscope is composed of an insertion portion including a tip portion and a bending portion, an operation unit for performing various operations on the tip portion and the bending portion, an image processing device, and the like.
  • The operation unit is provided with angle knobs (for left/right and up/down angles), a suction button, an air/water supply button, a forceps insertion port, and the like.
  • However, the endoscope 8 connected to the input/output I/F unit 13 may be an endoscope other than an upper gastrointestinal endoscope, such as a colonoscope, a bronchoscope, a thoracoscope, or a vascular endoscope.
  • The control unit 10 may include hardware elements not shown in FIG. 2, and the hardware configuration of the control unit 10 is not limited.
  • When the control program stored in the memory 12 is executed by the processor 11, the control unit 10 receives input signals from the sensor group 7 and the endoscope 8, controls display output to the input/output panel 5, acquires input information from the input/output panel 5, controls audio output from the speaker 6, and so on.
  • The control program may be stored in the memory 12 in advance at the time of shipment, or may be installed into the memory 12 from a portable recording medium such as a CD (Compact Disc) or a memory card, or from another computer on a network, via the input/output I/F unit 13.
  • FIG. 3 is a block diagram conceptually showing a software configuration realized by the control unit 10.
  • The control unit 10 realizes the software configuration shown in FIG. 3 when the control program is executed by the processor 11.
  • The control unit 10 has, as its software configuration, an image processing module 21, an AI processing module 22, a storage processing module 23, an output processing module 24, and the like.
  • The image processing module 21 can be described as an image acquisition means, the AI processing module 22 as a first, second, or third model processing means, the storage processing module 23 as a storage means, and the output processing module 24 as an output processing means.
  • Although the software components shown in FIG. 3 are shown separately and conceptually for convenience of explanation, the software configuration realized by the control unit 10 does not have to be clearly separated into the components shown in FIG. 3.
  • In order to enable individual learning and individual training of the endoscopic procedure by the trainee, this system 1 performs estimation of the position and orientation of the endoscope 8 and guidance of the endoscope 8 through the operation of the control unit 10 based on the software configuration described above.
  • Here, the "position and orientation of the endoscope" specifically means the position, and the imaging direction, of the imaging element provided at the tip of the endoscope 8 inserted in the luminal organ models of the human body model 3 (here, mainly the esophagus model, the stomach model, and the duodenum model).
  • The guidance of the endoscope 8 outputs information that guides where the tip of the endoscope 8 should go next or what it should do at any given time.
  • With these functions, the trainee can independently learn and train the endoscopic technique without an instructor.
  • Hereinafter, the processing related to "estimation of the position and orientation of the endoscope" and the processing related to "guidance of the endoscope" executed by the control unit 10 will be described in detail.
  • In the present embodiment, a positioning AI model 31, a first guide AI model 32, and a second guide AI model 33 are stored in the memory 12.
  • The positioning AI model 31, the first guide AI model 32, and the second guide AI model 33 can be described as first, second, or third trained models.
  • The positioning AI model 31, the first guide AI model 32, and the second guide AI model 33 are AI models that have been trained by a supervised machine learning algorithm.
  • Hereinafter, the positioning AI model 31 is referred to as the P-AI model 31, the first guide AI model 32 as the G1-AI model 32, and the second guide AI model 33 as the G2-AI model 33.
  • The P-AI model 31 is an image classification type AI model that takes as input an endoscopic image, which is an image captured by an endoscope, and infers region position data and region direction data corresponding to the endoscopic image.
  • The P-AI model 31 corresponds to the first trained model.
  • The "region position data" is data that can identify, as the position information of the endoscope 8, one or more organ regions specified based on the inference result of the P-AI model 31 from among a plurality of organ regions into which the luminal organ is virtually divided in the long-axis direction, and is data for specifying the position of the tip portion of the endoscope 8 in the luminal organ.
  • The "region direction data" is data that can identify, as the direction information of the endoscope 8, one or more directions specified based on the inference result of the P-AI model 31 from among a plurality of directions (six directions in the present embodiment) indicated by three-dimensional orthogonal axes virtually set for each organ region into which the luminal organ is virtually divided in the long-axis direction, and is data for specifying the direction of the tip portion of the endoscope 8 in the luminal organ.
  • FIG. 4 is a schematic view of the esophagus, stomach, and duodenum for explaining the region position data used in the positioning AI model 31, and FIG. 5 is a schematic cross-sectional view of a transverse slice of the stomach for explaining the region direction data used in the positioning AI model 31.
  • As shown in FIG. 4, the luminal organ consisting of the esophagus, stomach, and duodenum is virtually divided in the long-axis direction into the mouth-side esophagus part E0, the stomach-side esophagus part E1, the cardia part E2, the gastric body part E3, the angular incisure part E4, the antrum part E5, the duodenal bulb part E6, the descending part of the duodenum E7, and the inferior duodenal angle part E8.
  • The region position data is data capable of identifying, as the position information of the endoscope 8, one or more organ regions specified based on the inference result of the P-AI model 31 from among this plurality of organ regions, and is indicated, for example, by a numerical value or a character string from E0 to E8.
  • FIG. 5 shows the six directions indicated by the three-dimensional orthogonal axes virtually set in the gastric body part E3. Specifically, in the transverse cross-section of the gastric body part E3, the direction from the posterior wall to the anterior wall (hereinafter, the anterior wall direction) D1, the direction from the anterior wall to the posterior wall (hereinafter, the posterior wall direction) D2, the lesser curvature direction D3, the greater curvature direction D4, the insertion direction D5, and the pull-out direction D6 are set.
  • The region direction data is data that can identify, as the direction information of the endoscope 8, one or more directions specified based on the inference result of the P-AI model 31 from among the six directions indicated by the three-dimensional orthogonal axes for each organ region, and is indicated, for example, by a numerical value or a character string from D1 to D6.
  • The three-dimensional orthogonal axes are virtually set not only for the gastric body part E3 but also for each of the other organ regions, and since the luminal organ does not extend linearly, the way the axes are set may differ for each organ region. Further, the region direction data may include data indicating that no direction is specified.
  • The P-AI model 31 is machine-learned based on a plurality of teacher data in which the correct region position data and the correct region direction data are associated with teacher endoscopic images. Specifically, a plurality of teacher endoscopic images are prepared, the position and direction of the endoscope that captured each teacher endoscopic image are specified (for example, by a person) as the above-mentioned region position data and region direction data, and a plurality of teacher data are generated by tagging each teacher endoscopic image with the specified region position data and region direction data (correct answer data).
  • The P-AI model 31 is trained by a predetermined machine learning algorithm using the plurality of teacher data generated in this way. For example, the P-AI model 31 is trained by a known learning algorithm for a CNN suitable for image classification.
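  • As a hedged illustration of this kind of supervised training (the in-memory toy data, the two-headed network, the optimizer settings, and the use of PyTorch are all assumptions of the sketch, not part of the embodiment), teacher-data tagging and one training pass could look as follows in Python.

```python
import torch
import torch.nn as nn

# Hypothetical label vocabularies matching the organ regions and directions
REGIONS = ["E0", "E1", "E2", "E3", "E4", "E5", "E6", "E7", "E8"]
DIRECTIONS = ["D1", "D2", "D3", "D4", "D5", "D6"]

# One teacher-data record: a teacher endoscopic image tagged with the
# correct region position data and region direction data (correct answers).
teacher_data = [
    {"image": torch.rand(3, 224, 224), "region": "E3", "direction": "D3"},
    {"image": torch.rand(3, 224, 224), "region": "E5", "direction": "D5"},
]

class PositioningModel(nn.Module):
    """Toy stand-in for the P-AI model: one backbone, two classification heads."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten()
        )
        self.region_head = nn.Linear(8, len(REGIONS))
        self.direction_head = nn.Linear(8, len(DIRECTIONS))

    def forward(self, x):
        features = self.backbone(x)
        return self.region_head(features), self.direction_head(features)

model = PositioningModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for record in teacher_data:  # one (toy) supervised pass over the teacher data
    region_logits, direction_logits = model(record["image"].unsqueeze(0))
    loss = loss_fn(region_logits, torch.tensor([REGIONS.index(record["region"])])) \
         + loss_fn(direction_logits, torch.tensor([DIRECTIONS.index(record["direction"])]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```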
  • The teacher endoscopic images can be collected by having a skilled doctor who is proficient in the endoscopic procedure actually perform the procedure using the human body model 3.
  • Multiple patterns of teacher endoscopic image groups may be collected through multiple endoscopic procedures by one skilled doctor, or through endoscopic procedures by multiple skilled doctors.
  • By using the human body model 3 in this way, the teacher endoscopic images can be easily collected, and the correct answer data of the region position data and the region direction data can be easily identified.
  • When an endoscopic image, which is an image captured by the endoscope 8 inserted in the human body model 3, is input, the P-AI model 31 trained in this way outputs a probability value for each organ region of the luminal organ and a probability value for each of the six directions indicated by the three-dimensional orthogonal axes.
  • That is, for the input endoscopic image, probability values are calculated for each organ region (the mouth-side esophagus part E0, the stomach-side esophagus part E1, the cardia part E2, the gastric body part E3, the angular incisure part E4, the antrum part E5, the duodenal bulb part E6, the descending part of the duodenum E7, and the inferior duodenal angle part E8) and for each direction (the anterior wall direction D1, the posterior wall direction D2, the lesser curvature direction D3, the greater curvature direction D4, the insertion direction D5, and the pull-out direction D6).
  • The probability value for each organ region indicates the probability that the endoscope 8 is located in that organ region (position), and the probability value for each direction indicates the probability that the endoscope 8 is facing that direction.
  • The region position data is acquired as data that can identify one or more organ regions having a probability value equal to or higher than a predetermined threshold, or the organ region having the maximum probability value, based on the probability values for the organ regions that are the inference result of the P-AI model 31.
  • Similarly, the region direction data is acquired as data that can identify one or more directions having a probability value equal to or higher than a predetermined threshold, or the direction having the maximum probability value, based on the probability values for the directions that are the inference result of the P-AI model 31.
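  • Purely as an illustration of this selection step (the function name, label lists, threshold, and example probabilities below are hypothetical), the argmax and threshold variants could be written as follows.

```python
import numpy as np

REGIONS = ["E0", "E1", "E2", "E3", "E4", "E5", "E6", "E7", "E8"]
DIRECTIONS = ["D1", "D2", "D3", "D4", "D5", "D6"]

def select_labels(probs, labels, threshold=None):
    """Return labels above the threshold, or the single max-probability label."""
    probs = np.asarray(probs)
    if threshold is not None:
        return [lab for lab, p in zip(labels, probs) if p >= threshold]  # may be empty
    return [labels[int(probs.argmax())]]

# Example inference result of the P-AI model (illustrative numbers only)
region_probs = [0.01, 0.02, 0.05, 0.80, 0.05, 0.03, 0.02, 0.01, 0.01]
direction_probs = [0.10, 0.05, 0.60, 0.05, 0.15, 0.05]

region_position_data = select_labels(region_probs, REGIONS)              # ['E3']
region_direction_data = select_labels(direction_probs, DIRECTIONS, 0.5)  # ['D3']
```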
  • The G1-AI model 32 is an image classification type AI model that takes an endoscopic image as input and infers first area designation data corresponding to the endoscopic image.
  • The G1-AI model 32 corresponds to the second trained model.
  • The "first area designation data" is data that designates (points to) a certain image area in the endoscopic image input to the G1-AI model 32.
  • In the present embodiment, the image area designated by the first area designation data is an image area corresponding to the route to be taken by the endoscope 8 that is capturing the endoscopic image.
  • However, the image area designated by the first area designation data is not limited to such an example, and may be an image area corresponding to a place (point) where something should be done with the endoscope 8.
  • For example, an image area corresponding to a point to be observed (a site to be imaged and recorded) or an image area corresponding to a point (site) where treatment or tissue collection should be performed may be designated by the first area designation data.
  • Hereinafter, a site to be observed with the endoscope 8, that is, a site to be imaged and recorded, may be referred to as an observation point.
  • FIG. 6 is a diagram for explaining the first region designation data used in the first guide AI model 32.
  • The endoscopic image input to the G1-AI model 32 is normalized to a predetermined size and shape and, as shown in FIG. 6, is virtually divided into a plurality of unit image areas by predetermined grid lines. In the example of FIG. 6, it is divided into 25 unit image areas, 5 in the vertical direction by 5 in the horizontal direction.
  • In the first area designation data, any one unit image area, or any two unit image areas adjacent vertically, horizontally, or diagonally, is designated as an image area.
  • The first area designation data is indicated by, for example, "vertical, horizontal" coordinate values.
  • For example, the first area designation data indicating an image area consisting of the unit image area that is third from the top and fourth from the left is written as "3, 4", and the first area designation data indicating an image area consisting of the unit image areas that are second from the top and third and fourth from the left is written as "2, 3-2, 4".
  • In FIG. 6, each unit image area has a quadrangular shape delimited by the grid lines, but the shape and size of the unit image areas are not limited, and they may be circular or triangular.
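  • The following Python snippet is an illustrative enumeration (naming and layout are hypothetical) of the candidate image areas handled by such a model: the 25 single unit areas of a 5 x 5 grid plus every pair of horizontally, vertically, or diagonally adjacent unit areas, using the "vertical, horizontal" coordinate notation described above.

```python
ROWS, COLS = 5, 5

def area_label(cells):
    """Format one or two (row, col) unit areas as 'r, c' or 'r1, c1-r2, c2' (1-indexed)."""
    return "-".join(f"{r}, {c}" for r, c in cells)

candidate_areas = []

# Single unit image areas
for r in range(1, ROWS + 1):
    for c in range(1, COLS + 1):
        candidate_areas.append(area_label([(r, c)]))

# Pairs of adjacent unit areas: horizontal, vertical, and the two diagonals
offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]
for r in range(1, ROWS + 1):
    for c in range(1, COLS + 1):
        for dr, dc in offsets:
            r2, c2 = r + dr, c + dc
            if 1 <= r2 <= ROWS and 1 <= c2 <= COLS:
                candidate_areas.append(area_label([(r, c), (r2, c2)]))

print(len(candidate_areas))  # 97 = 25 single areas + 72 adjacent pairs
```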
  • The G1-AI model 32 is machine-learned using a plurality of teacher data in which the correct answer of the first area designation data is associated with each teacher endoscopic image. Specifically, a plurality of teacher endoscopic images are prepared, and for each teacher endoscopic image, the image area corresponding to the route to be taken by the endoscope that captured that image is specified (for example, by a person). Then, a plurality of teacher data are generated by tagging each teacher endoscopic image with the correct answer of the first area designation data that designates the specified image area.
  • The G1-AI model 32 is trained by a predetermined machine learning algorithm using the plurality of teacher data generated in this way. For example, the G1-AI model 32 is trained by a known learning algorithm for a CNN suitable for image classification, in the same way as the P-AI model 31. The teacher endoscopic images may be collected in the same manner as for the P-AI model 31.
  • When an endoscopic image is input, the G1-AI model 32 trained in this way outputs a probability value for each image area in the endoscopic image: each unit image area divided by the grid lines in FIG. 6, all combinations of two unit image areas adjacent left and right, all combinations of two unit image areas adjacent top and bottom, and all combinations of two diagonally adjacent unit image areas. The probability value of each image area indicates the certainty that that image area of the target endoscopic image is the route to be taken or a place where something should be done.
  • The first area designation data is acquired as data that designates one or more image areas having a probability value equal to or higher than a predetermined threshold, or the image area having the maximum probability value, based on the probability values for the image areas that are the inference result of the G1-AI model 32.
  • The G2-AI model 33 is an image detection type AI model that takes an endoscopic image as input and detects, in the endoscopic image, image areas tagged with predetermined job information.
  • The AI processing module 22, which will be described later, gives an endoscopic image to the G2-AI model 33, and can thereby acquire second area designation data designating the image areas detected in the endoscopic image and the job information corresponding to the second area designation data.
  • In the present embodiment, the G2-AI model 33 also corresponds to the second trained model.
  • The "second area designation data" is the same as the above-mentioned first area designation data in that it is data that designates (points to) a certain image area in the endoscopic image, but it is distinguished here from the first area designation data and written as the second area designation data.
  • The image area designated by the second area designation data is sometimes called a bounding box.
  • Hereinafter, the term "area designation data" means either one or both of the first area designation data and the second area designation data.
  • The job information of the second area designation data is tag information corresponding to the image area designated by the second area designation data, and is information indicating the route the endoscope 8 should take or a place (point) where something should be done with the endoscope 8.
  • For example, the job information may indicate an observation point, or a point (site) at which some treatment or tissue collection should be performed.
  • The G2-AI model 33 is machine-learned using a plurality of teacher data in which the correct answers of the second area designation data tagged with job information are associated with each teacher endoscopic image.
  • Specifically, a plurality of teacher endoscopic images are prepared, and the necessary tags are prepared. The tags here are the above-mentioned job information; for example, a tag indicating the route the endoscope 8 should take, a tag indicating an observation point, a tag indicating a point where tissue collection should be performed, and the like are prepared. Then, for each teacher endoscopic image, if there is an image area corresponding to a prepared tag (job information), that image area is specified for each tag (for example, by a person), whereby a plurality of teacher data are generated.
  • The G2-AI model 33 is trained by a predetermined machine learning algorithm using the plurality of teacher data generated in this way.
  • For example, the G2-AI model 33 is trained by a known learning algorithm for a CNN suitable for image detection.
  • The teacher endoscopic images may be collected in the same manner as for the P-AI model 31 and the G1-AI model 32.
  • FIG. 7 is a diagram for explaining the second region designation data inferred by the second guide AI model 33.
  • In the example of FIG. 7, five image areas B1, B2, B3, B4, and B5 are detected, and five pieces of second area designation data designating the respective image areas are represented.
  • The image area B1 is tagged with job information indicating an observation point of the curvature folds, the image area B2 with job information indicating an observation point of the angular incisure, the image area B4 with job information indicating an observation point of the pylorus, the image area B5 with job information indicating tissue collection at an ulcer, and the image area B3 with job information indicating the route to be taken.
  • When an endoscopic image is input, the G2-AI model 33 trained in this way outputs, for each image area tagged in advance with job information, a detection result indicating whether that image area is included in the endoscopic image, and second area designation data designating each detected image area.
  • The detection result for each image area may be an existence probability value for that image area, or may simply indicate the presence or absence of detection. In the former case, an image area having an existence probability value equal to or higher than a predetermined threshold may be treated as a detected image area.
  • In other words, an image area that is the same as or similar to any of a plurality of specific local images tagged in advance is detected from the input endoscopic image, and the second area designation data designating the detected image area and the tag (job information) of the corresponding specific local image are acquired.
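  • As a hedged illustration (the data layout, names, and the 0.5 threshold are hypothetical), detections of this kind could be turned into second area designation data with their job information as follows.

```python
# Hypothetical raw detections: bounding box (x, y, width, height), existence
# probability, and the job-information tag attached to the detected image area.
detections = [
    {"box": (120, 80, 60, 40), "score": 0.92, "job": "observation_point_pylorus"},
    {"box": (30, 200, 50, 50), "score": 0.41, "job": "route_to_take"},
    {"box": (200, 150, 70, 45), "score": 0.77, "job": "tissue_collection_ulcer"},
]

THRESHOLD = 0.5  # predetermined threshold on the existence probability

# Keep only image areas whose existence probability reaches the threshold;
# each kept entry yields second area designation data plus its job information.
second_area_designation_data = [
    {"box": d["box"], "job": d["job"]}
    for d in detections
    if d["score"] >= THRESHOLD
]

for area in second_area_designation_data:
    print(area["job"], area["box"])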
  • The control unit 10 receives, from the image processing device of the endoscope 8 connected via the input/output I/F unit 13, a video signal captured by the image sensor provided at the tip of the endoscope 8.
  • The image processing module 21 acquires image frames (endoscopic images) of the endoscopic video obtained from this video signal.
  • The image processing module 21 can also sequentially acquire endoscopic images by thinning out the image frames at predetermined intervals.
  • The image processing module 21 normalizes the acquired endoscopic image for input to the P-AI model 31. For example, the image processing module 21 can perform trimming and size adjustment on the acquired endoscopic image.
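  • A minimal sketch of such frame thinning and normalization, assuming OpenCV-style BGR frames and hypothetical values for the thinning interval and the model input size (the embodiment does not specify concrete numbers), is shown below.

```python
import numpy as np
import cv2

FRAME_INTERVAL = 10            # keep every 10th frame (hypothetical thinning rate)
MODEL_INPUT_SIZE = (224, 224)  # hypothetical input size of the AI models

def normalize_frame(frame: np.ndarray) -> np.ndarray:
    """Trim the frame to a centered square and resize it for model input."""
    h, w = frame.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    cropped = frame[top:top + side, left:left + side]
    return cv2.resize(cropped, MODEL_INPUT_SIZE)

def acquire_frames(video_source=0):
    """Yield normalized endoscopic images, thinning the video at a fixed interval."""
    capture = cv2.VideoCapture(video_source)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % FRAME_INTERVAL == 0:
            yield normalize_frame(frame)
        index += 1
    capture.release()
```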
  • The AI processing module 22 inputs the endoscopic image acquired and normalized by the image processing module 21 into the P-AI model 31, and thereby acquires the region position data and region direction data corresponding to the endoscopic image. Specifically, the P-AI model 31 calculates, for the input endoscopic image, a probability value for each organ region of the luminal organ and a probability value for each direction indicated by the three-dimensional orthogonal axes. For example, probability values are calculated for each of the organ regions from the mouth-side esophagus part E0 to the inferior duodenal angle part E8 described above, and for each of the anterior wall direction D1, the posterior wall direction D2, the lesser curvature direction D3, the greater curvature direction D4, the insertion direction D5, and the pull-out direction D6.
  • The AI processing module 22 acquires, as region position data, data capable of identifying the organ region having the maximum probability value based on the probability values for the organ regions that are the calculation result of the P-AI model 31, and further acquires, as region direction data, data capable of identifying the direction having the maximum probability value based on the probability values for the directions indicated by the three-dimensional orthogonal axes that are the calculation result of the P-AI model 31.
  • Alternatively, the AI processing module 22 can also acquire, as region position data, data capable of identifying one or more organ regions having a probability value equal to or higher than a predetermined threshold, and, as region direction data, data capable of identifying one or more directions having a probability value equal to or higher than a predetermined threshold.
  • In the present embodiment, the AI processing module 22 also acquires the probability values calculated for the region position data and the region direction data.
  • The storage processing module 23 stores, in the memory 12, the region position data and the region direction data acquired by the AI processing module 22 in association with the endoscopic image to which the data correspond.
  • The storage processing module 23 can store the region position data and region direction data in association with the endoscopic image in the memory 12 so that they can be reproduced later, or it may associate the two with each other, temporarily store them in the memory 12 for display together with the endoscopic image, and delete them immediately afterwards.
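  • Purely as an illustration (the record layout, file names, and JSON persistence are hypothetical and not part of the embodiment), associating each endoscopic image with its position and direction data for later reproduction could look like this.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class PositionRecord:
    """One stored association: endoscopic image frame + inferred position/direction."""
    timestamp: float
    image_path: str           # where the image frame was saved (hypothetical)
    region_position: str      # e.g. "E3"
    region_direction: str     # e.g. "D3"
    region_probability: float
    direction_probability: float

records = []

def store(image_path, region, direction, p_region, p_direction):
    records.append(PositionRecord(time.time(), image_path, region,
                                  direction, p_region, p_direction))

# Example: persist the session so it can be replayed later
store("frames/000123.png", "E3", "D3", 0.80, 0.60)
with open("session_log.json", "w") as f:
    json.dump([asdict(r) for r in records], f, indent=2)
```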
  • While displaying the endoscopic video on the display device of the input/output panel 5 based on the video signal received from the image processing device of the endoscope 8, the output processing module 24 displays, at the timing when a certain image frame (endoscopic image) is displayed, the position information and orientation information of the endoscope 8 indicated by the region position data and region direction data acquired for that endoscopic image on the display device of the input/output panel 5.
  • The position information and orientation information of the endoscope 8 may be displayed as text, or may be displayed by adding, to a schematic diagram of the luminal organ, a display from which the position and orientation of the endoscope 8 can be grasped.
  • (Endoscope guide) Using the G1-AI model 32 and the G2-AI model 33 trained as described above, the control unit 10 performs guidance of the endoscope 8.
  • Hereinafter, the processing related to the guidance of the endoscope 8 executed by the control unit 10 will be described in detail.
  • In the present embodiment, the processing related to the guidance of the endoscope 8 is executed in parallel with the processing related to the estimation of the position and orientation of the endoscope 8 described above.
  • The method by which the image processing module 21 acquires the endoscopic image is as described above.
  • The AI processing module 22 inputs the endoscopic image acquired and normalized by the image processing module 21 into both the G1-AI model 32 and the G2-AI model 33.
  • That is, the G1-AI model 32 and the G2-AI model 33 are executed in parallel.
  • The G1-AI model 32 calculates a probability value for each image area in the input endoscopic image. For example, the G1-AI model 32 outputs a probability value for each unit image area divided by the grid lines in FIG. 6 and for each image area formed by all combinations of two unit image areas adjacent left and right, top and bottom, or diagonally.
  • The G2-AI model 33 calculates, for the input endoscopic image, a detection result for each image area tagged in advance with job information, and second area designation data designating each detected image area.
  • In the present embodiment, the detection result for each image area indicates the presence or absence of detection of that image area, based on whether or not it has an existence probability value equal to or higher than a predetermined threshold.
  • Here, the inference accuracy of the G1-AI model 32 and that of the G2-AI model 33 may differ from each other depending on the input endoscopic image.
  • For example, the G2-AI model 33, which detects specific image areas in an endoscopic image, has high detection accuracy for endoscopic images that include the entire contour constituting an image feature, but its detection accuracy tends to deteriorate for endoscopic images that include only a partial contour.
  • Likewise, the classification accuracy of the G1-AI model 32 may deteriorate depending on the input endoscopic image. Therefore, in the present embodiment, both the G1-AI model 32 and the G2-AI model 33 are executed in parallel, and one of their outputs is used so as to maintain high inference accuracy.
  • For example, when an image area is detected by the G2-AI model 33, the output of the G2-AI model 33 is used, and when the maximum probability value output by the G1-AI model 32 is equal to or higher than a predetermined threshold, the output of the G1-AI model 32 is used.
  • In this way, the AI processing module 22 acquires area designation data from either the G1-AI model 32 or the G2-AI model 33. Specifically, when the output of the G1-AI model 32 is used, the AI processing module 22 acquires first area designation data designating the image area having the maximum probability value, based on the probability values for the image areas calculated by the G1-AI model 32.
  • Alternatively, the AI processing module 22 can also acquire first area designation data designating one or more image areas having a probability value equal to or higher than a predetermined threshold. In the present embodiment, the AI processing module 22 also acquires the probability value calculated for the first area designation data.
  • When using the output of the G2-AI model 33, the AI processing module 22 acquires the detection result for each image area tagged in advance with job information, and the second area designation data designating each detected image area, which are output by the G2-AI model 33 for the endoscopic image. Further, the AI processing module 22 also acquires the tag (job information) attached to the second area designation data designating each detected image area.
  • As described above, the job information with which each image area is tagged in the G2-AI model 33 may indicate information other than the route, such as a place (point) where something should be done with the endoscope 8.
  • For this reason, for the route the endoscope 8 should take, the output of either the G1-AI model 32 or the G2-AI model 33 may be used, while for the other kinds of job information the output of the G2-AI model 33 may be used fixedly.
  • In the present embodiment, both the G1-AI model 32 and the G2-AI model 33 are used, but the G1-AI model 32 and the G2-AI model 33 may instead be switched and used depending on the position in the luminal organ captured in the endoscopic image. In this case, the AI processing module 22 may switch between the G1-AI model 32 and the G2-AI model 33 according to the position of the endoscope 8 estimated using the inference result of the P-AI model 31.
  • For example, the G1-AI model 32 may be used when the position of the endoscope 8 is in the esophagus or the duodenum, and the G2-AI model 33 may be used when the position is elsewhere.
  • While displaying the endoscopic video on the display device of the input/output panel 5 as described above, the output processing module 24 causes the display device to display, at the timing when the target image frame (endoscopic image) is displayed, a display in which guide information for the endoscope 8 is added to the endoscopic image based on the area designation data acquired by the AI processing module 22.
  • The guide information displayed in the present embodiment includes information that guides the route or direction in which the endoscope 8 should go, and information that guides a place (point) where something should be done with the endoscope 8, such as observation, tissue collection, or treatment.
  • For example, the output processing module 24 superimposes, on the endoscopic image acquired from the image processing module 21, a direction display pointing toward the image area indicated by the area designation data acquired by the AI processing module 22.
  • This direction display may be displayed so as to clearly point to a specific image area, or may be a display pointing in a direction such as upward, downward, left, or right.
  • When the output processing module 24 acquires the second area designation data from the G2-AI model 33 and further acquires the job information corresponding to the second area designation data, a display corresponding to the job information and a display indicating the image area designated by the second area designation data can be added to the endoscopic image as guide information for the endoscope 8.
  • For example, a direction display (such as an arrow display) or a marker display may be added on the image area indicated by the second area designation data.
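  • As a minimal, purely illustrative overlay sketch using OpenCV drawing functions (the colors, coordinates, and label text are hypothetical choices of this sketch):

```python
import numpy as np
import cv2

def add_guide_overlay(image: np.ndarray, target_box, label: str) -> np.ndarray:
    """Superimpose an arrow toward a designated image area plus a marker and label."""
    overlaid = image.copy()
    x, y, w, h = target_box
    center = (x + w // 2, y + h // 2)
    start = (image.shape[1] // 2, image.shape[0] - 20)   # arrow from bottom center

    cv2.arrowedLine(overlaid, start, center, (0, 255, 0), 2)           # direction display
    cv2.rectangle(overlaid, (x, y), (x + w, y + h), (0, 255, 255), 2)  # marker display
    cv2.putText(overlaid, label, (x, max(y - 5, 10)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 255), 1)
    return overlaid

# Example with a synthetic frame and a hypothetical designated area
frame = np.zeros((480, 640, 3), dtype=np.uint8)
guided = add_guide_overlay(frame, (300, 120, 80, 60), "observation point")
```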
  • FIG. 8 is a diagram showing a display example of guide information of the endoscope.
  • In FIG. 8, an arrow display G1 pointing toward the image area corresponding to the route the endoscope 8 should take is superimposed on the endoscopic image.
  • Further, a marker G2 is attached to the image area corresponding to an observation point, and an annular broken line G3 is displayed around the marker G2 for easier understanding.
  • However, the display of the guide information in the present embodiment is not limited to the example of FIG. 8.
  • Further, when an image area such as an observation point is designated by the first area designation data or the second area designation data and that image area appears at a certain size in the endoscopic image, the output processing module 24 outputs a notification display notifying that the image area indicated by the acquired area designation data has reached a predetermined position or a predetermined size in the acquired endoscopic image.
  • Any display content and display form may be used for this notification display, as long as the viewer can grasp that the image area corresponding to the observation point has appeared in the endoscopic image or has reached a predetermined position or a predetermined size in the endoscopic image.
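  • One possible, purely hypothetical reading of "reached a predetermined position or a predetermined size" is sketched below; the centering tolerance and area ratio are placeholder values, not thresholds defined by the embodiment.

```python
def has_reached_target(box, frame_size, center_tolerance=0.15, min_area_ratio=0.10):
    """Hypothetical check: is the designated image area near the frame center and
    large enough (both thresholds are illustrative) to trigger the notification?"""
    x, y, w, h = box
    frame_w, frame_h = frame_size

    box_cx, box_cy = x + w / 2, y + h / 2
    centered = (abs(box_cx - frame_w / 2) <= center_tolerance * frame_w and
                abs(box_cy - frame_h / 2) <= center_tolerance * frame_h)

    large_enough = (w * h) / (frame_w * frame_h) >= min_area_ratio
    return centered and large_enough

# Example: a 240x200 designated area roughly centered in a 640x480 frame
print(has_reached_target((200, 140, 240, 200), (640, 480)))  # True
```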
  • FIG. 9 is a diagram showing an example of guide display of observation points.
  • When the image area B10 designated by the area designation data acquired by the AI processing module 22 corresponds to an observation point, the output processing module 24 performs guide display as follows. That is, the output processing module 24 superimposes the aiming displays F1, F2, and F3 on the endoscopic image and, at the timing when the image area B10 reaches a predetermined size within the frame of the aiming display F2 shown at a predetermined position, sets the endoscopic image at that time as a still image G4, and the still image G4 is rotated and reduced so as to frame out.
  • The frame-out effect in this display, and the aiming displays F1, F2, and F3, can be regarded as corresponding to the notification display notifying that the image area indicated by the area designation data has reached a predetermined position or a predetermined size in the endoscopic image.
  • The output processing module 24 may display the aiming displays F1, F2, and F3 when area designation data corresponding to an observation point is acquired by the AI processing module 22, or may display the aiming displays F1, F2, and F3 when the image area B10 designated by the area designation data has reached a predetermined position or a predetermined size.
  • However, the content and display form of the notification display are not limited to such an example.
  • The guide information described above may be displayed at all times whenever the area designation data is acquired by the AI processing module 22.
  • Alternatively, the output processing module 24 may output the display to which the guide information is added, triggered by detecting, based on the endoscopic images sequentially acquired by the image processing module 21, that the endoscope 8 has been stagnant in the luminal organ for a predetermined time. In this case, for example, by detecting that the content of the endoscopic image has remained almost unchanged for a predetermined time, it is possible to detect that the endoscope 8 has been stagnant in the luminal organ for a predetermined time.
  • The fact that the endoscope 8 has been stagnant for some time may mean that the trainee is at a loss in the endoscopic procedure. By outputting the display with the guide information added when it is detected that the endoscope 8 has been stagnant in the luminal organ for a predetermined time as described above, the guide information can be displayed only when the trainee is at a loss, so that the guide information can be prevented from getting in the way and causing discomfort to the trainee.
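  • A hedged sketch of such stagnation detection by frame differencing follows; the difference threshold and the 10-second stagnation period are illustrative assumptions, not values defined by the embodiment.

```python
import time
import numpy as np

class StagnationDetector:
    """Illustrative sketch: flag stagnation when successive frames barely change
    for a predetermined time (thresholds below are hypothetical)."""

    def __init__(self, diff_threshold=4.0, stagnation_seconds=10.0):
        self.diff_threshold = diff_threshold        # mean absolute pixel difference
        self.stagnation_seconds = stagnation_seconds
        self.prev_frame = None
        self.still_since = None

    def update(self, frame, now=None):
        now = time.time() if now is None else now
        frame = frame.astype(np.float32)
        if self.prev_frame is not None:
            change = float(np.abs(frame - self.prev_frame).mean())
            if change < self.diff_threshold:
                if self.still_since is None:
                    self.still_since = now
            else:
                self.still_since = None
        self.prev_frame = frame
        return (self.still_since is not None
                and now - self.still_since >= self.stagnation_seconds)

# Example: identical frames fed over 15 seconds -> stagnation detected
detector = StagnationDetector()
frame = np.zeros((224, 224, 3), dtype=np.uint8)
detector.update(frame, now=0.0)
detector.update(frame, now=1.0)
print(detector.update(frame, now=15.0))  # True
```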
  • The control unit 10 (storage processing module 23) can hold imaging record information, which is history information of the organ sites presumed to have been imaged and recorded with the endoscope 8.
  • The output processing module 24 can use information on the group of organ sites that should be imaged and recorded and, based on the history information of organ sites indicated by the imaging record information, identify the organ sites for which imaging and recording have not yet been performed out of that group.
  • The output processing module 24 can also display, on the display device of the input/output panel 5, information indicating that fact or the identified organ sites. In this way, omissions in imaging and recording can be pointed out to the trainee.
  • The organ sites presumed to have been imaged and recorded with the endoscope 8 can be identified, for example, as follows.
  • If the input/output I/F unit 13 can receive a signal indicating operation of the switch, provided in the operation unit of the endoscope 8, that instructs recording of a still image, the organ site can be identified from the region position data, the first area designation data, the second area designation data, and the job information acquired for the endoscopic image corresponding to the timing at which that signal is received.
  • Alternatively, when an image area tagged with job information indicating an observation point (an organ site to be imaged and recorded) has been detected from the output of the G2-AI model 33 and second area designation data designating that image area has been acquired, it can be presumed that the organ site corresponding to the observation point has been imaged and recorded, by detecting that the image area indicated by the second area designation data has reached a predetermined position or a predetermined size in the endoscopic image.
  • Furthermore, endoscopic images in which the specific organ site corresponding to each observation point is captured at a predetermined position and a predetermined size may be used as teacher endoscopic images, and an AI model may be machine-learned with teacher data in which each teacher endoscopic image is tagged with imaging record data indicating the imaging record of the specific organ site; by using such an AI model, it is also possible to identify organ sites presumed to have been imaged and recorded with the endoscope 8. In this case, when imaging record data is acquired by giving an endoscopic image to that AI model, imaging record information, which is history information of the organ sites presumed to have been imaged and recorded with the endoscope 8, can be retained based on the imaging record data.
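  • As a simple illustration (the organ-site names and the set of sites to record are hypothetical; the actual targets come from the training menu), pointing out omissions amounts to a set difference between the sites that should be recorded and the recorded history.

```python
# Hypothetical target set; the real sites to record are defined by the training menu.
sites_to_record = {"cardia", "gastric body", "angular incisure", "antrum", "pylorus"}

# History information of organ sites presumed to have been imaged and recorded
imaging_record_history = ["cardia", "gastric body", "gastric body", "antrum"]

def find_missing_sites(required, history):
    """Return the organ sites for which no imaging record exists yet."""
    return sorted(required - set(history))

missing = find_missing_sites(sites_to_record, imaging_record_history)
print("Not yet imaged and recorded:", missing)  # ['angular incisure', 'pylorus']
```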
  • FIG. 10 is a flowchart showing an operation example of the control unit 10. Since the detailed contents of each process shown in FIG. 10 have been described above, the operation flow of the control unit 10 will be mainly described here.
  • The control unit 10 receives the video signal of the endoscope from the image processing device of the endoscope 8 connected via the input/output I/F unit 13, and displays the endoscopic video obtained from this video signal on the display device of the input/output panel 5.
  • The control unit 10 thins out the image frames (endoscopic images) of the endoscopic video at a predetermined cycle and sequentially acquires them (S101), and executes the operation flow shown in FIG. 10 each time an endoscopic image is acquired. In (S101), the control unit 10 can also normalize the endoscopic image for input to the various AI models.
  • The control unit 10 inputs the endoscopic image acquired in (S101) to the P-AI model 31, the G1-AI model 32, and the G2-AI model 33, respectively ((S110), (S121), and (S122)).
  • However, different endoscopic images, obtained by thinning out the image frames of the endoscopic video at different cycles, may be input to each AI model.
  • In the present embodiment, the P-AI model 31, the G1-AI model 32, and the G2-AI model 33 are executed substantially in parallel.
  • The control unit 10 acquires the region position data and the region direction data based on the calculation result of the P-AI model 31 (S111). At this time, if the maximum probability value for the organ regions or the maximum probability value for the directions output by the P-AI model 31 is lower than a predetermined threshold, the control unit 10 may refrain from acquiring the region position data or the region direction data for the endoscopic image acquired in (S101). In addition to the endoscopic image, the control unit 10 displays the position information and orientation information of the endoscope 8 indicated by the region position data and region direction data acquired in (S111) on the display device of the input/output panel 5 (S112).
  • the control unit 10 operates as follows with reference to the outputs of the G1-AI model 32 and the G2-AI model 33.
  • The control unit 10 determines, based on the output of the G2-AI model 33, whether an image area tagged in advance with job information has been detected (S123), and further determines, based on the output of the G1-AI model 32, whether or not the first area designation data can be acquired ((S124) and (S131)).
  • Whether or not the first area designation data can be acquired can be determined from whether the maximum probability value over the image areas is equal to or greater than a predetermined threshold value: if it is, the first area designation data can be acquired; otherwise, it cannot.
  • When there is a detected image area and the first area designation data can be acquired (S123; YES) (S124; YES), the control unit 10 determines whether or not it is necessary to select between the output of the G1-AI model 32 and the output of the G2-AI model 33 (S125).
  • For example, if the job information (tag) corresponding to the image area detected by the G2-AI model 33 matches the semantic information of the image area classified by the G1-AI model 32 (for example, the route along which the endoscope 8 should be advanced, or an observation point), it may be determined that selection is necessary (S125; YES), and if they differ, it may be determined that selection is not necessary (S125; NO).
  • When selection is necessary, the control unit 10 selects either the output of the G1-AI model 32 or the output of the G2-AI model 33 (S126); as a result, either the first area designation data or the second area designation data is selected. For example, if the probability value of the image area specified by the first area designation data is equal to or greater than a predetermined threshold value, the first area designation data output by the G1-AI model 32 may be acquired; otherwise, the second area designation data output by the G2-AI model 33 and the job information corresponding to it may be acquired.
  • When selection is not necessary (S125; NO), the control unit 10 acquires both the first area designation data output by the G1-AI model 32 and the second area designation data output by the G2-AI model 33 together with the corresponding job information (S128). If there is a detected image area but the first area designation data cannot be acquired (S123; YES) (S124; NO), the control unit 10 acquires the second area designation data output by the G2-AI model 33 and the job information corresponding to it (S127). Further, when there is no detected image area but the first area designation data can be acquired (S123; NO) (S131; YES), the control unit 10 acquires the first area designation data output by the G1-AI model 32 (S132). If there is no detected image area and the first area designation data cannot be acquired (S123; NO) (S131; NO), the control unit 10 ends the process without displaying guide information.
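  • The branching from (S123) through (S132) can be summarised by a small decision routine such as the sketch below. The dictionary-shaped model outputs, the field names, and the two probability thresholds are assumptions made only for illustration.

```python
def select_guide_source(g2_detection, g1_output,
                        acquire_threshold: float = 0.5,
                        select_threshold: float = 0.7):
    """Mirror the S123-S132 branching.

    g2_detection: dict with 'area', 'job' and 'semantic' keys, or None when the
                  G2-AI model 33 detected no tagged image area (S123; NO).
    g1_output:    dict with 'area', 'prob' and 'semantic' keys from the G1-AI model 32.
    Returns a list of (area, job_or_None) pairs for the guide display; an empty
    list means no guide information is shown.
    """
    has_detection = g2_detection is not None
    has_first = g1_output is not None and g1_output["prob"] >= acquire_threshold  # S124/S131

    if has_detection and has_first:
        # S125: selection is needed only when both models refer to the same semantic area.
        if g2_detection.get("semantic") == g1_output.get("semantic"):
            # S126: prefer the G1 output when it is confident enough.
            if g1_output["prob"] >= select_threshold:
                return [(g1_output["area"], None)]
            return [(g2_detection["area"], g2_detection["job"])]
        # S128: keep both outputs.
        return [(g1_output["area"], None), (g2_detection["area"], g2_detection["job"])]
    if has_detection:
        return [(g2_detection["area"], g2_detection["job"])]   # S127
    if has_first:
        return [(g1_output["area"], None)]                     # S132
    return []                                                  # nothing to display


if __name__ == "__main__":
    g2 = {"area": (50, 40, 120, 90), "job": "biopsy", "semantic": "observation_point"}
    g1 = {"area": (55, 42, 118, 88), "prob": 0.82, "semantic": "observation_point"}
    print(select_guide_source(g2, g1))    # the G1 output wins the selection
    print(select_guide_source(None, g1))  # only the first area designation data
```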
  • The control unit 10 then displays, on the display device of the input/output panel 5, guide information based on the acquired area designation data in addition to the endoscopic image (S129). The display of the guide information is as described above.
  • the operation flow of the control unit 10 is not limited to the example shown in FIG.
  • a plurality of processes are described in order, but the execution order of each process is not limited to the order of description.
  • the method of using the output results of the G1-AI model 32 and the G2-AI model 33 is not limited to the example shown in FIG.
  • any one or more of the P-AI model 31, the G1-AI model 32 and the G2-AI model 33 may not be executed in parallel with the other AI models, but may be executed before and after.
  • the order of the illustrated steps can be changed within a range that does not hinder the contents.
  • the contents of the system 1 described above are merely examples, and can be partially changed as appropriate.
  • the endoscope 8 is connected to the input / output I / F unit 13, but the endoscope 8 may not be connected.
  • For example, the control unit 10 may process, as described above, endoscopic images obtained from moving image data that has been stored in the memory 12 via a portable recording medium or via communication.
  • In the example described above, the position information, the orientation information, and the guide information of the endoscope 8 are acquired using only the endoscopic image, but other information may also be used in addition to the endoscopic image.
  • For example, the control unit 10 may further use the detection information of object detection sensors provided at predetermined portions (for example, the esophageal entrance, the gastroesophageal junction, and the descending part of the duodenum) of the internal organ model of the human body model 3. In this way, the fact that the tip portion of the endoscope 8 has passed a predetermined portion can be grasped as accurate information.
  • In this case, the control unit 10 may include, as a software element, a detection information acquisition means for acquiring the presence detection information of the endoscope from each sensor provided at a plurality of predetermined parts of the luminal organ model.
  • The AI processing module 22 can also verify and correct, based on the detection information, the region position data obtained from the output of the P-AI model 31. For example, the AI processing module 22 may acquire the region position data that indicates a position consistent with the detection information and has the maximum probability value.
  • an AI model may be provided for each organ region between the sites where the object detection sensor is provided, and the control unit 10 can switch the AI model to be used by using the detection information of the object detection sensor.
  • For example, an AI model covering the section from the esophageal entrance to the gastroesophageal junction and an AI model covering the section from the gastroesophageal junction to the descending part of the duodenum may be provided. Such division into AI models for each organ region may be applied not only to the P-AI model 31 but also to the G1-AI model 32 and the G2-AI model 33.
  • the P-AI model 31 can be formed so as to input the endoscopic image and the detection information of the object detection sensor and output the probability value for each organ region and the probability value for each direction.
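  • One way to switch between such per-segment AI models using the object detection sensor information is sketched below; the segment boundaries, sensor names, and model identifiers are hypothetical placeholders.

```python
# Hypothetical sketch: choose which per-segment AI model to run, based on which
# object detection sensors the endoscope tip has already passed.

SEGMENT_MODELS = {
    "mouth_to_esophageal_entrance":    "model_segment_0",
    "esophageal_entrance_to_junction": "model_segment_1",
    "junction_to_duodenum":            "model_segment_2",
}

# Sensors listed in the order they are passed along the insertion route.
SENSOR_ORDER = ["esophageal_entrance", "gastroesophageal_junction", "duodenal_descending"]


def select_segment_model(passed_sensors):
    """Return the model identifier for the segment the tip is presumed to be in.

    passed_sensors: set of sensor names whose presence detection has fired.
    """
    passed_count = sum(1 for name in SENSOR_ORDER if name in passed_sensors)
    segment_keys = list(SEGMENT_MODELS)
    index = min(passed_count, len(segment_keys) - 1)  # stay on the last segment at the end
    return SEGMENT_MODELS[segment_keys[index]]


if __name__ == "__main__":
    print(select_segment_model(set()))                              # model_segment_0
    print(select_segment_model({"esophageal_entrance"}))            # model_segment_1
    print(select_segment_model({"esophageal_entrance",
                                "gastroesophageal_junction"}))      # model_segment_2
```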
  • The control unit 10 can also automatically detect an error in the inference results of the P-AI model 31, the G1-AI model 32, and the G2-AI model 33. For example, by comparing the outputs of an AI model for three endoscopic images that are adjacent in the time series being processed sequentially, the output for the intermediate endoscopic image can be determined to be erroneous when it differs significantly from the outputs for the endoscopic images before and after it. Since the time interval between chronologically adjacent endoscopic images is less than one second, such a large deviation indicates an error even when the AI model produced the output for the intermediate endoscopic image with a certain degree of confidence.
  • Specifically, if the region position data or region direction data obtained from the output of the P-AI model 31 for the intermediate endoscopic image differs significantly from the region position data or region direction data obtained for the preceding and following endoscopic images, the control unit 10 determines that the output for the intermediate endoscopic image is erroneous. Similarly, if the area designation data obtained from the output of the G1-AI model 32 or the G2-AI model 33 for the intermediate endoscopic image specifies an image area significantly different from the image areas specified by the area designation data obtained for the preceding and following endoscopic images, the control unit 10 determines that the output for the intermediate endoscopic image is erroneous. In this way, even when an AI model produces an output with a certain degree of confidence, the control unit 10 may discard that output if it is determined to be erroneous.
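  • A minimal sketch of such a consistency check over three chronologically adjacent outputs follows. It adds one assumption of its own, namely that the two neighbouring outputs must agree with each other; the distance function and the deviation threshold are also illustrative.

```python
def is_intermediate_output_error(prev_out, mid_out, next_out,
                                 distance, max_deviation: float) -> bool:
    """Flag the middle output as erroneous when it is far from both neighbours
    while the neighbouring outputs agree with each other.

    distance: callable mapping two outputs to a non-negative float.
    """
    neighbours_agree = distance(prev_out, next_out) <= max_deviation
    mid_far_from_both = (distance(prev_out, mid_out) > max_deviation and
                         distance(mid_out, next_out) > max_deviation)
    return neighbours_agree and mid_far_from_both


if __name__ == "__main__":
    # Example with region indices counted along the long axis of the luminal organ.
    region_distance = lambda a, b: abs(a - b)
    # Regions 3, 9, 3 for frames taken less than one second apart: the jump to 9
    # and back is implausible, so the middle output is treated as an error.
    print(is_intermediate_output_error(3, 9, 3, region_distance, max_deviation=1))  # True
    print(is_intermediate_output_error(3, 4, 4, region_distance, max_deviation=1))  # False
```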
  • In addition, the control unit 10 can automatically generate a substitute for an AI model output determined to be erroneous from the AI model outputs for the preceding and following endoscopic images.
  • For the region position data and region direction data acquired based on the output of the P-AI model 31 for the preceding and following endoscopic images, it can be assumed that the endoscope does not move at high speed, so the region position data and region direction data for the intermediate endoscopic image can be predicted from them. Likewise, for the area designation data acquired based on the output of the G1-AI model 32 or the G2-AI model 33, area designation data that specifies an image area at an intermediate position between the image areas specified by the area designation data acquired for the preceding and following endoscopic images may be generated as the data corresponding to the intermediate endoscopic image.
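  • The following sketch illustrates one way such a substitute output could be generated: the region index is taken as the midpoint of the neighbouring outputs, and the designated image area is interpolated coordinate-wise. The data shapes and field names are assumptions for illustration only.

```python
from typing import Dict, Tuple


def interpolate_region(prev_region: int, next_region: int) -> int:
    """Predict the intermediate region index, assuming the endoscope moves slowly."""
    return round((prev_region + next_region) / 2)


def interpolate_bbox(prev_bbox: Tuple[int, int, int, int],
                     next_bbox: Tuple[int, int, int, int]) -> Tuple[int, ...]:
    """Image area for the intermediate frame, midway between the neighbouring areas."""
    return tuple(round((a + b) / 2) for a, b in zip(prev_bbox, next_bbox))


def replace_erroneous_output(prev_out: Dict, next_out: Dict) -> Dict:
    """Build a substitute output for the intermediate endoscopic image."""
    return {
        "region": interpolate_region(prev_out["region"], next_out["region"]),
        "bbox": interpolate_bbox(prev_out["bbox"], next_out["bbox"]),
        "auto_generated": True,  # mark it so it can be reviewed before re-training
    }


if __name__ == "__main__":
    prev_out = {"region": 3, "bbox": (100, 80, 60, 40)}
    next_out = {"region": 4, "bbox": (120, 90, 60, 40)}
    print(replace_erroneous_output(prev_out, next_out))
```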
  • The control unit 10 may associate the intermediate endoscopic image whose AI model output was determined to be erroneous with the substitute output automatically generated, as described above, from the AI model outputs for the preceding and following endoscopic images, and retain the pair.
  • The endoscopic images retained in this way and the automatically generated AI model outputs can be used as teacher data when the AI models are re-trained.
  • Such re-learning of the AI model may be automatically executed at a time when the system 1 is not used by the trainee.
  • the endoscopic procedure trainer system (system 1) described above is an example of an embodiment of the present invention.
  • the present system 1 includes an organ model of the human body model 3, and enables self-learning and self-training of endoscopic techniques using the organ model.
  • However, the present invention is not limited to embodiments intended for the learning or training of endoscopic technique; it may also be applied to the endoscope 8 itself.
  • FIG. 11 is a diagram conceptually showing a control configuration of the endoscope system 80 (endoscope 8) according to another embodiment.
  • the endoscope system 80 includes an insertion portion including a tip portion and a curved portion, an operation unit for performing various operations on the tip portion and the curved portion, an image processing device 82, a display device 83, and the like.
  • An endoscope imaging unit 81 is provided at the tip portion, and the endoscope imaging unit 81 and the display device 83 are connected to the image processing device 82 by a cable or the like.
  • The image processing device 82 has a processor 85, a memory 86, an input/output interface (I/F) unit 87, and the like, and by having the processor 85 execute the control program and the AI models stored in the memory 86, the same processing as that of the control unit 10 described above may be realized.
  • the software configuration of the image processing device 82 may be the same as the software configuration of the control unit 10 shown in FIG.
  • the image processing device 82 may also be referred to as an endoscopic image processing device or an endoscopic image processing system.
  • The endoscopic image acquired by the image processing device 82 may be an image capturing the luminal organ model of the human body model 3, an image capturing a luminal organ of a living body, or both.
  • When the endoscopic image obtained by the endoscope imaging unit 81 captures a luminal organ of a living body, it is desirable that the P-AI model 31, the G1-AI model 32, and the G2-AI model 33 be machine-learned using teacher endoscopic images that capture luminal organs of living bodies.
  • The correct answers of the area position data and area direction data corresponding to each teacher endoscopic image, the correct answers of the area designation data corresponding to each teacher endoscopic image, and the endoscope job information associated with them can be generated by having a plurality of skilled doctors review the teacher endoscopic images. Further, the P-AI model 31, the G1-AI model 32, and the G2-AI model 33 may first be machine-learned, in the same manner as in the system 1 described above, using teacher endoscopic images that capture the luminal organ model of the human body model 3, and then further machine-learned using teacher endoscopic images that capture luminal organs of living bodies.
  • It is also possible to collect teacher endoscopic images using an endoscope system that can acquire the position and orientation of the endoscope tip in cooperation with a magnetic sensor, collect the position information and orientation information of the endoscope tip acquired by that system, and machine-learn the P-AI model 31 using a plurality of teacher data in which the images and this information are associated. In this case, the P-AI model 31 can be trained with position information and orientation information that are more finely subdivided than the "region position data" and "region direction data" described above, and such subdivided position and orientation information can then be acquired from the model.
  • the endoscope system 80 may be a capsule endoscopy system.
  • the endoscope imaging unit 81 is not provided at the tip of the endoscope, but may be provided at the capsule endoscope itself and connected to the image processing device 82 by wireless communication.
  • the endoscope system 80 does not need to include a display device 83.
  • the area position data and the area direction data acquired from the P-AI model 31 may be associated with the target endoscopic image and stored in the memory 86.
  • the area designation data acquired from the G1-AI model 32 and the G2-AI model 33 may also be associated with the target endoscopic image and stored in the memory 86.
  • the generated guide information may be used as information for automatically operating the insertion portion including the tip portion and the curved portion of the endoscope, or the capsule endoscope itself.
  • the correct answers of the position and orientation of the endoscope in the plurality of teacher data are, as the position information, the correct answer of area position data that can identify each of a plurality of regions into which the luminal organ is virtually divided in the long-axis direction, and the correct answer of area direction data that can identify each direction indicated by three-dimensional orthogonal axes virtually set in each region.
  • the first model processing means acquires region position data and region direction data corresponding to the acquired endoscopic image as the position information and the orientation information.
  • the endoscopic image processing system according to (Appendix 1).
  • the endoscopic image acquired by the image acquisition means is an image taken by an endoscope in a tract organ of a living body or in a tract organ model imitating a tract organ of a living body.
  • the plurality of teacher data includes a plurality of teacher endoscopic images captured by an endoscope in the luminal organ model.
  • the endoscopic image processing system according to (Appendix 1), (Appendix 2) or (Appendix A).
  • (Appendix 4) A detection information acquisition means for acquiring the presence detection information of an endoscope from each sensor provided at a plurality of predetermined parts of a tract organ model that imitates a tract organ of a living body.
  • the endoscopic image acquired by the image acquisition means is an image captured by the endoscope in the luminal organ model.
  • the teacher endoscopic image of the plurality of teacher data is an image taken by the endoscope in the tract organ model.
  • the first model processing means further uses the acquired existence detection information to acquire the position information and the orientation information.
  • the endoscopic image processing system according to (Appendix 1) or (Appendix 2). (Appendix 5)
  • a second model processing means for acquiring region designation data corresponding to the endoscopic image by giving the acquired endoscopic image to the second trained model.
  • An output processing means for generating endoscope guide information regarding an image region designated by the region designation data in the acquired endoscope image based on the acquired region designation data.
  • the second trained model is machine-learned using a plurality of teacher data in which the correct answers of the region designation data are associated with each teacher's endoscopic image.
  • the endoscopic image processing system according to any one of (Appendix 1) to (Appendix 4). (Appendix 6)
  • the second trained model is machine-learned using a plurality of teacher data in which the correct answers of second area designation data, which designates a specific image area tagged with endoscope job information, are associated with each teacher endoscopic image.
  • The second model processing means acquires, together with the area designation data, the job information corresponding to the area designation data.
  • the output processing means further uses the job information and, in a display form corresponding to the job information, adds a display indicating the image area designated by the acquired area designation data to the acquired endoscopic image as the guide information.
  • the endoscopic image processing system according to (Appendix 5) or (Appendix A).
  • the output processing means superimposes and displays the direction display toward the image area indicated by the acquired area designation data in the acquired endoscopic image on the endoscopic image.
  • the endoscopic image processing system according to (Appendix 5), (Appendix 6) or (Appendix A).
  • the output processing means notifies that the image area indicated by the acquired area designation data in the acquired endoscopic image has reached a predetermined position or a predetermined size in the endoscopic image.
  • the endoscopic image processing system according to any one of (Appendix 5) to (Appendix 7) or (Appendix A).
  • the storage means holds the imaging record information which is the history information of the organ part estimated to have been imaged and recorded by the endoscope based on the acquired area designation data.
  • the output processing means identifies an organ part for which imaging recording has not been made from the organ part group to be imaged and recorded based on the history information of the organ part indicated by the imaging recording information.
  • the endoscopic image processing system according to any one of (Appendix 5) to (Appendix 8) or (Appendix A).
  • the output processing means detects, based on the endoscopic images sequentially acquired by the image acquisition means, that the endoscope has been stagnant in the luminal organ for a predetermined time, and outputs the display of the guide information with that detection as a trigger. The endoscopic image processing system according to any one of (Appendix 5) to (Appendix 9) or (Appendix A).
  • a second model processing means for acquiring first region designation data corresponding to the endoscopic image by giving the acquired endoscopic image to a second trained model;
  • a third model processing means for acquiring, by giving the acquired endoscopic image to a third trained model, second region designation data corresponding to the endoscopic image and job information corresponding to the second region designation data; and an output processing means for selecting, based on the outputs of the second trained model and the third trained model, either the acquired first region designation data or the acquired second region designation data and job information, and generating endoscope guide information regarding the image region designated in the acquired endoscopic image by the selected first region designation data or second region designation data.
  • the second trained model is machine-learned using a plurality of teacher data in which the correct answers of the first region designation data are associated with each teacher's endoscopic image.
  • the third trained model is machine-learned using a plurality of teacher data in which the correct answers of second area designation data, which designates a specific image area tagged with endoscope job information, are associated with each teacher endoscopic image.
  • the endoscopic image processing system according to any one of (Appendix 1) to (Appendix 4).
  • (Appendix 21) An endoscopic image processing method realized by a processor executing a control program stored in a memory, the method including: acquiring an endoscopic image captured by an endoscope in a luminal organ; acquiring, by giving the acquired endoscopic image to a first trained model, position information and orientation information indicating the position and orientation of the endoscope that captured the endoscopic image; and storing the acquired position information and orientation information in the memory in association with the acquired endoscopic image, wherein the first trained model is stored in the memory or another memory and is machine-learned based on a plurality of teacher data in which the correct answer of the position and orientation of the endoscope that captured each teacher endoscopic image is associated with that teacher endoscopic image. Endoscopic image processing method.
  • the correct answers of the position and orientation of the endoscope in the plurality of teacher data are, as the position information, the correct answer of area position data that can identify each of a plurality of regions into which the luminal organ is virtually divided in the long-axis direction, and the correct answer of area direction data that can identify each direction indicated by three-dimensional orthogonal axes virtually set in each region.
  • the acquisition of the position information and the orientation information acquires the region position data and the region direction data corresponding to the acquired endoscopic image.
  • the acquired endoscopic image is an image taken by an endoscope in a tract organ of a living body or in a tract organ model imitating a tract organ of a living body.
  • the plurality of teacher data includes a plurality of teacher endoscopic images captured by an endoscope in the luminal organ model.
  • the endoscopic image processing method according to (Appendix 21), (Appendix 22) or (Appendix B).
  • (Appendix 24) The method further includes acquiring the presence detection information of an endoscope from each sensor provided at a plurality of predetermined parts of a tract organ model that imitates a tract organ of a living body, wherein the acquired endoscopic image is an image taken by the endoscope in the luminal organ model.
  • the teacher endoscopic image of the plurality of teacher data is an image taken by the endoscope in the tract organ model.
  • the acquisition of the position information and the orientation information further uses the acquired presence detection information to acquire the position information and the orientation information.
  • the endoscopic image processing method according to (Appendix 21) or (Appendix 22). (Appendix 25) By giving the acquired endoscopic image to a second trained model, area designation data corresponding to the endoscopic image is acquired, and based on the acquired area designation data, endoscope guide information regarding the image area designated by the area designation data in the acquired endoscopic image is generated.
  • the second trained model is stored in the memory or another memory, and is machine-learned using a plurality of teacher data in which the correct answer of the area designation data is associated with each teacher's endoscopic image.
  • the endoscopic image processing method according to any one of (Appendix 21) to (Appendix 24).
  • the second trained model is machine-learned using a plurality of teacher data in which the correct answers of second area designation data, which designates a specific image area tagged with endoscope job information, are associated with each teacher endoscopic image.
  • In addition to the area designation data, the job information corresponding to the area designation data is further acquired.
  • a display indicating an image area designated by the acquired area designation data is added to the acquired endoscopic image as the guide information in a display form corresponding to the job information.
  • the direction display toward the image area indicated by the acquired area designation data in the acquired endoscopic image is superimposed on the endoscopic image.
  • a notification display for notifying that the image region indicated by the acquired region designation data in the acquired endoscopic image has reached a predetermined position or a predetermined size in the endoscopic image is output.
  • the endoscopic image processing method according to any one of (Appendix 25) to (Appendix 27) or (Appendix B). (Appendix 29) Based on the acquired area designation data, the imaging record information, which is the history information of the organ parts estimated to have been imaged and recorded by the endoscope, is held in the memory. Based on the history information of the organ parts indicated by the imaging record information, the organ part that has not yet been imaged and recorded is specified from the group of organ parts to be imaged and recorded.
  • the endoscopic image processing method according to any one of (Appendix 25) to (Appendix 28) or (Appendix B).
  • the second trained model is stored in the memory or another memory, and is machine-learned using a plurality of teacher data in which the correct answers of the first region designation data are associated with each teacher endoscopic image.
  • The third trained model is stored in the memory or another memory, and is machine-learned using a plurality of teacher data in which the correct answers of second area designation data, which designates a specific image area tagged with endoscope job information, are associated with each teacher endoscopic image.
  • the endoscopic image processing method according to any one of (Appendix 21) to (Appendix 24).
  • (Appendix B) An endoscopic image processing method realized by a processor executing a control program stored in a memory, the method including: acquiring an endoscopic image captured by an endoscope in a luminal organ; acquiring, by giving the acquired endoscopic image to a trained model, area designation data corresponding to the endoscopic image; and outputting, based on the acquired area designation data, a display in which endoscope guide information regarding the image area designated by the area designation data in the acquired endoscopic image is added to the endoscopic image, wherein the trained model is stored in the memory or another memory and is machine-learned using a plurality of teacher data in which the correct answer of the area designation data is associated with each teacher endoscopic image. Endoscopic image processing method.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Optics & Photonics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Medicinal Chemistry (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Algebra (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Endoscopes (AREA)
  • Image Analysis (AREA)
  • Instructional Devices (AREA)

Abstract

The present invention relates to an endoscopic image processing system that gives an acquired endoscopic image to a trained model, acquires position information and orientation information indicating the position and orientation of the endoscope that captured the endoscopic image, and stores the acquired position information and orientation information in association with the endoscopic image, the trained model having been machine-learned on the basis of a plurality of teacher data in which the correct answers for the position and orientation of the endoscope that captured a teacher endoscopic image are associated with that teacher endoscopic image.
PCT/JP2020/035371 2019-09-20 2020-09-18 Système et procédé de traitement d'images endoscopiques WO2021054419A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-172325 2019-09-20
JP2019172325A JP6632020B1 (ja) 2019-09-20 2019-09-20 内視鏡画像処理システム

Publications (1)

Publication Number Publication Date
WO2021054419A1 true WO2021054419A1 (fr) 2021-03-25

Family

ID=69146623

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/035371 WO2021054419A1 (fr) 2019-09-20 2020-09-18 Système et procédé de traitement d'images endoscopiques

Country Status (2)

Country Link
JP (1) JP6632020B1 (fr)
WO (1) WO2021054419A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021177302A1 (fr) * 2020-03-04 2021-09-10 国立大学法人大阪大学 Dispositif de création d'informations de formation chirurgicale, programme de création d'informations de formation chirurgicale et dispositif de formation chirurgicale
WO2023002960A1 (fr) * 2021-07-19 2023-01-26 国立大学法人東海国立大学機構 Dispositif de traitement d'informations, procédé de traitement d'informations et programme informatique

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7323647B2 (ja) * 2020-01-20 2023-08-08 オリンパス株式会社 内視鏡検査支援装置、内視鏡検査支援装置の作動方法及びプログラム
JP2021141973A (ja) * 2020-03-10 2021-09-24 Hoya株式会社 内視鏡用プロセッサ、内視鏡、内視鏡システム、情報処理方法、プログラム及び学習モデルの生成方法
JPWO2022071328A1 (fr) * 2020-09-29 2022-04-07
KR20230147959A (ko) * 2022-04-15 2023-10-24 충남대학교산학협력단 내시경 이동 경로 가이드 장치, 시스템, 방법, 컴퓨터 판독 가능한 기록 매체, 및 컴퓨터 프로그램
JP2024031468A (ja) 2022-08-26 2024-03-07 富士フイルム株式会社 画像処理装置及びその作動方法並びに内視鏡システム


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0642644Y2 (ja) * 1988-10-15 1994-11-09 オリンパス光学工業株式会社 内視鏡湾曲装置
JP2002500941A (ja) * 1998-01-26 2002-01-15 シンバイオニクス リミテッド 内視鏡のチュートリアルシステム
JP2004089484A (ja) * 2002-08-30 2004-03-25 Olympus Corp 内視鏡装置
JP2004348095A (ja) * 2003-03-26 2004-12-09 National Institute Of Advanced Industrial & Technology トレーニングシステム
JP2006158760A (ja) * 2004-12-09 2006-06-22 Gifu Univ 医療用挿入練習装置
JP2008119259A (ja) * 2006-11-13 2008-05-29 Olympus Medical Systems Corp 内視鏡挿入形状解析システム
WO2016170656A1 (fr) * 2015-04-23 2016-10-27 オリンパス株式会社 Dispositif de traitement d'image, procédé de traitement d'image, et programme de traitement d'image
WO2017212725A1 (fr) * 2016-06-07 2017-12-14 オリンパス株式会社 Système d'observation médicale
WO2018235185A1 (fr) * 2017-06-21 2018-12-27 オリンパス株式会社 Dispositif d'aide à l'insertion, procédé d'aide à l'insertion et appareil d'endoscope comprenant un dispositif d'aide à l'insertion
WO2019107226A1 (fr) * 2017-11-29 2019-06-06 水野 裕子 Appareil endoscopique
WO2019138773A1 (fr) * 2018-01-10 2019-07-18 富士フイルム株式会社 Appareil de traitement d'image médicale, système d'endoscope, procédé de traitement d'image médicale et programme
WO2019155617A1 (fr) * 2018-02-09 2019-08-15 オリンパス株式会社 Système d'endoscope, dispositif de commande d'endoscope, procédé de fonctionnement pour système d'endoscope, et support d'informations stockant un programme de commande d'endoscope

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"ICIAP: International Conference on Image Analysis and Processing, 17th International Conference, Naples, Italy, September 9-13, 2013. Proceedings; [Lecture Notes in Computer Science; Lect.Notes Computer],", vol. 10550, 14 September 2017, SPRINGER, BERLIN, HEIDELBERG, article ARMIN MOHAMMAD ALI; BARNES NICK; ALVAREZ JOSE; LI HONGDONG; GRIMPEN FLORIAN; SALVADO OLIVIER: "Learning Camera Pose from Optical Colonoscopy Frames Through Deep Convolutional Neural Network (CNN)", pages: 50 - 59, XP047435531 *
GEIGER B; KIKINIS R: "Simulation of Endoscopy.", JOINT CONFERENCE. COMPUTER VISION, VIRTUAL REALITY AND ROBOTICSIN MEDICINE AND MEDICAL ROBOTICS AND COMPUTER-ASSISTED SURGERYPROCEEDINGS, vol. 905, 1995, pages 277 - 281, XP001036505 *
ZIEGLER R ET AL.: "A Virtual Reality Medical Training System.", LECTURE NOTES IN COMPUTER SCIENCE, 1995 *


Also Published As

Publication number Publication date
JP2021048927A (ja) 2021-04-01
JP6632020B1 (ja) 2020-01-15

Similar Documents

Publication Publication Date Title
WO2021054419A1 (fr) Système et procédé de traitement d'images endoscopiques
US11935429B2 (en) Endoscope simulator
US10360814B2 (en) Motion learning support apparatus
US9053641B2 (en) Real-time X-ray vision for healthcare simulation
SI20559A (sl) Sistem in postopek za izvajanje simuliranega medicinskega postopka
WO2020090945A1 (fr) Simulateur médical
JP6521511B2 (ja) 手術トレーニング装置
WO2021176664A1 (fr) Système et procédé d'aide à l'examen médical et programme
JP7457415B2 (ja) コンピュータプログラム、学習モデルの生成方法、及び支援装置
JP2005241883A (ja) カテーテル検査シミュレーションシステム
JP6014450B2 (ja) 動き学習支援装置
JP2021049314A (ja) 内視鏡画像処理システム
JP2021048928A (ja) 内視鏡画像処理システム
JP2016539767A (ja) 内視鏡検査用装置
WO2017126313A1 (fr) Apprentissage de la chirurgie et système de simulation faisant appel à un organe de modélisation à texture-bio
JP7378837B2 (ja) 医療シミュレータ及び医療シミュレータを用いた手技評価方法
Manfredi Endorobotics: Design, R&D and future trends
Wytyczak-Partyka et al. A novel interaction method for laparoscopic surgery training
Vajpeyi et al. A Colonoscopy Training Environment with Real-Time Pressure Monitoring
López et al. Work-in-Progress—Towards a Virtual Training Environment for Lower Gastrointestinal Endoscopy
Fujii et al. Development of Operation Recording System of Gastrointestinal Endoscopy Procedures
Velden Determining the optimal endoscopy movements for training and assessing psychomotor skills

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20864856

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20864856

Country of ref document: EP

Kind code of ref document: A1