WO2023013080A1 - Annotation support method, annotation support program, and annotation support device - Google Patents

Annotation support method, annotation support program, and annotation support device

Info

Publication number
WO2023013080A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
image
annotation
frames
group
Prior art date
Application number
PCT/JP2021/029450
Other languages
English (en)
Japanese (ja)
Inventor
浩一 新谷
憲 谷
学 市川
智子 後町
修 野中
Original Assignee
オリンパス株式会社
Priority date
Filing date
Publication date
Application filed by オリンパス株式会社
Priority to PCT/JP2021/029450
Publication of WO2023013080A1

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • the present invention relates to an annotation support method, an annotation support program, and an annotation support device that enable efficient annotation by selecting images to be used for annotation.
  • In deep learning, "learning" is first performed using annotated teacher data to calculate weights that allow an appropriate solution to be derived for unknown inputs; an inference model using the weights calculated by the learning is then generated, and this inference model is used to perform "inference" that derives a solution for an input.
  • an inspection image (moving image) acquired in an endoscopy can be used as an image that is the source of teacher data.
  • the number of moving images is extremely large, and an enormous amount of work is required to annotate the acquired inspection images.
  • Japanese Patent Publication No. 2020-518915 discloses a technique for dividing a fundus image into a plurality of regions and automatically annotating each region during fundus examination.
  • Japanese Patent Publication No. 2020-518915 is a technique for annotating divided regions of still images, and is not for moving images that change over time.
  • An object of the present invention is to provide an annotation support method, an annotation support program, and an annotation support device that can efficiently annotate a plurality of continuous images such as moving images.
  • An annotation support method acquires a moving image consisting of a plurality of frames obtained by continuous imaging, detects frames that include an image of a specific object among the frames of the acquired moving image, groups the detected series of frames as an annotation candidate image frame group, displays at least one of the start point frame and the end point frame of the grouped series of frames, and determines the annotation candidate image frame group as the series of frames from the start point frame to the end point frame in accordance with a correction operation on the start point frame and the end point frame of the grouped annotation candidate image frame group.
  • An annotation support program causes a computer to execute a procedure of: acquiring a moving image consisting of a plurality of frames obtained by continuous imaging; detecting frames that include an image of a specific object among the frames of the acquired moving image; grouping the detected series of frames as an annotation candidate image frame group; displaying at least one of the start point frame and the end point frame of the grouped series of frames; and determining the annotation candidate image frame group, consisting of the series of frames from the start point frame to the end point frame, in accordance with a correction operation on the start point frame and the end point frame of the grouped annotation candidate image frame group.
  • An annotation support device includes: a moving image acquisition unit that acquires a moving image composed of a plurality of frames obtained by continuous imaging; a grouping unit that detects frames including an image of a specific object among the frames of the acquired moving image and groups the detected series of frames as an annotation candidate image frame group; a control unit that determines the annotation candidate image frame group as the series of frames from the start point frame to the end point frame in accordance with a correction operation on the start point frame and the end point frame of the grouped annotation candidate image frame group; and a metadata addition unit that adds, to each frame of the moving image, metadata indicating that the frame belongs to the annotation candidate image frame group.
  • Another annotation support method acquires a moving image consisting of a plurality of frames obtained by continuous imaging, detects frames that include an image of a specific object among the frames of the acquired moving image, groups the detected series of frames as an annotation candidate image frame group, and classifies the grouped annotation candidate image frame group into images for a first annotator and images for a second annotator different from the first annotator.
  • FIG. 1 is a schematic configuration diagram showing a system including an annotation support device according to a first embodiment of the present invention.
  • FIGS. 2 and 3 are explanatory diagrams for explaining a group of annotation candidate image frames.
  • FIG. 4 is an explanatory diagram showing an example of an image having an image portion of forceps as a treatment instrument.
  • FIG. 5 is an explanatory diagram for explaining a start point frame and an end point frame.
  • FIG. 6 is a flowchart for explaining processing (grouping control) for determining an annotation candidate image frame group G.
  • FIGS. 7 and 8 are explanatory diagrams for explaining selection of an annotation candidate image frame group G.
  • FIG. 9 is a flowchart for explaining annotation work in the first embodiment.
  • FIG. 10 is an explanatory diagram showing how annotation work is performed.
  • FIG. 11 is an explanatory diagram for explaining a complicated shape.
  • FIG. 12 is a graph showing a method of determining a complicated shape.
  • FIG. 13 is a flowchart showing processing for determining annotation candidate image frame groups employed in a second embodiment of the present invention.
  • FIGS. 14 and 15 are explanatory diagrams for explaining a network for generating an inference model for obtaining an annotation candidate image frame group G.
  • FIG. 16 is an explanatory diagram showing a modification.
  • FIG. 1 is a schematic configuration diagram showing a system including an annotation support device according to the first embodiment of the present invention.
  • The annotation in the present embodiment specifies the area occupied in the image by an object included in a specific frame of a moving image composed of a plurality of consecutively captured frames.
  • In the following description, the object is a lesion in a living body.
  • This embodiment is described for a moving image of a living body obtained by, for example, an endoscope, in which the lesion appears in images included in specific frames of the moving image.
  • This embodiment reduces the load of annotation by selecting a plurality of frames to be annotated (hereinafter referred to as annotation candidate image frame group) from predetermined moving images such as inspection images.
  • 2 and 3 are explanatory diagrams for explaining the annotation candidate image frame group.
  • an image (frame) including an image portion considered to be a lesion (object) is selected as a representative image (hereinafter also referred to as a representative frame) from among the frames of the moving image.
  • an image with a lesion in the center of the image is suitable as the representative frame.
  • an image (inspection image) captured by an endoscope is used as the moving image.
  • the endoscope has an elongated insertion section, and an imaging device is provided at the tip of the insertion section.
  • the insertion portion of the endoscope is inserted into the body, and moving images of the inside of the body are acquired by capturing images with an imaging device while moving the tip of the insertion portion.
  • FIG. 2 shows how moving images (inspection images) inside the body are acquired while moving the imaging range, and each square frame in FIG. 2 indicates each frame of the inspection image.
  • FIG. 2 shows, for example, the acquired frames arranged at positions corresponding to the imaging range.
  • FIG. 2 shows that the imaging ranges of frames may overlap.
  • In FIG. 2, the frames are shown as if shot sequentially from left to right on the page; the order in which overlapping frames are drawn does not strictly indicate the order of shooting, although observation often does proceed in this order. The overlap of the frames is also drawn for explanation purposes; in reality, the same place is often observed at different angles and distances, and the illustration is intended to include such situations as well.
  • the inspection image includes frame groups F1, F2, and F3 each consisting of a series of frames.
  • FIG. 3 shows an example of an image of each frame of the frame group F2 among the inspection images of FIG.
  • The square frames in FIG. 3 indicate the frames f1 to f5 in the frame group F2, and show that frame f3 includes an image f3a of an object such as a lesion (indicated by a face mark in FIG. 3).
  • the image f3a is obtained by imaging the object from the front, and the image f3a is positioned substantially in the center of the frame f3.
  • This frame f3 is assumed to be the representative frame, indicated by diagonal lines. In the frames before and after frame f3, as a result of imaging while moving the insertion portion, the object is captured from directions other than the front and at various positions in the image.
  • frames f1, f2, f4, and f5 include images f1a, f2a, f4a, and f5a of objects corresponding to image f3a, respectively.
  • the images f1a, f2a, f4a, and f5a have shapes obtained by deforming the image f3a by imaging the object from various angles other than the front.
  • the frame f3 is selected as the representative frame. Furthermore, not only the frame f3 but also a series of frames f1 to f5 obtained by photographing the object from various angles are set as the annotation candidate image frame group.
  • An inference model created using such teacher data is considered superior to one created using teacher data obtained by annotating only the representative frames, and high inference accuracy can be expected even when conditions such as the imaging position and angle change. If only images obtained under specific conditions, such as images captured from a position directly facing the lesion, are used as training data, the accuracy and reliability of judgment will be poor for images obtained under other conditions.
  • The operator may also fail to acquire a carefully observed image because the lesion was overlooked, and the system should still be able to make the operator aware of such an oversight. Therefore, as described above, in the present embodiment, even if the frames before and after the representative frame do not show the object at the ideal in-screen position or viewing angle, or have imperfect focus or exposure, the series of frames including the object is set as the annotation candidate image frame group G so that an inference model capable of detecting the object under such conditions can be obtained.
  • the annotation candidate image frame group G among the inspection images is the object of annotation.
  • the system in FIG. 1 is composed of a medical system 10 and an information processing system 20.
  • the medical system 10 includes a control unit 11 that controls the entire medical system 10 .
  • the control unit 11 may be configured by a processor using a CPU (Central Processing Unit), FPGA (Field Programmable Gate Array), or the like.
  • the control unit 11 may operate according to a program stored in a memory (not shown) to control each unit, or may implement part or all of its functions with hardware electronic circuits. good too.
  • the image acquisition unit 12 includes an imaging sensor having an imaging element (not shown) such as a CMOS sensor or CCD, and an imaging optical system (not shown).
  • the image acquisition unit 12 also has an image processing function of obtaining an inspection image by performing image processing on the output of the imaging sensor.
  • the image acquisition unit 12 is controlled by the control unit 11 to acquire an inspection image of the subject.
  • The medical system 10 includes, for example, an endoscope for imaging the inside of the body (the inside of organs such as body cavities and gastrointestinal tracts), a video processor for driving the endoscope and acquiring the inspection images obtained by imaging with the endoscope, and a light source device.
  • an endoscope has an elongated insertion section, and an imaging sensor forming an image acquisition section 12 is provided at the distal end of the insertion section.
  • the insertion section is inserted into the body cavity, and moving images (inspection images) of the body cavity illuminated by illumination light from the light source device are acquired by the imaging sensor while the insertion section is moved to shift the imaging range.
  • the moving image acquired by the image acquisition unit 12 can be supplied to the display unit 14.
  • the display unit 14 can be configured by a display device such as an LCD.
  • the display unit 14 has a display screen 14a, and can display an input moving image on the display screen 14a.
  • the medical system 10 is also provided with an inference engine 17 .
  • The inference engine 17 has an inference model for detecting an object such as a lesion in the inspection image, and uses the inference model to detect the object in the moving image acquired by the image acquisition unit 12 and obtain the detection result (inference result) together with information on its reliability.
  • the inference result of the inference engine 17 and reliability information are output to the display unit 14 .
  • the display unit 14 can display the inference result and reliability information on the display screen 14a.
  • a treatment section 15 is provided in the medical system 10 .
  • the treatment section 15 implements various treatments in endoscopic observation.
  • the treatment section 15 may be configured by an operation section of an endoscope, or may be configured by an operation section of a video processor or a light source device.
  • the treatment section 15 may be configured to generate a signal according to the operation of a release button provided on the operation section of the endoscope.
  • The treatment section 15 may also be configured to generate a signal in response to an operation of switching the illumination light of the light source device between white light for normal observation and narrow band light for narrow band imaging (NBI) observation.
  • the treatment section 15 generates various signals according to various operations performed by the operator during endoscope observation.
  • A signal from the treatment section 15 is given to the metadata addition unit 11a of the control unit 11.
  • The metadata addition unit 11a adds metadata based on the signal from the treatment section 15 to the captured image from the image acquisition unit 12. For example, when the operator operates the release button to shoot a still image, metadata indicating that a still image recording operation was performed at this timing is added to the captured image acquired by the image acquisition unit 12.
  • the moving image acquired by the image acquisition unit 12 and to which the metadata is added by the metadata addition unit 11a is also supplied to the image output unit 13.
  • the image output unit 13 outputs the input moving image to the information processing system 20 .
  • the information processing system 20 creates an inference model to be used in the inference engine 17 of the medical system 10 .
  • the information processing system 20 has a control unit 21 .
  • the control unit 21 may be configured by a processor using a CPU, FPGA, or the like.
  • the control unit 21 may operate according to a program stored in a memory (not shown) to control each unit, or may implement a part or all of its functions with a hardware electronic circuit.
  • the control unit 21 as a grouping unit includes a representative image determination unit 22 , a grouping unit 23 , a metadata reattachment unit 24 , a recording control unit 25 , a learning requesting unit 26 and a display control unit 27 .
  • the information processing system 20 has a recording unit 31 , an annotation terminal unit 41 and a learning unit 51 in addition to the control unit 21 .
  • a recording control unit 25 of the control unit 21 controls recording and reproduction in the recording unit 31 .
  • the recording unit 31 can be composed of a predetermined recording medium such as a hard disk or a memory medium.
  • The recording unit 31 has a plurality of recording areas 32-1, 32-2, 32-3, and so on (each hereinafter also referred to simply as a recording area 32).
  • the recording area 32 is an area for recording image information for each examination.
  • Each recording area 32 has an image recording area 33a for recording image information and an operation history information recording area 33b for recording image metadata.
  • For example, the images of examination 1 are recorded in the image recording area 33a of the recording area 32-1. That is, the inspection images supplied from the medical system 10 for examination 1 are recorded in the image recording area 33a, together with the image information of a plurality of annotation candidate image frame groups G(1), G(2), ..., G(n) selected from the inspection images of examination 1. Information similar to that in the recording area 32-1 is also recorded in the recording areas 32-2 and 32-3.
  • a representative image determination unit 22 in the control unit 21 reads the inspection image from the recording unit 31 and determines a representative frame.
  • the representative image determination unit 22 may determine the representative frame based on the feature quantity (feature pattern) of the object such as the lesion.
  • the representative image determining unit 22 may determine the representative frame by AI using an inference model for detecting an object such as a lesion. Further, the representative image determining section 22 may determine the representative frame based on the position within the screen and the size of the object.
  • the representative image determination unit 22 has a memory (not shown) that stores information on the characteristic pattern of the target object.
  • the representative image determining unit 22 determines whether or not the frame contains an image of the object by comparing the characteristic pattern of the object stored in the memory with the characteristic pattern of the image in the frame image. Further, the representative image determination unit 22 determines whether or not the image of the detected object is positioned near the center of the image.
  • lesions such as cancer may have a blood vessel pattern different from that of normal tissue.
  • the memory stores a characteristic pattern of an object such as a lesion.
  • An examination image is obtained by a doctor performing an endoscopic examination of a body cavity; when the doctor finds a site of interest such as a lesion and observes it carefully, that site tends to be located in the center of the image.
  • The representative image determination unit 22 reads out the characteristic pattern from the memory and performs a first process (first image analysis process) on the inspection image to detect whether or not there is an image portion whose pattern matches the characteristic pattern.
  • The representative image determination unit 22 may determine the image (frame) as the representative frame when an object present in the center of the image is detected by the first process. It should be noted that the representative image determination unit 22 may add, to the condition for determining the representative frame, the requirement that the proportion of the image occupied by the object be greater than a predetermined threshold.
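  • As a rough illustration of this first process, the following Python sketch (not taken from the patent; the function name, the thresholds, and the assumed binary feature mask are all hypothetical) checks whether a matched characteristic pattern lies near the image center and occupies at least a minimum proportion of the image:

```python
import numpy as np

def is_representative_candidate(feature_mask, center_tol=0.15, area_thresh=0.05):
    """Sketch of the first process under stated assumptions.

    feature_mask -- H x W boolean array, True where the characteristic pattern
                    read from memory was matched in this frame (assumed input)
    center_tol   -- allowed offset of the object centroid from the image center,
                    as a fraction of the image diagonal (assumed value)
    area_thresh  -- minimum proportion of the image the object must occupy
                    (assumed value)
    """
    h, w = feature_mask.shape
    ys, xs = np.nonzero(feature_mask)
    if ys.size == 0:
        return False                                  # no matching pattern in this frame
    if ys.size / float(h * w) < area_thresh:
        return False                                  # object occupies too small a proportion
    cy, cx = ys.mean(), xs.mean()                     # centroid of the matched pattern
    offset = np.hypot(cy - h / 2.0, cx - w / 2.0)     # distance from the image center
    return offset <= center_tol * np.hypot(h, w)
```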
  • the lesion is often present in a part of the body cavity.
  • The representative image determination unit 22 also performs a second process (second image analysis process) on the inspection image to detect whether or not the pattern in the center of the image differs from the patterns in the center of most other images.
  • When the second process detects that the central pattern differs from that of most other images, the representative image determination unit 22 may determine the image (frame) as the representative frame.
  • the representative image determination unit 22 may determine the representative frame based on the processing results of both the first and second processing. Furthermore, the representative image determination unit 22 may determine the representative frame based on changes in various images caused by doctor's operations.
  • the doctor when a lesion is discovered during an endoscopy, the doctor tries to check it carefully from various angles.
  • various forms of image frames can be obtained for the same object by viewing it from a distance or from a close distance, or by changing the exposure, focus, or zoom.
  • The doctor may intervene with a treatment instrument, release washing water, spray pigment, stain, magnify the observation, or perform special light observation. Therefore, the representative image determination unit 22 performs a third process (third image analysis process) on the inspection image to detect at least one of intervention of a treatment tool, release of washing water, pigment dispersion, staining, special light observation, magnifying observation, and the like.
  • From the group of images thus obtained, the representative image determination unit 22 may determine as the representative frame, for example, a frame in which the object occupies at least a specific area in the center of the screen under white light, or a frame in which the object is photographed from a directly facing position (a position at which the optical axis of the endoscope optical system is substantially perpendicular to the direction in which the object extends in the image).
  • The representative image determination unit 22 may determine the representative frame through the first to third processes. For example, when a change in the image caused by the operator's operation is detected by the third process for a frame in which the object has been detected by the first and second processes, that frame may be determined as the representative frame.
  • FIG. 4 is an explanatory diagram showing an example of an image having an image portion of forceps as a treatment tool.
  • An image frame f10 in FIG. 4 shows a state in which an affected area in the living body is being treated with forceps or the like. Since the image f10a of the forceps has a relatively high brightness and a known shape, for example, the characteristic pattern of the treatment tool is read out from the memory storing the characteristic pattern, and is compared with the pattern of the image by the third processing. Accordingly, the forceps image f10a can be detected relatively easily. In addition, since an image portion with high brightness is generated in the washing water, the image portion of the washing water can be easily detected by the third processing. Also, with respect to pigment dispersion, staining, special light observation, magnifying observation, and the like, the third processing makes it possible to detect these image portions relatively easily.
  • The representative image determination unit 22 may also detect a still image operation by a fourth process based on the metadata, and determine the frame of the inspection image corresponding to the timing at which the still image operation was performed as the representative frame.
  • The representative image determination unit 22 may determine the representative frame by performing the first to fourth processes. For example, when a still image operation by the operator is detected by the fourth process for a frame in which the object has been detected by the first to third processes, the frame of the inspection image corresponding to that timing may be determined as the representative frame.
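  • As a non-authoritative sketch of how the first to fourth processes might be combined, the following Python function (the callable parameters and event names are assumptions, not APIs described in the patent) prefers a frame at a still-image operation, then a frame accompanied by operator actions such as treatment-tool intervention or NBI switching, then any frame in which the object was detected near the center:

```python
def select_representative_frame(frames, passes_first_process, operator_events_of):
    """Hypothetical sketch combining the first to fourth processes.

    passes_first_process(frame) -> True if the characteristic pattern is found
                                   near the image center (1st/2nd process)
    operator_events_of(frame)   -> set of operator events from metadata, e.g.
                                   {"forceps", "washing", "staining", "nbi",
                                    "magnify", "still_image"} (3rd/4th process)
    Returns the index of the frame chosen as the representative frame, or None.
    """
    best = None
    for i, frame in enumerate(frames):
        if not passes_first_process(frame):
            continue                                   # object not detected (1st/2nd process)
        events = operator_events_of(frame)
        if "still_image" in events:
            return i                                   # 4th process: release-button timing wins
        if events & {"forceps", "washing", "staining", "nbi", "magnify"}:
            best = i                                   # 3rd process: operator paid special attention
        elif best is None:
            best = i                                   # fall back to pattern detection alone
    return best
```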
  • the inside of the large intestine is imaged by an imaging device while the insertion portion of the endoscope is being pulled out.
  • When a lesion is found in the examination image, the vicinity of the lesion is imaged by the imaging element and observed intensively. Therefore, it is considered that the series of frames before and after the representative frame include the image of the object.
  • an imaging element is used to image the entire stomach wall for observation. Even in this case, it is considered that the series of frames before and after the representative frame include the image of the object. Therefore, it is preferable to annotate a series of frames before and after the representative frame.
  • the grouping unit 23 groups the frames before and after the representative frame, including the representative frame, to form an annotation candidate image frame group G. For example, the grouping unit 23 sets, among the frames before and after the representative frame, frames having an image portion with a pattern similar to the characteristic pattern of the object as annotation candidate image frames to be annotated. The grouping unit 23 groups a series of annotation candidate image frames, including representative frames, into a group G of annotation candidate image frames.
  • an annotation candidate image frame group G consisting of continuous frames is set.
  • The annotation candidate image frame group G consists of frames that are continuous with the representative frame and that contain images of characteristic patterns similar to the image of the object included in the representative frame. Therefore, once the image area of the object in the representative frame has been annotated, it is considered relatively easy to annotate the other frames included in the same annotation candidate image frame group G by referring to this annotation. For this reason, in the present embodiment, the series of frames before and after the representative frame that have an image portion with a characteristic pattern similar to that of the object are included in the annotation candidate image frame group G.
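  • A minimal sketch of this grouping step, assuming a hypothetical per-frame similarity predicate rather than an API from the patent, could extend the group outward from the representative frame as follows:

```python
def group_around_representative(frames, rep_idx, has_similar_pattern):
    """Sketch of how the grouping unit 23 could form group G: starting from the
    representative frame, extend backwards and forwards while consecutive frames
    still contain a pattern similar to the object's characteristic pattern.
    has_similar_pattern(frame) is an assumed predicate.
    Returns (start_idx, end_idx), i.e. the start point frame IsG and end point frame IeG.
    """
    start = end = rep_idx
    while start > 0 and has_similar_pattern(frames[start - 1]):
        start -= 1                      # extend toward the start point frame IsG
    while end < len(frames) - 1 and has_similar_pattern(frames[end + 1]):
        end += 1                        # extend toward the end point frame IeG
    return start, end
```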
  • the grouping unit 23 may allow the user to determine whether or not the start and end frames in the annotation candidate image frame group G should be included in the annotation candidate image frame group G.
  • FIG. 5 is an explanatory diagram for explaining the start point frame and the end point frame.
  • FIG. 5 shows each frame in the annotation candidate image frame group G using the same notation method as in FIG.
  • the shaded area in FIG. 5 indicates the representative frame IG.
  • The frames enclosed by the outer frame constitute the annotation candidate image frame group G.
  • The start point frame IsG and the end point frame IeG are frames that the grouping unit 23 has determined to be part of the annotation candidate image frame group G because patterns similar to the characteristic pattern of the object in the representative frame IG exist in their images.
  • the user determines whether or not to include the starting frame IsG and the ending frame IeG in the annotation candidate image frame group G.
  • the display control unit 27 causes the display device (not shown) to display the images of the start point frame IsG and the end point frame IeG specified by the grouping unit 23 .
  • the start point frame IsG and the end point frame IeG may be given to the annotation terminal unit 41 and the annotation terminal unit 41 may display the start point frame IsG and the end point frame IeG on the display unit 42 .
  • The user determines whether or not each of the displayed start point frame IsG and end point frame IeG is to be included in the annotation candidate image frame group G, and inputs the instruction using an input device (not shown) or the input unit 43 of the annotation terminal unit 41.
  • the grouping unit 23 determines whether or not to include the starting frame IsG and the ending frame IeG in the annotation candidate image frame group G according to the user's instruction.
  • the grouping unit 23 displays only the start frame IsG and the end frame IeG, and allows the user to decide whether or not to include them in the annotation candidate image frame group G.
  • Other frames may also be displayed, and the user may decide whether to include them in the annotation candidate image frame group G or not.
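  • A minimal sketch of this confirmation step, assuming a hypothetical user callback, might look as follows; the frames from the confirmed start point frame to the confirmed end point frame then constitute the annotation candidate image frame group G:

```python
def confirm_group_boundaries(start_idx, end_idx, user_keeps_frame):
    """Sketch of the boundary confirmation described above: the start point frame
    IsG and end point frame IeG are displayed, and each is kept in or excluded
    from group G according to the user's instruction.  user_keeps_frame(frame_idx)
    is an assumed callback returning True when the user includes the displayed frame.
    """
    if not user_keeps_frame(start_idx):
        start_idx += 1                  # exclude the displayed start point frame
    if not user_keeps_frame(end_idx):
        end_idx -= 1                    # exclude the displayed end point frame
    return start_idx, end_idx           # group G runs from start_idx to end_idx
```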
  • A metadata reattachment unit 24 serving as a metadata addition unit adds, to each target frame of the inspection image read from the recording unit 31, metadata indicating the representative frame IG, the start point frame IsG, the end point frame IeG, and the annotation candidate image frame group G.
  • The representative image determination unit 22 and the grouping unit 23 determine the representative frame IG, the start point frame IsG, the end point frame IeG, and the annotation candidate image frame group G, and the inspection image to which the metadata reattachment unit 24 has added the corresponding metadata is given to the recording unit 31 and recorded.
  • the recording control unit 25 also provides the annotation terminal unit 41 with information on the representative frame IG, the start frame IsG, the end frame IeG, and the annotation candidate image frame group G.
  • the annotation terminal unit 41 can be configured by, for example, a computer system.
  • the annotation terminal unit 41 has a display unit 42 , an input unit 43 and an ID determination unit 44 .
  • the display unit 42 can be configured by a display device such as an LCD.
  • the display unit 42 has a display screen, and displays the image supplied from the recording control unit 25 on the display screen.
  • the input unit 43 may be configured by a predetermined input device such as a mouse.
  • the input unit 43 receives a user's input operation and annotates the image.
  • the input unit 43 can be operated to specify a position on the image on the display unit 42 and specify a specific area on the image.
  • the input unit 43 can apply an annotation specifying the area occupied in the image of the object in each frame of the annotation candidate image frame group G.
  • The input unit 43 may be configured by a touch panel provided on the display screen of the display unit 42. In this case, the input unit 43 generates an operation signal based on the user's touch operation. For example, the user can annotate by touching the touch panel that constitutes the input unit 43 and sliding a finger over the area occupied by the object in the image of the frame.
  • An ID determination unit 44 is provided to identify a person who performs annotation work.
  • the ID determination unit 44 may be configured by an input device such as a keyboard, or may be configured by a card reader or the like for reading the ID card of the worker.
  • The ID determination unit 44 identifies the person who performed the annotation work, and metadata identifying that person is obtained for each frame annotated by that person. This metadata is added to the corresponding frame by the recording control section 25 and recorded.
  • the recording control unit 25 supplies the information of the annotation candidate image frame group G to which the annotation has been applied to the recording unit 31 for recording. Further, the learning requesting section 26 supplies the annotation candidate image frame group G to which the annotation is applied to the learning section 51 to request learning.
  • the learning unit 51 can be configured by, for example, a computer system capable of deep learning.
  • the learning unit 51 is provided with the annotation candidate image frame group G to which the annotation is applied from the recording unit 31 as teacher data.
  • the learning unit 51 generates an inference model for detecting an object by learning using teacher data.
  • the learning unit 51 can output the generated inference model.
  • “Deep learning” is machine learning using neural networks in which the processing is structured in multiple layers.
  • a typical example is a "forward propagation neural network” that sends information from front to back and makes decisions.
  • In the simplest case, three layers suffice: an input layer consisting of N1 neurons, an intermediate layer consisting of N2 neurons given by parameters, and an output layer consisting of N3 neurons corresponding to the number of classes to be discriminated.
  • The neurons of the input layer and the intermediate layer, and those of the intermediate layer and the output layer, are connected by connection weights, and a bias value is added in the intermediate layer and the output layer, which makes it easy to form logic gates.
  • Three layers may be sufficient for simple discrimination, but if a large number of intermediate layers are used, it becomes possible to learn how to combine a plurality of feature quantities in the process of machine learning. In recent years, those with 9 to 152 layers have become practical due to the relationship between the time required for learning, judgment accuracy, and energy consumption.
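  • As a rough illustration only (not code from the patent), a forward pass through such a three-layer network with N1 input neurons, N2 intermediate neurons, and N3 output neurons can be written as follows; the connection weights and bias values are what learning adjusts, and the ReLU activation and softmax output are assumed choices:

```python
import numpy as np

def forward(x, w1, b1, w2, b2):
    """Sketch of the three-layer forward propagation network described above.
    x has N1 elements, the intermediate layer has N2 neurons, and the output
    layer has N3 neurons, one per class to discriminate."""
    hidden = np.maximum(0.0, w1 @ x + b1)             # N2 intermediate neurons (ReLU)
    logits = w2 @ hidden + b2                         # N3 output neurons
    return np.exp(logits) / np.exp(logits).sum()      # class probabilities (softmax)

# Example shapes: N1 inputs, N2 intermediate neurons, N3 classes.
N1, N2, N3 = 16, 8, 3
rng = np.random.default_rng(0)
x = rng.normal(size=N1)
w1, b1 = rng.normal(size=(N2, N1)), np.zeros(N2)
w2, b2 = rng.normal(size=(N3, N2)), np.zeros(N3)
print(forward(x, w1, b1, w2, b2))                     # probabilities summing to 1.0
```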
  • Network architectures mentioned in this context include R-CNN (Regions with CNN features), CNN (Convolutional Neural Network), FCN (Fully Convolutional Networks), and fully-connected recurrent neural networks, and such processing may be performed by an NPU (neural network processing unit).
  • the information processing system 20 is provided with a communication section 28, and the medical system 10 is provided with a communication section 16.
  • the communication units 16 and 28 are configured by a communication circuit adopting a predetermined communication standard, and can transmit and receive data to and from each other.
  • the learning unit 51 supplies the generated inference model to the medical system 10 via the communication units 28 and 16 .
  • the inference engine 17 stores the inference model received via the communication unit 16 in a memory (not shown), and uses the inference model read from the memory to detect a lesion or the like.
  • FIG. 6 is a flowchart for explaining the process of determining the annotation candidate image frame group G (grouping control), and FIGS. 7 and 8 are explanatory diagrams for explaining the selection of the annotation candidate image frame group G.
  • FIG. 9 is a flowchart for explaining the annotation work in the first embodiment, and FIG. 10 is an explanatory diagram showing how the annotation work is done. FIG. 11 is an explanatory diagram for explaining a complicated shape, and FIG. 12 is a graph showing a method of determining a complicated shape.
  • FIG. 7 shows inspection images recorded in the recording area 32 of the recording unit 31 .
  • FIG. 7 shows each frame of the inspection image in the same manner as in FIG.
  • The inspection image in FIG. 7 has frame groups F11, F12, and F13.
  • Each of the frame groups F11, F12, and F13 has a plurality of frames.
  • the control unit 21 of the information processing system 20 selects a representative frame using the representative image determination unit 22 .
  • Step S1 in FIG. 6 shows an example of selection of a representative frame.
  • the representative image determining section 22 selects a representative frame depending on whether or not a characteristic pattern PA exists in the center of the image.
  • the representative image determination unit 22 may select a representative frame through the above-described first to fourth processes, for example.
  • the representative image determination unit 22 sequentially reads each frame of the frame group F11, each frame of the frame group F12, and each frame of the frame group F13, and determines whether or not the characteristic pattern PA is positioned at the center of the image of each frame. judge.
  • In step S5, it is determined whether all frames have been processed. If the determination of all frames has not been completed (NO in step S5), the process returns to step S1 to determine whether or not the next frame is the representative frame.
  • The grouping unit 23 selects the frames constituting the annotation candidate image frame group G in steps S2 to S4. In step S2, the grouping unit 23 determines, for the frames before and after the representative frame, whether or not a pattern similar to the characteristic pattern PA is present in the image. Since the images dealt with here are acquired continuously at a high frame rate to ensure excellent visibility for observation, differentiation, and diagnosis, and since rapid endoscope movement is not recommended, the frames before and after the representative frame are almost the same (the object is the same and the imaging conditions are similar), so grouping can be performed by analyzing these frames and the doctor's annotations can easily be referred to.
  • frames that are obtained close to each other in terms of time may be compared, and if the difference in image characteristics is small, they may be grouped together.
  • The object may also be observed while changing the optical parameters, exposure, image processing, and the like at the time of imaging, and the appearance of its shape may change due to the handling of treatment tools.
  • In such cases the frames may look as if they belong to a different group, but it is often better to classify them into the same group: grouping images that show the same object into one group, rather than forming a large number of groups, reduces the number of groups and allows the images in a group to be handled collectively, enabling efficient work.
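  • One hedged way to realize such grouping is to compare temporally adjacent frames and tolerate a few frames whose appearance changes abruptly (for example due to NBI switching or a treatment tool) before closing a group; all thresholds below are assumed values, not figures from the patent:

```python
import numpy as np

def group_by_temporal_similarity(features, max_gap=2, diff_thresh=0.3):
    """Sketch of grouping temporally adjacent frames whose image features differ
    only slightly.  features is a list of per-frame feature vectors (e.g. colour
    histograms); max_gap tolerates a few abruptly different frames without
    splitting the group.  Returns a list of (start_idx, end_idx) groups.
    """
    groups, start, gap = [], 0, 0
    for i in range(1, len(features)):
        diff = np.linalg.norm(features[i] - features[i - 1])
        if diff <= diff_thresh:
            gap = 0                              # same object, similar imaging conditions
        else:
            gap += 1
            if gap > max_gap:                    # sustained change: close the current group
                groups.append((start, i - gap))
                start, gap = i, 0
    groups.append((start, len(features) - 1))    # close the final group
    return groups
```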
  • FIG. 8 shows an example of an image of frame f20 and frames before and after frame f20.
  • an image f20a of the characteristic pattern PA is present approximately in the center of the image, which is indicated by a circle.
  • The frames f18 and f19 preceding the representative frame f20, and the frames f21 and f22 following it, also have patterns similar to the characteristic pattern PA.
  • the grouping unit 23 sets a series of frames including the representative frame f20 and having a pattern similar to the characteristic pattern PA as the annotation candidate image frame group G(n) (where n is 1, 2, . . . ) (step S3 ).
  • the frames other than frame f21 are images obtained by white light observation, and frame f21 is an image obtained by narrow band light (NBI) observation, and there is a possibility that similar pattern PA will not be detected from frame f21.
  • the grouping unit 23 records information on the representative frame IG(n), the starting frame IsG(n), and the ending frame IeG(n) in the recording unit 31 .
  • The metadata reattachment unit 24 records, as metadata, information indicating that the corresponding frames are the representative frame IG(n), the start point frame IsG(n), and the end point frame IeG(n).
  • In step S5, the control unit 21 returns the process to step S1 when the determination of all frames has not been completed, and repeats steps S1 to S5.
  • a plurality of annotation candidate image frame groups G may be detected for one inspection image, and recording in the recording unit 31 is performed while increasing n for each annotation candidate image frame group G.
  • When the determination of all frames has been completed in step S5, the control unit 21 ends the processing.
  • In step S6, the metadata reattachment unit 24 of the control unit 21 may attach metadata indicating that the frames other than those in the annotation candidate image frame group G are not frames to be annotated.
  • the annotation candidate image frame group G is read by the recording control unit 25 and supplied to the annotation terminal unit 41 .
  • For example, when the frame group F12 in FIG. 7 is set as the annotation candidate image frame group G and none of the frames of the frame groups F11 and F13 is a frame to be annotated, only the frame group F12 is supplied to the annotation terminal unit 41.
  • Since the annotation candidate image frame group G, which is only a part of the inspection image, is used as the annotation target, the amount of annotation work is significantly reduced.
  • Moreover, the annotation candidate image frame group G is obtained by imaging the same object, such as a lesion, from various angles, and an inference model obtained by learning with teacher data produced by annotating this annotation candidate image frame group G is considered to be extremely effective in detecting lesions.
  • the start point frame IsG and the end point frame IeG are confirmed, and the annotation candidate image frame group G is determined. This work may be performed in step S3, or may be performed when an annotation is requested.
  • FIG. 9 shows a series of flows for requesting annotation work and annotation work
  • FIG. 10 shows how doctors and assistants perform annotation work.
  • the example of FIG. 9 is an example in which the start point frame IsG and the end point frame IeG are determined when an annotation work is requested.
  • Steps S11 to S15 in FIG. 9 relate to processing before requesting annotation, and processing is performed using the annotation terminal unit 41, for example.
  • Steps S16 and S17 relate to annotation by a doctor, and are processed by, for example, the computer system 52a shown in FIG. 10.
  • Steps S17 to S19 relate to annotation by the assistant, and are processed by, for example, the computer system 52b shown in FIG. 10.
  • In step S11, the recording control unit 25 reads out the annotation candidate image frame groups G in order, starting from the annotation candidate image frame group G(1).
  • The control unit 21 gives each frame of the read-out annotation candidate image frame group G to the annotation terminal unit 41, and the annotation terminal unit 41 displays each given frame on the display unit 42.
  • In step S12, the representative frame IG(n) is displayed.
  • The operator of the annotation terminal unit 41 may confirm the displayed representative frame IG(n) and, if necessary, reselect the representative frame IG(n).
  • In step S13, the start point frame IsG(n) and the end point frame IeG(n) are displayed.
  • The operator of the annotation terminal unit 41 may confirm the displayed start point frame IsG(n) and end point frame IeG(n) and, if necessary, reselect them.
  • The number of frames in the annotation candidate image frame group G may be reduced by resetting a frame after the start point frame IsG(n) as the new start point frame IsG(n), or by resetting a frame before the end point frame IeG(n) as the new end point frame IeG(n).
  • Conversely, so that a frame before the start point frame IsG(n) can be reset as the start point frame IsG(n), the recording control unit 25 also reads out frames preceding the annotation candidate image frame group G and gives them to the annotation terminal unit 41; similarly, so that a frame after the end point frame IeG(n) can be reset as the end point frame IeG(n), frames following the annotation candidate image frame group G are also read out and given to the annotation terminal unit 41.
  • In step S14, the control unit 21 determines whether or not an annotation work request has been issued by a doctor. If it has not, in the next step S15 it is determined whether or not confirmation and correction of the representative frame IG(n), start point frame IsG(n), and end point frame IeG(n) have been completed for all annotation candidate image frame groups G(n). If the processing for all annotation candidate image frame groups G(n) has not been completed, the control unit 21 returns the processing to step S12 and repeats steps S12 to S15 until the processing for all annotation candidate image frame groups G(n) is finished.
  • Annotation work is then performed on the annotation candidate image frame group G(n) for which the representative frame IG(n), the start point frame IsG(n), and the end point frame IeG(n) have been confirmed. This work may be performed by the annotation terminal unit 41 or by a computer system having the same configuration as the annotation terminal unit 41.
  • the annotation candidate image frame group G(n) may be sent to the computer system 52a operated by the doctor 51a and the computer system 52b operated by the assistant 51b to carry out the annotation work.
  • steps S12 and S13 may be omitted, and the annotation candidate image frame group G(n) obtained by the representative image determination unit 22 and the grouping unit 23 may be used as they are to request annotation work.
  • Although FIG. 9 shows an example in which the doctor and the assistant cooperate in performing the annotation work, the doctor alone may perform the annotation work.
  • In step S16, annotation is performed by the doctor.
  • For example, the frame in which the object appears largest and is entirely contained within the image is annotated first.
  • the representative frame IG(n) may be the frame to be annotated.
  • a doctor 51a operates a computer system 52a to perform annotation work.
  • the computer system 52a has a display screen 53a with a touch panel.
  • an image 54a of the representative frame IG(n) is displayed on the display screen 53a.
  • the image 54a includes an image 55a of an object such as a lesion.
  • The doctor 51a traces the outline of the image 55a with a finger, as indicated by the circular arrow, thereby annotating the area occupied by the object in the image 55a.
  • the computer system 52a determines whether or not the annotation has been completed by the operation of the doctor 51a (step S17). Step S16 is repeated until the annotation is completed, and when the annotation is completed, the annotation result by the doctor 51a is sent to the computer system 52b operated by the assistant 51b.
  • The assistant 51b annotates the frames of the annotation candidate image frame group G(n) other than those annotated by the doctor 51a, while referring to the doctor's annotation result. Since the images were captured continuously at a high frame rate to ensure high visibility, and since rapid movement of the endoscope is not recommended, the frames before and after are almost the same (the object is the same and the imaging conditions are similar), so it is easy to refer to the doctor's annotations.
  • an assistant 51b operates a computer system 52b to perform annotation work.
  • the computer system 52b has a display screen 53b.
  • an image 54b of a frame to be annotated is displayed on the display screen 53b.
  • the image 54b includes an image 55b of an object such as a lesion, and an image 57 resulting from annotation by the doctor 51a.
  • The assistant 51b manipulates the mouse 59 with the hand 58b and traces the outline of the image 55b with a cursor (not shown), as indicated by the wavy-line arrow, thereby annotating the area occupied by the object in the image 55b.
  • The computer system 52b determines whether or not the annotation has been completed by the operation of the assistant 51b (step S19). Step S18 is repeated until the annotation is completed. When the annotation is completed (YES in step S19), it is determined in the next step S20 whether or not the annotation of all frames in the annotation candidate image frame group G(n) has been completed. Steps S18 and S19 are repeated until the annotation of all frames is completed, and when it is completed (YES in step S20), the process returns to step S15.
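  • As a hedged illustration of how the doctor's result can serve as a reference for the assistant, the polygon traced by the doctor on the representative frame could be copied to the other frames of the same group as an initial overlay (image 57 in FIG. 10) for the assistant to adjust; the function below is hypothetical and not part of the patent:

```python
def propagate_reference_annotation(doctor_polygon, frame_indices, rep_idx):
    """Copy the doctor's annotation polygon to the other frames of the
    annotation candidate image frame group as a reference for the assistant.

    doctor_polygon -- list of (x, y) vertices traced on the representative frame
    frame_indices  -- indices of all frames in the group G(n)
    rep_idx        -- index of the representative frame annotated by the doctor
    """
    return {
        idx: list(doctor_polygon)        # independent copy per frame, edited by the assistant
        for idx in frame_indices
        if idx != rep_idx                # the representative frame already has the doctor's annotation
    }
```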
  • the result of the annotation by the doctor 51a and the result of the annotation by the assistant 51b are transmitted to the control unit 21 and recorded in the recording unit 31 by the recording control unit 25.
  • The metadata reattachment unit 24 adds the annotation result and the ID information identifying the operator who applied the annotation to the annotation candidate image frame group G(n) as metadata, and the group is recorded.
  • Since the annotation is shared between the doctor and the assistant, the work efficiency is high.
  • The doctor performs the annotation work on the frame that serves as a sample of the annotation.
  • The assistant can then perform the annotation work while referring to the doctor's annotation result.
  • Reliable annotation is therefore possible even for workers who are inexperienced in annotation, such as assistants who are not doctors.
  • the doctor and the assistant may select the frames to be annotated according to the difficulty of annotation, that is, the complexity of the object.
  • The control unit 21 may determine, according to the complexity of the object, whether the doctor or the assistant should annotate each frame of the annotation candidate image frame group G(n), and the result of this determination may be added as metadata to the images of the annotation candidate image frame group G(n) by the metadata reattachment unit 24 and recorded.
  • FIG. 11 is an explanatory diagram showing an example of a method for determining the complexity of an object
  • FIG. 12 is a graph showing an example of a method for determining the complexity of an object.
  • the upper part of FIG. 11 shows the shape of the relatively simple object 62 by hatching.
  • the lower part of FIG. 11 shows the shape of the object 65, which has a relatively complicated shape, by hatching.
  • The control unit 21 sets circles or substantially elliptical outlines 61 and 66 that roughly enclose the objects 62 and 65, respectively.
  • The control unit 21 then sets circles 63 and 67, smaller than the objects 62 and 65, approximately at their centers.
  • The control unit 21 gradually enlarges the diameters of the circles 63 and 67, with their centers fixed, up to the diameters of the enclosing outlines 61 and 66.
  • the control unit 21 sequentially obtains the number of pixels of the object 62 included in the circle 63 .
  • the control unit 21 sequentially obtains the number of pixels of the object 65 included in the circle 67 .
  • FIG. 12 is a graph relating to the complexity of an object, with the radius of the circle on the horizontal axis and the number of pixels of the object included in the circle on the vertical axis.
  • FIG. 12 shows, for example, the characteristic obtained for the lower part of FIG. 11.
  • For the relatively simple object 62, the change in the number of pixels of the object included in the circle 63 is relatively uniform.
  • For the relatively complicated object 65, the slope of the change in the number of pixels of the object included in the circle 67 varies according to the shape, as shown in FIG. 12.
  • The control unit 21 determines that the shape of the object is more complicated as the variation in the slope of the change in the number of pixels of the object included in the circle becomes larger.
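  • The following sketch implements the expanding-circle idea of FIGS. 11 and 12 under stated assumptions; using the variance of the growth-curve slope as the complexity score is a concrete choice made here for illustration, not one specified in the patent:

```python
import numpy as np

def shape_complexity(object_mask, n_radii=20):
    """Grow a circle from the object's centroid and count how many object pixels
    fall inside at each radius.  For a simple (roughly convex) shape the count
    grows smoothly; for a complicated shape the slope of the growth curve varies
    strongly, so the variance of the slope is returned as the complexity score.
    """
    ys, xs = np.nonzero(object_mask)
    cy, cx = ys.mean(), xs.mean()                       # fixed circle center
    dist = np.hypot(ys - cy, xs - cx)                   # distance of each object pixel
    if dist.max() == 0:
        return 0.0                                      # degenerate single-pixel object
    radii = np.linspace(dist.max() / n_radii, dist.max(), n_radii)
    counts = np.array([(dist <= r).sum() for r in radii])   # pixels inside each circle
    slope = np.diff(counts) / np.diff(radii)            # growth rate of the curve in FIG. 12
    return float(np.var(slope))                         # larger value => more complicated shape
```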
  • By reading the information about the shape of the object recorded as metadata, the control unit 21 can decide in advance which frames to request the doctor to annotate and which frames to request the assistant to annotate.
  • doctors can be asked to annotate frames containing objects with relatively simple shapes, and frames containing objects with relatively complex shapes can be annotated by an assistant, thereby reducing the burden on doctors.
  • As described above, in this embodiment the annotation candidate image frame group includes a series of frames in which the same object is imaged at various angles and sizes, each frame of the annotation candidate image frame group is annotated to generate training data, and the inference model obtained from this training data is considered to enable high inference accuracy.
  • When the annotation work is divided between the doctor and the assistant, information about the complexity of the shape of the object is recorded as metadata.
  • a series of frames obtained by acquiring a moving image consisting of a plurality of frames and detecting frames containing an image of a specific object among the frames of the acquired moving image are grouped as an annotation candidate image frame group.
  • FIG. 13 is a flowchart showing processing for determining annotation candidate image frame groups employed in the second embodiment of the present invention.
  • the hardware configuration of this embodiment is the same as that of the first embodiment.
  • This embodiment is an example in which a group of annotation candidate image frames is obtained by reasoning by AI. That is, the representative image determination unit 22 and the grouping unit 23 in the control unit 21 perform inference by AI to determine the representative frame IG and the annotation candidate image frame group G.
  • the representative image determining unit 22 may determine the representative frame IG by the same processing as in the first embodiment.
  • FIGS. 14 and 15 are explanatory diagrams for explaining a network for generating an inference model for obtaining the annotation candidate image frame group G.
  • a predetermined network 70 is given a large amount of data sets 71 corresponding to inputs and outputs as teacher data. Thereby, the network design is determined so that the network 70 can obtain an output corresponding to the input.
  • the network 70 is capable of deep learning having an input layer, an intermediate layer and an output layer.
  • As the teacher data, a moving image such as an inspection image is used in which each frame is annotated to specify the region occupied in the image by an object such as a lesion.
  • This moving image may include a frame annotated to indicate that it is a representative frame.
  • the network 70 outputs annotation candidate image frames together with reliability information (reliability).
  • FIG. 15 shows inputting an inspection image to the learned network 70 .
  • the network 70 outputs information of a frame (annotation candidate image frame) (indicated by a hatched frame) containing an object to be annotated among inspection images.
  • In step S31 of FIG. 13, the control unit 21 provides the inspection image read from the recording unit 31 to the network 70 and obtains annotation candidate image frames by inference, as shown in FIG. 15.
  • The control unit 21 groups such a series of annotation candidate image frames as an annotation candidate image frame group G(n) (step S32).
  • The control unit 21 then determines the representative frame IG(n), the start point frame IsG(n), and the end point frame IeG(n) from the annotation candidate image frame group G(n) (step S33). It should be noted that the representative frame IG(n) may instead be obtained by inference in step S31.
  • The metadata reattachment unit 24 adds, to the corresponding frames of the inspection image read out from the recording unit 31, metadata indicating the representative frame IG, the start point frame IsG, the end point frame IeG, and the annotation candidate image frame group G.
  • The recording control unit 25 supplies the annotation candidate image frame group G(n), to which the metadata has been added by the metadata reattachment unit 24, to the recording unit 31 for recording, and ends the process.
  • In step S34, the metadata reattachment unit 24 of the control unit 21 may attach metadata indicating that the frames other than those in the annotation candidate image frame groups G are not frames to be annotated.
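The following sketch illustrates the kind of per-frame metadata described above; the dictionary layout is an assumption made for this example, not a format defined in the publication:

```python
def attach_metadata(num_frames, groups):
    """Build a metadata record for every frame of the inspection video.

    groups -- list of dicts like {"start": 10, "end": 25, "representative": 17}
    Frames outside every group are explicitly marked as not requiring annotation.
    """
    metadata = [{"annotate": False} for _ in range(num_frames)]
    for n, g in enumerate(groups):
        for i in range(g["start"], g["end"] + 1):
            metadata[i] = {
                "annotate": True,
                "group": n,
                "is_start": i == g["start"],
                "is_end": i == g["end"],
                "is_representative": i == g["representative"],
            }
    return metadata

meta = attach_metadata(100, [{"start": 10, "end": 25, "representative": 17}])
print(meta[17]["is_representative"], meta[0]["annotate"])  # True False
```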
  • FIG. 16 is an explanatory diagram showing a modification.
  • The hardware configuration in this modified example is the same as that in FIG. 1; the only difference is the method by which the representative image determination unit 22 determines the representative frame.
  • FIG. 16 shows each frame of the annotation candidate image frame group G(1) and the annotation candidate image frame group G(2) by the same description method as in FIG.
  • the annotation candidate image frame group G(1) is composed of frames f31 to f34, and the representative frame IG(1) is frame f31 indicated by diagonal lines.
  • the annotation candidate image frame group G(2) is composed of frames f41 to f45.
  • the representative image determining unit 22 refers to the representative frame IG(n-1) when determining the representative frame IG(n).
  • Assume that the representative frame IG(2), if determined by the representative image determining unit 22 only from the frames of the annotation candidate image frame group G(2), would be frame f41.
  • In this modification, the representative image determination unit 22 refers to the image features of frame f31, which is the representative frame IG(1) of the annotation candidate image frame group G(1), compares them with the images of the annotation candidate image frame group G(2), and reselects frame f44, whose image features are most similar to those of frame f31, as the representative frame IG(2) of the annotation candidate image frame group G(2).
  • In this way, the representative frame IG(2) may be selected by referring to the representative frame IG of another annotation candidate image frame group G.
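As a rough sketch of this modification (the colour-histogram feature and distance measure are assumptions; the publication only states that image features are compared), the representative frame of the current group could be reselected like this:

```python
import numpy as np

def colour_histogram(frame, bins=32):
    """Very simple image feature: a normalised per-channel intensity histogram."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    hist = np.concatenate(hist).astype(np.float64)
    return hist / (hist.sum() + 1e-9)

def reselect_representative(prev_representative, group_frames):
    """Return the index, within group_frames, of the frame most similar to IG(n-1)."""
    ref = colour_histogram(prev_representative)
    distances = [np.linalg.norm(colour_histogram(f) - ref) for f in group_frames]
    return int(np.argmin(distances))  # e.g. the index of frame f44 within G(2)

# Example with random 64x64 RGB frames:
prev = np.random.randint(0, 256, (64, 64, 3))
group = [np.random.randint(0, 256, (64, 64, 3)) for _ in range(5)]
print(reselect_representative(prev, group))
```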
  • the present invention is not limited to the above-described embodiments as they are, and can be embodied by modifying the constituent elements without departing from the gist of the present invention at the implementation stage.
  • Various inventions can be formed by appropriately combining the constituent elements disclosed in the above embodiments. For example, some of the components shown in the embodiments may be omitted. Furthermore, components across different embodiments may be combined as appropriate.
  • Most of the controls and functions described mainly in the flowcharts can be implemented by a program, and a computer can read and execute the program to realize the above-described controls and functions.
  • the program can be recorded or stored in whole or in part as a computer program product on portable media such as flexible disks, CD-ROMs, non-volatile memories, etc., or storage media such as hard disks and volatile memories. It can be distributed or provided at the time of product shipment or via a portable medium or communication line.
  • The user can easily implement the annotation support method and annotation support device of the present embodiment by downloading the program via a communication network and installing it on a computer, or by installing it on the computer from a recording medium.
  • The endoscope is not limited to endoscopes used for examinations of the inside of the body; needless to say, the invention can also be applied to scopes used in other fields. It can also be applied to images captured by patrol robots, robot vacuum cleaners, drones, cameras worn by security guards, and the like, and to surveillance cameras and in-vehicle cameras, in order to create the necessary training data from moving images.

Landscapes

  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Endoscopes (AREA)

Abstract

The present invention relates to an annotation support method that comprises: acquiring a continuously captured moving image made up of a plurality of frames; detecting, among the frames of the acquired moving image, frames that include an image of a specific target object; grouping a series of detected frames as an annotation candidate image frame group; displaying at least one of a start frame and an end frame of the grouped series of frames; and, in accordance with operations for correcting the start frame and the end frame of the grouped annotation candidate image frame group, finalizing the annotation candidate image frame group as the series of frames from the start frame to the end frame.
PCT/JP2021/029450 2021-08-06 2021-08-06 Procédé d'aide à l'annotation, programme d'aide à l'annotation, et dispositif d'aide à l'annotation WO2023013080A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/029450 WO2023013080A1 (fr) 2021-08-06 2021-08-06 Procédé d'aide à l'annotation, programme d'aide à l'annotation, et dispositif d'aide à l'annotation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/029450 WO2023013080A1 (fr) 2021-08-06 2021-08-06 Procédé d'aide à l'annotation, programme d'aide à l'annotation, et dispositif d'aide à l'annotation

Publications (1)

Publication Number Publication Date
WO2023013080A1 true WO2023013080A1 (fr) 2023-02-09

Family

ID=85155503

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/029450 WO2023013080A1 (fr) 2021-08-06 2021-08-06 Procédé d'aide à l'annotation, programme d'aide à l'annotation, et dispositif d'aide à l'annotation

Country Status (1)

Country Link
WO (1) WO2023013080A1 (fr)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015211831A (ja) * 2014-04-18 2015-11-26 株式会社東芝 医用画像診断装置及び医用画像処理装置
JP2018078974A (ja) * 2016-11-15 2018-05-24 コニカミノルタ株式会社 動態画像処理システム
US20190110856A1 (en) * 2017-10-17 2019-04-18 Verily Life Sciences Llc Systems and Methods for Segmenting Surgical Videos

Similar Documents

Publication Publication Date Title
Ali et al. Deep learning for detection and segmentation of artefact and disease instances in gastrointestinal endoscopy
JP6927211B2 (ja) 画像診断学習装置、画像診断装置、方法およびプログラム
Bergen et al. Stitching and surface reconstruction from endoscopic image sequences: a review of applications and methods
CN110913746B (zh) 诊断辅助装置、诊断辅助方法及存储介质
JP6501800B2 (ja) 信頼度マッチング付き生体内マルチカメラカプセルからの画像の再構築
JP4749732B2 (ja) 医用画像処理装置
WO2021075418A1 (fr) Procédé de traitement d'image, procédé de génération de données d'apprentissage, procédé de génération de modèle entraîné, procédé de prédiction d'apparition de maladie, dispositif de traitement d'image, programme de traitement d'image et support d'enregistrement sur lequel un programme est enregistré
WO2021176664A1 (fr) Système et procédé d'aide à l'examen médical et programme
JP6707131B2 (ja) 画像処理装置、学習装置、画像処理方法、識別基準の作成方法、学習方法およびプログラム
JP7385731B2 (ja) 内視鏡システム、画像処理装置の作動方法及び内視鏡
JPWO2019087969A1 (ja) 内視鏡システム、報知方法、及びプログラム
CN104936505B (zh) 使用预先采集的图像进行的导航
CN113827171A (zh) 内窥镜成像方法、内窥镜成像系统和软件程序产品
WO2023013080A1 (fr) Procédé d'aide à l'annotation, programme d'aide à l'annotation, et dispositif d'aide à l'annotation
JP7441452B2 (ja) 教師データ生成方法、学習済みモデル生成方法、および発病予測方法
US20220361739A1 (en) Image processing apparatus, image processing method, and endoscope apparatus
Phillips et al. Video capsule endoscopy: pushing the boundaries with software technology
US20220222840A1 (en) Control device, image processing method, and storage medium
CN116724334A (zh) 计算机程序、学习模型的生成方法、以及手术辅助装置
CN118119329A (zh) 内窥镜插入引导装置、内窥镜插入引导方法、内窥镜信息取得方法、引导服务器装置及图像推导模型学习方法
US20220202284A1 (en) Endoscope processor, training device, information processing method, training method and program
WO2023188169A1 (fr) Endoscope, dispositif d'acquisition d'image candidate de données d'entraînement, procédé d'acquisition d'image candidate de données d'entraînement, et programme
WO2023218523A1 (fr) Second système endoscopique, premier système endoscopique et procédé d'inspection endoscopique
EP4111938A1 (fr) Système d'endoscope, dispositif de traitement d'image médicale, et son procédé de fonctionnement
CN114785948B (zh) 内窥镜调焦方法、装置、内镜图像处理器及可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21952903

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE