WO2023037413A1 - Data acquisition system and data acquisition method - Google Patents

Data acquisition system and data acquisition method Download PDF

Info

Publication number
WO2023037413A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
data
inference
unit
doctor
Prior art date
Application number
PCT/JP2021/032865
Other languages
French (fr)
Japanese (ja)
Inventor
浩一 新谷
憲 谷
学 市川
智子 後町
修 野中
Original Assignee
オリンパス株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by オリンパス株式会社
Priority to PCT/JP2021/032865
Publication of WO2023037413A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor, combined with photographic or television appliances
    • A61B 1/045: Control thereof
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/40: ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • the present invention relates to a data acquisition system and data acquisition method for collecting image data for inference model generation in medical equipment.
  • It is known to generate an inference model by collecting a large amount of data, creating teacher data by annotating this data, and performing machine learning such as deep learning using this teacher data, and then to make various inferences by inputting data into an inference unit having this inference model.
  • Patent Document 1 discloses an inference device that inputs sensor data from sensors provided on a target machine into an inference model and performs inference for controlling the target machine.
  • This inference device includes a database (DB) that acquires and stores sensor data of the machine to be diagnosed and of machines similar to it, and uses a first feature amount created from the sensor data acquired for the similar machines.
  • In this way, the inference device of Patent Document 1 can correctly determine the state of the target machine even when the sensor data of the target machine is abnormal data.
  • That is, with the inference device of Patent Document 1, a normal diagnosis can be performed even when abnormal data is input.
  • However, Patent Document 1 does not describe the acquisition of rare data for generating an inference model that infers properly even from rare data. That is, the inference device of Patent Document 1 cannot create an inference model that takes into account rare data, such as data obtained when an abnormality occurs.
  • the present invention has been made in view of such circumstances, and an object of the present invention is to provide a data acquisition system and a data acquisition method that can acquire rare data to be input to an inference device.
  • A data acquisition system according to a first aspect comprises an inference unit having an inference model for inferring and displaying a lesion from an image, and an acquisition unit that acquires the image data obtained when the display is performed, when the relationship between the inference result of the inference unit and the action of the doctor who diagnoses the lesion is not the expected relationship.
  • A data acquisition system according to a second aspect is the data acquisition system according to the first aspect, wherein the image data acquired by the acquisition unit when the display is performed is a moving image including the image frame at the time of the display, and is data containing information with which that image frame can be identified.
  • A data acquisition system according to a third aspect is the data acquisition system according to the first aspect, further comprising a recording unit that adds information about the rarity of the image to the image acquired by the acquisition unit and records the image.
  • A data acquisition system according to a fourth aspect is the data acquisition system according to the first aspect, further comprising a teacher data candidate selection unit that selects the image data acquired by the acquisition unit as teacher data candidates.
  • A data acquisition system according to a fifth aspect is the data acquisition system according to the first aspect, wherein the image data acquired by the acquisition unit when the display is performed is a time-series image group including the image frame at the time of the display, and is data containing information that can specify that image frame.
  • A data acquisition system according to a sixth aspect is the data acquisition system according to the first aspect, further having a database that records the expected relationship between the inference result obtained using the inference model and the action of the doctor who makes the diagnosis.
  • A data acquisition system according to a seventh aspect is the data acquisition system according to the first aspect, further comprising an image similarity determination unit that selects teacher data candidates according to the similarity between the image obtained when the lesion is displayed and the teacher data group used when the inference model was learned, wherein the acquisition unit acquires the images determined to be teacher data candidates by the image similarity determination unit.
  • A data acquisition system according to an eighth aspect is the data acquisition system according to the first aspect, which determines whether or not the expected relationship is met based on the action taken by the doctor, without displaying the inference results of the inference unit.
  • A data acquisition method according to another aspect makes an inference using an inference model for inferring and displaying a lesion from an image, and acquires the image data obtained when the display is performed, when the relationship between the result of the inference and the action of a doctor who diagnoses the lesion is not the expected relationship.
  • A data acquisition method according to another aspect makes an inference using an inference model for displaying an operation guide for an endoscope, and acquires the operation-related data obtained when the operation guide display is performed, when the relationship between the operation guide display and the action of the doctor who operates the endoscope is not the expected relationship.
  • A data processing method according to another aspect inputs sensor data provided in a medical device, or operation data of a doctor for the medical device, into an inference model to make an inference, and, when the relationship between the result of the inference and the action of the doctor using the medical device is not the expected relationship, can give an identification code to the sensor data or the operation data obtained when the expected relationship was not met.
  • A data processing system according to another aspect comprises an inference unit that inputs sensor data provided in a medical device, or operation data of a doctor for the medical device, into an inference model to make an inference, and a data processing unit capable of assigning an identification code to the sensor data or the operation data obtained when the relationship between the result of the inference and the action of the doctor using the medical device is not the expected relationship.
  • According to the present invention, it is possible to provide a data acquisition system and a data acquisition method capable of acquiring rare data to be input to an inference device.
  • FIG. 1A and FIG. 1B are block diagrams mainly showing the electrical configuration of a data acquisition system according to an embodiment of the present invention.
  • FIG. 2 is a flowchart showing the image acquisition operation in the data acquisition system according to one embodiment of the present invention.
  • FIG. 3 is a flowchart showing the operation of acquiring important images in the data acquisition system according to one embodiment of the present invention.
  • FIG. 4 is a diagram showing the recorded contents of the database in the data acquisition system according to one embodiment of the present invention.
  • FIG. 5 is a flowchart showing the operation of the in-hospital system in the data acquisition system according to one embodiment of the present invention.
  • FIG. 6 is a flowchart showing the operation of the device in the data acquisition system according to one embodiment of the present invention.
  • In the data acquisition system according to one embodiment of the present invention, (a) data for generating an inference model are acquired, and (b) inference is performed using an inference model generated from the acquired data.
  • This data acquisition system includes a device 10 such as an endoscope, an in-hospital system 20 that can cooperate with this device, and a management server 30.
  • Each of these devices has units for realizing various functions, but the device that realizes each function may be changed as appropriate.
  • the device 10 is described as an endoscope, but the device 10 is not limited to an endoscope as long as it outputs data such as image data and performs inference using this data.
  • The part described as the in-hospital system 20 need not be limited to equipment provided within the hospital; it may be, for example, a cloud-based management tool, as long as the information of the device 10 can be shared.
  • In this description, a person who can access the cloud-based management tool or the like to judge the data is described as a doctor, but this may equally be written as an expert.
  • FIG. 1A shows a configuration of a device 10 such as an endoscope and an in-hospital system 20 that cooperates with the endoscope 10.
  • FIG. 1B also shows a management server 30 that cooperates with the hospital system 20.
  • This management server 30 has a recording unit that records existing teacher data and the like used to create an inference model, an annotation/learning unit, and the like.
  • the hospital system 20 and the management server 30 can communicate for data exchange and the like through a communication network such as the Internet.
  • the arrangement of each block in the device 10, the hospital system 20, and the management server 30 may be appropriately changed other than the arrangement shown in FIGS. 1A and 1B.
  • In this embodiment, image data and the like acquired by the device 10 are transmitted to the management server 30 through the in-hospital system 20, but the device 10 and the management server 30 may also communicate directly.
  • a device 10 such as an endoscope has a control unit 11, a display unit 12, an image acquisition unit 13, an inference unit 14, and an operation result reflection unit 15.
  • the image acquisition unit 13 includes various circuits such as an imaging lens, an imaging device, an imaging device control circuit, and an imaging data processing circuit.
  • the image acquisition unit 13 acquires an image inside the body of a patient or the like.
  • the input image P1 is an example of image data.
  • This input image P1 has information (tag data) such as category 1 (P1c) attached to the image data P1d.
  • FIG. 1A shows only P1 as an input image, but of course, many images are input at the time of inspection, and not only still images but also moving images may be used.
  • The image acquisition unit 13 may input a plurality of images for each examination, for example, as indicated by examinations Ex1 to Ex3 in FIG. 1A.
  • the operation result reflection unit 15 has an operation unit for operating the device 10, a user interface, and the like.
  • The operation unit includes a release operation unit for instructing imaging, a light source, operation units for air supply, water supply, and suction, and mechanisms such as an angle mechanism and a biopsy mechanism.
  • As an interface, it may have a user interface such as a touch screen.
  • The operation result reflection unit 15 carries out the operation of each mechanism according to the operation of the operation unit described above. For example, when the release operation unit instructs acquisition of a still image, the image acquisition unit 13 acquires a still image, and when it instructs acquisition of a moving image, the image acquisition unit 13 acquires a moving image.
  • the inference unit 14 has an inference engine equipped with an existing inference model 14a, a category determination unit 14b, and an image similarity determination unit 14c.
  • the existing inference model 14a is an inference model generated by the annotation/learning unit 32 using existing teacher data 33a recorded in the recording unit 33 of the management server 30, which will be described later.
  • The existing inference model 14a functions as an inference model for inferring and displaying a lesion from an image, and the inference unit 14 functions as an inference unit equipped with this existing inference model 14a (see, for example, S1 in FIG. 2).
  • the existing inference model 14a is not limited to inferring a lesion, and may be, for example, an inference model for displaying an operation guide for an endoscope.
  • the image obtained by the image obtaining unit 13 and/or the operation state information obtained by the operation result reflecting unit 15 are input to this inference model, the operation guide is inferred, and the operation guide is displayed based on the inference result.
  • the inference unit 14 functions as an inference unit equipped with this existing inference model 14a.
  • the existing inference model 14a may be an inference model other than an inference model that infers a lesion or an operation guide.
  • the medical device may be provided with other sensors in addition to acquiring image data by the imaging device.
  • Also, the doctor's operation data for the medical equipment may be input to the existing inference model 14a.
  • the inference unit 14 functions as an inference unit that inputs sensor data provided in a medical device or operation data of a doctor on the medical device to an inference model and makes an inference.
  • The annotation/learning unit 32 may request an external learning device to generate an inference model, and the generated inference model may be used.
  • the inference engine may be configured by hardware, may be configured by software (program), or may be a combination of hardware and software.
  • the inference unit 14 may be provided in the hospital system 20 or the management server 30 and perform inference using an existing inference model in the hospital system 20 or the management server 30 .
  • the existing inference model 14a can infer whether or not there is a tumor or the like in the input image, and if there is a tumor or the like, it can display its position.
  • The existing inference model 14a may infer lesions and the like in addition to tumors. Further, the existing inference model 14a may perform inference for guiding the doctor or the like in operating the device 10 and display the inference result on the display unit 12.
  • the image data P1d acquired by the image acquisition unit 13 is input to the inference engine having the existing inference model 14a, and the inference result by the existing inference model 14a is output.
  • the inference engine also calculates the reliability of this inference when making an inference, and outputs this reliability as well.
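  • As a concrete illustration, the following Python sketch shows one way an inference engine might return an inference result together with a reliability value computed by a softmax over raw class scores. The `model.forward()` interface and all names are hypothetical assumptions, not something defined in this publication.

```python
import numpy as np

def infer_with_reliability(model, image: np.ndarray):
    """Run an inference model on one image and return the inference
    result together with its reliability.

    `model.forward()` returning raw class scores (logits) is a
    hypothetical interface used only for illustration."""
    logits = model.forward(image)
    # Softmax turns the raw scores into probabilities; the probability
    # of the winning class is used as the reliability of the inference.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(probs.argmax()), float(probs.max())
```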
  • the category determination unit 14b determines the category of the image acquired by the image acquisition unit 13.
  • The category determination unit 14b determines categories such as observation site, operation state, treatment, and model used, based on information recorded in category 1 (P1c) of the input image P1 and information such as the equipment used.
  • The image similarity determination unit 14c determines whether images acquired by the image acquisition unit 13 are similar to one another. Many images are acquired by the image acquisition unit 13, and many of them resemble each other; the image similarity determination unit 14c therefore determines whether or not images are similar. In addition, in this embodiment, image data obtained when the inference result of the existing inference model differs from the doctor's diagnosis result are acquired as teacher data for generating a new inference model (see the contradiction determination unit 23, the new inference model data 33c, and the like).
  • Whether an image with a high degree of similarity or an image with a low degree of similarity is required depends on the situation. Even when there is a sufficient number of similar training data, similar images may be requested as test data; conversely, dissimilar images may be requested even when there is a sufficient number of similar images. Furthermore, even if images are similar, the circumstances or objects under which they were obtained may be rare, so it is often helpful to manage such similarity with a program or the like.
  • the image similarity determination unit 14c may acquire an image determined to be similar to an image that has been used as teacher data so far, and use this image.
  • the image similarity determination unit 14c functions as an image similarity determination unit that selects training data candidates according to the degree of similarity between the image when the lesion is displayed and the training data group when the inference model is learned.
  • the acquisition unit acquires the image determined as the teacher data candidate by the image similarity determination unit.
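  • The following Python sketch illustrates, under stated assumptions, how teacher data candidates could be selected by cosine similarity between image feature vectors and the existing teacher data group. The feature extraction step, the threshold, and all names are hypothetical, and, as described above, either similar or dissimilar images may be requested depending on the situation.

```python
import numpy as np

def select_teacher_candidates(image_feats, teacher_feats, threshold=0.9,
                              want_similar=False):
    """Pick teacher-data candidates by cosine similarity to the
    existing teacher data group (feature vectors and threshold are
    illustrative)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    candidates = []
    for idx, f in enumerate(image_feats):
        # Similarity of this image to its closest existing teacher image.
        best = max(cos(f, t) for t in teacher_feats)
        # want_similar=False selects dissimilar (rare) images;
        # want_similar=True selects similar images, e.g. as test data.
        if (best >= threshold) == want_similar:
            candidates.append(idx)
    return candidates
```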
  • the display unit 12 has a display monitor such as a display, and displays menu screens (images for operation), internal examination images acquired by the image acquisition unit 13, and the like.
  • A doctor or the like can operate the device 10 such as an endoscope while observing the acquired (captured) image, and can also perform examinations, diagnoses, and the like.
  • the display unit 12 receives the image obtained by the image obtaining unit 13 and the inference result 12a by the inference unit 14, and displays the inference result. When displaying the inference result 12a, the inference reliability 12b may be displayed.
  • the control unit 11 is a processor including a CPU (Central Processing Unit) and various peripheral circuits.
  • the processing device such as the CPU controls each section in the device 10 according to a program stored in the storage section of the device 10 .
  • the control unit 11 may perform some or all of the functions of the image acquisition unit 13 , the operation result reflection unit 15 , the inference unit 14 , and the display unit 12 .
  • the device 10 has, for example, a communication section and the like in addition to the above-described sections. This communication unit communicates with the hospital system 20 , but may communicate directly with the management server 30 .
  • the in-hospital system 20 is a server or the like provided in the hospital, can be connected to a plurality of devices 10 by wireless communication or wired communication, and can exchange various data. Note that the in-hospital system 20 may be provided in the cloud as long as it can be connected to the device 10 or the like.
  • The in-hospital system 20 receives, from the device 10, the image input by the image acquisition unit 13, as well as the inference result, the reliability of the inference result, the category determination result, the image similarity determination result, and the like produced by the inference unit 14.
  • The in-hospital system 20 records the images acquired by the device 10, selects images for generating a new inference model according to the inference results and the reactions of the doctor or the like, and transmits them to the management server 30.
  • Images for generating a new inference model may also be selected according to the inference result, the image displayed in response and confirmed by the doctor, and the reaction of the doctor who confirmed that image.
  • the hospital system 20 has a control unit 21 , a UI unit 22 , a contradiction determination unit 23 , a communication unit 24 and a recording unit 25 .
  • the UI unit 22 determines the operating state of the UI (user interface) of the device 10 .
  • the UI unit 22 includes user interfaces such as a keyboard for manually inputting text information by a doctor and a microphone for inputting voice information. Further, the UI unit 22 may be an interface for inputting the findings of the doctor in charge, and may be linked with an electronic medical chart in which the doctor writes and creates the above-mentioned findings, for example. If it is not possible to link with the electronic medical chart, the medical chart information may be input manually, or the medical chart image may be read by a scanner or the like, and the result may be read as characters. In some cases, the medical record is recorded by voice, and in that case, the function to convert voice to text should be used together.
  • The contradiction determination unit 23 compares the inference result of the inference unit 14 with the reaction of the doctor who operated the device 10 through the UI unit 22 or who made a diagnosis by viewing the image (including the result of converting the above findings into text), in other words, with the response of the expert who made the expert judgment, and determines whether there is a contradiction between the two. For example, suppose that an existing inference model for inferring whether or not there is a tumor is set, the inference unit 14 infers that there is no tumor in the image acquired by the device 10, and yet the doctor observes the image and determines that there is a tumor. In this case, there is a contradiction between the inference result of the inference unit 14 and the doctor's diagnosis result.
  • In this way, the contradiction determination unit 23 determines whether or not there is a contradiction between the inference result of the inference unit 14 and the diagnosis result of the doctor. In addition to determining whether or not there is a contradiction, the contradiction determination unit 23 may determine the degree of importance in consideration of, for example, the reliability of the inference result. Determination by the contradiction determination unit 23 will be described later with reference to FIGS. 3 and 4.
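  • As a minimal sketch of such a contradiction check, assuming the expected relationships are held in a simple lookup table (cf. the database described later with reference to FIG. 4), the logic might look as follows; all keys and action names are illustrative placeholders.

```python
# Expected doctor actions for each inference result, corresponding to
# the database of expected relationships (FIG. 4). All keys and action
# names here are hypothetical placeholders.
EXPECTED_ACTIONS = {
    "tumor_detected": {"biopsy", "pigment_spraying", "staining", "excision"},
    "no_tumor_detected": {"continue_observation"},
}

def is_contradiction(inference_result: str, doctor_action: str) -> bool:
    """True when the doctor's action is not among the actions expected
    for the given inference result."""
    return doctor_action not in EXPECTED_ACTIONS.get(inference_result, set())
```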
  • the timing for the contradiction determination unit 23 to determine whether or not there is a contradiction may be when the doctor observes the acquired image when operating the device 10 . In addition to this timing, it may be performed when diagnosing while observing the image recorded in the recording unit 25 on the display unit of the in-hospital system after the examination. An electronic medical chart for recording diagnostic results may be filled in while observing images during or after an examination.
  • The operating state of the equipment by the doctor may also be input and used. For example, even when the inference result indicates that there is no tumor, the doctor may observe the same site many times, or the site may be biopsied, stained, or excised; in such cases a tumor may in fact be present. The contradiction determination unit 23 may then determine that the inference result is contradicted.
  • The contradiction determination unit 23 may judge not only whether the inference result and the action (reaction) of the doctor are in the expected relationship for a lesion such as a tumor, but also whether the operation guide display and the doctor's operation are inconsistent. For example, if the existing inference model 14a is an inference model for displaying an operation guide for an endoscope, this inference model can be used to display the operation guide. When this operation guide display and the operation of the doctor who operates the endoscope while viewing it do not match, the contradiction determination unit 23 determines that there is a mismatch, and the image and/or the operation information at the time of the mismatch may be acquired.
  • The functions of the UI unit 22 and the contradiction determination unit 23 may be realized by hardware and/or software. Of course, part of the functions may be implemented by hardware and the rest by software. The functions of the UI unit 22 and the contradiction determination unit 23 may also be combined with the control unit 21; that is, a processor may be provided that realizes all or part of the functions of the control unit 21, the UI unit 22, and the contradiction determination unit 23.
  • The contradiction determination unit 23 is an acquisition unit (processor) that acquires the image data obtained when the display is performed, when the relationship between the inference result of the inference unit and the action of the doctor who diagnoses the lesion is not the expected relationship (see S5 in FIG. 2, S13 to S17 in FIG. 3, S31 in FIG. 5, S53 in FIG. 6, etc.).
  • The image data acquired by the acquisition unit when the display is performed is a moving image including the image frame at the time of the display, and is data containing information that can specify that image frame.
  • Alternatively, the image data acquired by the acquisition unit when the display is performed is a time-series image group including the image frame at the time of the display, and is data containing information that can specify that image frame.
  • the contradiction determination unit 23 attaches information about the rarity of the image to the image acquired by the acquisition unit.
  • the contradiction determination unit 23 functions as a training data candidate selection unit that selects the image data acquired by the acquisition unit as training data candidates (see S7 in FIG. 2, for example).
  • image data can be selectively used according to the required performance and required specifications of the inference model when learning and testing the inference model.
  • The reason for the rarity may be recorded in the metadata of the image.
  • The acquired images may be still images, or may be acquired as a series of inspection videos (a time-series image group) in a form in which the frames within them can be specified. That is, the image data acquired by the acquisition unit when the display is performed may be a time-series image group including the image frame at the time of the display, containing information that can specify that image frame.
  • Since the system and the control unit know which frame of the video is being displayed during live view or playback, it is easy to record important images as frame-specifying information.
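  • A minimal sketch of such frame-specifying information, assuming hypothetical field names, might record the frame number and the elapsed time from the start of the video together with the rarity attached at recording time:

```python
from dataclasses import dataclass

@dataclass
class ImportantFrame:
    """Frame-specifying information for an important image inside a
    continuously recorded examination video (field names hypothetical)."""
    video_id: str      # identifies the examination video
    frame_number: int  # frame index counted from the start of the video
    elapsed_ms: int    # elapsed time from the reference point (video start)
    rarity: float      # importance/rarity attached when recording

def mark_important_frame(video_id: str, frame_number: int,
                         fps: float, rarity: float) -> ImportantFrame:
    # Derive the elapsed time from the frame index so the frame can be
    # located either by number or by time during playback.
    return ImportantFrame(video_id, frame_number,
                          int(frame_number / fps * 1000), rarity)
```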
  • the contradiction determination unit 23 and the like may determine whether or not there is an expected relationship between the display of the operation guide and the action (reaction) of the doctor or the like, not limited to the lesion such as a tumor.
  • When the in-hospital system 20 (or the device 10 or the management server 30) has an inference unit that makes inferences by inputting sensor data provided in the medical device, or the doctor's operation data for the medical device, into the inference model, the contradiction determination unit 23 or the like functions as a data processing unit that can assign an identification code to the sensor data or operation data obtained when the relationship between the inference result and the action of the doctor using the medical device is not the expected relationship. This data processing unit may simply assign the identification code to such sensor data or operation data, may select the data to which an identification code has been assigned, or may further acquire and record that data.
  • the communication unit 24 has a communication circuit and can communicate with the communication unit 35 of the management server 30 and the like.
  • the communication unit 24 can transmit information such as image data recorded in the recording unit 25 to the management server 30 .
  • The communication unit 24 can also request the management server 30 or the like to create teacher data by annotating the image data and the like recorded in the recording unit 25 and to generate an inference model with the learning unit, and can receive the generated inference model. The communication unit 24 can also receive the existing inference model 32b stored in the management server 30 and transmit it to the inference unit 14 in the device 10.
  • the communication unit 24 transmits image data and the like for generating a new inference model to the management server.
  • When there is a contradiction, for example when the inference result is no tumor while the doctor's diagnosis is a tumor, the two are associated with each other and transmitted. That is, since the timing at which the image data is acquired differs from the timing at which the doctor's diagnosis result is obtained from the description in the electronic medical record, if simultaneous transmission is not possible, the two are associated with each other.
  • The control unit 21 may perform this association.
  • the recording unit 25 is an electrically rewritable non-volatile memory, and records the image data acquired by the image acquisition unit 13 and the recorded tag data associated with this image data.
  • The recording unit 25 may record all the image data acquired by the image acquisition unit 13, or may record only the images for which the contradiction determination unit 23 has determined that the inference result and the diagnosis result contradict each other.
  • The recording unit 25 also stores a database showing the expected relationships between inference results and doctors' actions, which will be described later with reference to FIG. 4.
  • The recording unit 25 functions as a recording unit that attaches information about the rarity (importance) of the image to the image acquired by the acquisition unit and records the image (see, for example, S15 in FIG. 3 and FIG. 4). Note that this recording unit is not limited to the in-hospital system 20; it may be provided in the management server 30, or a recording unit may be provided in the device 10 to determine the degree of rarity (importance) and record it in association with the image. The recording unit 25 records a database indicating the expected relationship between the inference result obtained using the inference model and the action of the doctor who makes the diagnosis (see FIG. 4, for example).
  • the control unit 21 is a processor including a CPU (Central Processing Unit) and various peripheral circuits.
  • the processing device such as the CPU controls each part within the hospital system 20 according to a program stored in the storage unit within the hospital system 20 .
  • the control unit 21 may perform some or all of the functions of the UI unit 22 , the contradiction determination unit 23 , the communication unit 24 and the recording unit 25 .
  • The management server 30 can be connected to a plurality of in-hospital systems 20 and the like through a communication network such as the Internet. It records teacher data and generates an inference model based on this teacher data, or requests an external learning device to generate the inference model. The generated inference model can be transmitted to the in-hospital system 20 through a communication network such as the Internet.
  • the management server 30 has a control unit 31 , an annotation/learning unit 32 , an existing inference model 32 b , a recording unit 33 , a recording control unit 34 and a communication unit 35 .
  • the recording unit 33 in the management server 30 has an electrically rewritable non-volatile memory and can record various data.
  • the management server 30 records existing teacher data A33a, existing teacher data B33b, and new inference model data 33c in the recording area.
  • the existing teacher data A33a is teacher data that has been used when generating the existing inference model 32b.
  • The existing teacher data B 33b is teacher data that was created as teacher data but, for some reason, was not adopted when generating the existing inference model 32b.
  • The recording unit 33 can record a plurality of pieces of teacher data as existing teacher data A 33a and existing teacher data B 33b.
  • In FIG. 1B, three pieces of teacher data 33aa, 33ab, and 33ac are depicted for the existing teacher data A 33a, and one piece of teacher data 33ba is depicted for the existing teacher data B 33b; in practice, a large number of teacher data can be recorded.
  • The teacher data 33aa consists of image data 33aaa, category 1 (33aab), and annotation 33aac.
  • The image data 33aaa is the image data adopted as teacher data, and category 1 (33aab) indicates the category of that image data.
  • The annotation 33aac is an annotation added by the annotation/learning unit 32.
  • As annotations, for example, the presence or absence of a tumor, the position of the tumor, the type of the tumor, and the like are added, and a doctor's action guide at the time the image data was acquired may also be added.
  • Action guides for doctors include, for example, treatment candidates displayed as guides, such as biopsy, pigment spraying, staining, and excision, and may also include items described in medical charts.
  • The teacher data 33ab, 33ac, and 33ba are likewise recorded as image data associated with the same kinds of information (category 1 and annotation) as the teacher data 33aa.
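  • As an illustration only, one piece of teacher data could be modeled as a record holding the image data, category 1, and the annotation; the field contents below are hypothetical assumptions, not values specified by the publication.

```python
from dataclasses import dataclass, field

@dataclass
class TeacherData:
    """One piece of teacher data, mirroring 33aa = image data (33aaa)
    + category 1 (33aab) + annotation (33aac)."""
    image_data: bytes                               # the image itself (33aaa)
    category: dict = field(default_factory=dict)    # category 1 (33aab)
    annotation: dict = field(default_factory=dict)  # annotation (33aac)

sample = TeacherData(
    image_data=b"",  # raw image bytes elided
    category={"site": "colon", "model_used": "endoscope"},   # hypothetical
    annotation={"tumor": True, "position": (120, 84),
                "action_guide": "biopsy"},                   # hypothetical
)
```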
  • The new inference model data 33c is image data and the like obtained when the contradiction determination unit 23 in the in-hospital system 20 determines that there is a contradiction between the inference result of the existing inference model 14a and the doctor's diagnosis result.
  • the image data recorded as the new inference model data is the data transmitted to the management server 30 as the new inference model data among the image data recorded in the recording unit 25 .
  • At the time this image data is recorded in the in-hospital system 20, it may not yet have been determined to be contradictory.
  • annotation information, category 1 information, etc. to be attached to the image data are sent to the annotation/learning unit 32, and teacher data with annotations is created and recorded as new inference model data 33c.
  • the new inference model data 33c can be used not only as teaching data for generating a new inference model, but also for evaluating the new inference model and calculating reliability.
  • The new inference model data 33c may be transmitted from the in-hospital system 20 to the management server 30 after the image data has been temporarily recorded in the recording unit 25 in the in-hospital system 20; the image data may be temporarily recorded, and, when the contradiction determination unit 23 determines that there is a contradiction, the annotation/learning unit 32 may add an annotation and record the result as new inference model data 33c. Similar images that resemble the images for the new inference model may also be collected. Further, as data for the new inference model, not only the image data but also the specification of the additional teacher data may be recorded.
  • the annotation/learning unit 32 creates teacher data by annotating image data transmitted from the hospital system 20 or the like. Annotations may be added based on data transmitted together with image data from the hospital system 20 or the like, or may be automatically added based on other information. Alternatively, an expert such as a doctor may manually add annotations.
  • the annotation/learning unit 32 generates an inference model by machine learning such as deep learning using teacher data such as the new inference model data 33c. Therefore, the annotation/learning unit 32 has hardware such as an inference engine for inference model generation, or software for inference model generation. Further, if the inference model cannot be generated by learning within the management server 30, it may have a request unit for requesting an external learning device to generate the inference model.
  • the existing inference model 32b is an inference model generated using the existing teacher data A33a.
  • This inference model 32 b is transmitted to the device 10 and set in the inference section 14 .
  • The device 10 inputs the image data acquired by the image acquisition unit 13 to the inference unit 14, performs inference using the existing inference model 14a, and outputs the inference result to the display unit 12, the contradiction determination unit 23, and the like.
  • Note that this inference may be performed in the in-hospital system 20 as well as in the device 10.
  • In this case, an inference section similar to the inference section 14 is provided within the in-hospital system 20.
  • Deep learning is a multilayer structure of the process of "machine learning” using neural networks.
  • a typical example is a "forward propagation neural network” that sends information from front to back and makes decisions.
  • The simplest forward propagation neural network suffices with three layers: an input layer composed of N1 neurons, an intermediate layer composed of N2 neurons given by parameters, and an output layer composed of N3 neurons corresponding to the number of classes to be discriminated.
  • The neurons of the input layer and the intermediate layer, and those of the intermediate layer and the output layer, are connected by connection weights, and bias values are added at the intermediate layer and the output layer, so that logic gates can easily be formed.
  • the neural network may have three layers for simple discrimination, but by increasing the number of intermediate layers, it is also possible to learn how to combine multiple feature values in the process of machine learning. In recent years, 9 to 152 layers have become practical from the viewpoint of the time required for learning, judgment accuracy, and energy consumption.
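  • A minimal sketch of the three-layer forward-propagation network described above, with N1 input neurons, N2 intermediate neurons, and N3 output neurons, is shown below; the random weights stand in for values that would normally be determined by learning.

```python
import numpy as np

# Three-layer forward-propagation network: N1 input neurons, N2
# intermediate neurons, and N3 output neurons (one per class to be
# discriminated). Sizes are arbitrary examples.
N1, N2, N3 = 16, 8, 2
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(N2, N1)), np.zeros(N2)  # input -> intermediate
W2, b2 = rng.normal(size=(N3, N2)), np.zeros(N3)  # intermediate -> output

def forward(x: np.ndarray) -> np.ndarray:
    h = np.maximum(0.0, W1 @ x + b1)  # connection weights plus bias, ReLU
    return W2 @ h + b2                # one raw score per class

scores = forward(rng.normal(size=N1))
predicted_class = int(scores.argmax())
```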
  • A process called "convolution" that compresses the feature amount of an image may be performed, and a "convolutional neural network", which operates with minimal processing and is strong in pattern recognition, may be used.
  • Alternatively, a "recurrent neural network" (fully-connected recurrent neural network), in which information flows in both directions, may be used; it can handle more complicated information and can cope with information analysis whose meaning changes depending on order.
  • An NPU (neural network processing unit) or other dedicated circuit may be used for these computations. Machine learning other than deep learning, such as support vector machines and support vector regression, may also be used.
  • The learning here involves calculation of classifier weights, filter coefficients, and offsets, and there is also a method using logistic regression processing. If a machine is to judge something, a human must first teach the machine how to judge.
  • In this embodiment, a method of deriving image determination by machine learning is used, but a rule-based method that applies rules acquired by humans through empirical rules and heuristics may also be used.
  • The recording control unit 34 controls data recording in the recording unit 33. As described above, when image data is transmitted from the recording unit 25 of the in-hospital system 20, or when data related to contradiction determination is transmitted from the contradiction determination unit 23, the recording control unit 34 controls the recording of these data.
  • the communication unit 35 has a communication circuit and can communicate with the communication unit 24 of the hospital system 20 and the like.
  • the communication unit 35 can receive information such as image data recorded in the recording unit 25 .
  • The communication unit 35 can also communicate with servers other than the in-hospital system 20 to collect various information. Note that in FIGS. 1A and 1B, the device 10 and the management server 30 do not communicate directly, but they may exchange data directly. In this case, when a contradiction is identified by the contradiction determination unit 23, the data may be recorded as new inference model data.
  • the control unit 31 is a processor that includes a CPU (Central Processing Unit), etc., and is composed of an ASIC (Application Specific Integrated Circuit) that includes various peripheral circuits.
  • the processing device such as the CPU controls each section in the management server 30 according to a program stored in the storage section in the management server 30 .
  • the control unit 31 may perform some or all of the functions of the annotation/learning unit 32 , the recording control unit 34 , and the communication unit 35 .
  • The existing inference model 32b generated in the management server 30 or the like is set in the inference unit 14 in the device 10 as the existing inference model 14a, and inference is performed on the input image P1 using this model 14a.
  • In this inference, for example, the presence or absence of a lesion such as a tumor may be inferred, or a guide for operating the device 10 or the like may be inferred for the doctor or the like.
  • When the relationship between the inference result of the inference model and the action of the doctor who makes the diagnosis is not the expected relationship, the image acquired at that time is acquired as data for a new inference model and recorded as the new inference model data 33c.
  • A new inference model can be generated using the recorded new inference model data 33c. Compared with the existing inference model 14a used so far, this new inference model can reduce the number of cases in which doctors judge the inference results to be strange.
  • The new inference model data acquired in this embodiment can be used to create teacher data by annotating the presence or absence of lesions such as tumors and their positions, and a new inference model can be generated using this teacher data. Besides the generation of this new inference model, the data can also be used for purposes such as evaluation of the new inference model and determination of reliability.
  • the important image is an image displayed when the relationship between the inference result of the existing inference model and the action performed by the doctor who makes the diagnosis does not meet the expected relationship.
  • This flow is realized by the control unit 11 in the device 10 and/or the control unit 21 in the hospital system 20 operating in cooperation according to the programs stored in the memory in each control unit.
  • the patient is examined using AI (S1).
  • In this step, the existing inference model 14a (AI) of the inference unit 14 is used to infer the presence or absence of a lesion such as a tumor and its position.
  • the guide display for operating the device 10 may be performed by AI.
  • Next, the contradiction determination unit 23 determines whether or not there is a contradiction between the inference result of the existing inference model 14a of the inference unit 14 and the doctor's diagnosis result (S3). That is, it is determined whether or not the prescription expected from the inference result was carried out.
  • The doctor's prescription is determined, for example, from the electronic medical chart or the like, and whether or not it differs from the inference result is determined based on the description in the electronic medical chart.
  • The doctor's reaction may be determined not only from the electronic medical record but also, for example, from the output of the operation result reflection unit 15 or the like: whether a biopsy, pigment spraying, staining, or polyp excision was performed, whether the device 10 such as an endoscope was repeatedly operated to observe the same site, and, more generally, whether an operation different from that used for observing a normal part was performed.
  • the degree of importance of the image may be determined in consideration of the reliability of the inference result. Even if the reliability of the inference result is high, when the doctor makes a diagnosis different from the inference result, the importance of the image at that time is high, and it can be said that it is useful for generating a new inference model. For example, when creating teacher data for generating a new inference model, the priority of using an image with a high degree of importance may be increased. Further, when the number of images having a high degree of importance reaches a predetermined number or more, generation of a new inference model is started, and the degree of importance may be used when determining the timing of generation of a new inference model.
  • When the guide display for operating the device 10 is performed by AI in step S1, it may be determined in this step S3 whether or not the doctor has performed an operation contrary to the AI operation guide display. That is, it may be determined whether or not the operation guide display and the action of the doctor who operates the endoscope are in the expected relationship.
  • the image at that time is acquired as the important image (S5).
  • an image is obtained in which the doctor shows a reaction contrary to the inference result.
  • the electronic medical chart may be created later than when the doctor observes the image.
  • All the images acquired by the device 10 may be temporarily recorded and then recorded as important images when it is found that the doctor has shown a reaction contrary to the inference result. For this purpose, the image data may be temporarily recorded in the recording unit 25 in the in-hospital system 20, or temporarily recorded in its entirety in the management server 30, and recorded as important images when it is found that the doctor has shown a reaction contrary to the inference result.
  • The important image here does not have to be recorded as a still image like a photograph; the image under inspection may be recorded continuously, and the part corresponding to the important image frame may be specified as a frame number or as an elapsed time from a specific reference time such as the start of the moving image.
  • If it is determined in step S3 that the doctor has performed an operation contrary to the AI operation guide display, the doctor's operation information at that time is acquired in step S5. That is, when it is determined that the relationship between the operation guide display and the action of the doctor operating the endoscope is not the expected relationship, the operation-related data at the time the operation guide display was performed is acquired. In addition, when inference is performed in step S1 by inputting sensor data provided in the medical device or the doctor's operation data into the inference model, it is determined in step S3 whether the result of the inference and the action of the doctor using the medical device are in the expected relationship, and an identification code may be assigned to the sensor data or operation data obtained when they are not.
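  • A minimal sketch of assigning such an identification code, with a hypothetical record layout and code format, might be:

```python
import uuid

def tag_unexpected(record: dict) -> dict:
    """Attach an identification code to sensor or operation data acquired
    when the expected relationship was not met, so the data can later be
    selected, collected, and recorded. The code format is illustrative."""
    record["identification_code"] = "UNEXPECTED-" + uuid.uuid4().hex[:8]
    return record
```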
  • As a result of the determination in step S3, if the doctor did not show a reaction contrary to the inference result (for example, when the expected prescription was given), or once the image at that time has been acquired as an important image in step S5, annotation is requested next (S7).
  • Since the image data of the important images for generating the new inference model has been acquired, annotation of this image data may be requested in order to create the teacher data.
  • the inference model is often used as an operation guide.
  • This operation guide is based on the movement of the subject in the acquired image data (mainly caused by the operation of the endoscope) and on operation data from the operation unit (whether rotation or pressing was performed, with the amount of operation checked at each timing); it assists the operator by presenting the operation required at that moment according to the change over time of the data acquired so far.
  • An inference model can be obtained by learning, from image data and operation data, the situations in which operation errors are likely to occur, so that guides can be issued for such scenes in the future.
  • In this way, inference is made using an inference model for displaying an endoscope operation guide, and the inference model can be improved by acquiring operation-related data (images obtained in time series, the history of operation data, and the like) when the relationship between the operation guide display and the doctor's action is not the expected one.
  • FIG. 3 shows the operation related to the determination of whether or not an image is an important image, which is performed in step S31 of FIG. 5 and step S53 of FIG. 6.
  • This flow is also realized by the control unit 11 in the device 10 and the control unit 21 in the hospital system 20 operating in cooperation according to the programs stored in the memories in the respective control units.
  • the inference result and reliability determination are acquired (S11).
  • the contradiction determination unit 23 acquires the inference result and the reliability value in the inference unit 14 .
  • The contradiction determination unit 23 acquires the doctor's reaction from the UI unit 22 based on the electronic medical record created by the doctor. As the doctor's response, besides the electronic chart, treatments performed by the doctor such as biopsy, pigment spraying, staining, and excision may be obtained as described above, and the operating state of the device 10 at the time of the doctor's examination may also be acquired and used for the determination.
  • The contradiction determination unit 23 determines whether there is any contradiction between the doctor's reaction and the inference result, in other words, whether the doctor reacted as expected from the inference result (S13).
  • In this determination, the degree of importance may be determined in consideration of the reliability. In other words, when the reliability of the inference result is high and the doctor nevertheless reacts contrary to this inference result, the importance (rarity) of the image is high because the existing inference model is not sufficient; it is therefore desirable to generate a new inference model considering this importance.
  • the image is acquired (or requested to be acquired) as an important image (rare image) (S15).
  • the image at this time is an important image (rare image).
  • the important image here may be a specific frame in a moving image, and does not need to be recorded as a still image.
  • The image under inspection may be recorded continuously so that it is possible to record which frame corresponds to the important image frame, and to record information designating this specific image frame. Handling is easier if a video of the entire examination exists, and in many cases other image information can then also be kept as evidence.
  • If the image has been acquired (or its acquisition requested) in step S15, the degree of importance (rarity) is then determined and the image is recorded (S17).
  • The contradiction determination unit 23 determines the importance (rarity) based on the doctor's reaction, the inference result, the reliability, the teacher data of the inference model, past inference images, and the like; when the importance (rarity) is higher than a predetermined value, the image determined not to have the expected relationship is temporarily recorded in the recording unit 25 or recorded in the recording unit 33.
  • When the degree of importance has been determined and the image recorded in step S17, or when the result of determination in step S13 is that the expected relationship holds, the important image determination flow of FIG. 3 ends.
  • the existing inference model 14a is generated by performing deep learning on an image obtained during an endoscopy using teacher data annotating the position of a tumor or the like.
  • an image obtained by the image obtaining unit 13 is input to the existing inference model 14a, whether or not a tumor is detected, and if detected, its position is inferred and output.
  • the calculation result of the reliability of the inference at this time is also output.
  • The vertical axis on the left side of the chart in FIG. 4 has columns for the inference result of tumor detection and for the reliability of that inference result, and there is also a column for the doctor's action, obtained from the medical chart or from the treatment performed.
  • an image is acquired as an important image in the following cases.
  • (B-1) A case where it is inferred that no tumor is detected and the reliability of the inference result is high, but the doctor's action supported by the medical record indicates that there is a tumor. In this case, the importance is judged to be large.
  • (B-2) A case where it is inferred that no tumor is detected and the reliability of the inference result is low, but the doctor's action supported by the medical record indicates that there is a tumor. (B-3) A case where it is inferred that a tumor is detected and the reliability of the inference result is high, but the doctor's action supported by the chart indicates that there is no tumor. In this case, the importance is judged to be large.
  • When the acquired images are used, the degree of importance may be taken into account. For example, an image with a high degree of importance may be given a high priority for adoption as teacher data when generating a new inference model. Also, if there are many images with a high degree of importance, this may be used when determining timing, such as advancing the timing of generating a new inference model. Furthermore, when evaluating a new inference model or calculating reliability, images with a high degree of importance may be used preferentially.
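  • Following the chart of FIG. 4, the importance grading could be sketched as below; the label for case B-2 (a contradiction with a low-reliability inference) is an assumption, since the publication states the importance explicitly only for cases B-1 and B-3.

```python
def importance(inferred_tumor: bool, reliability_high: bool,
               doctor_found_tumor: bool) -> str:
    """Grade the importance (rarity) of an image following the FIG. 4
    chart. The returned labels are illustrative."""
    if inferred_tumor == doctor_found_tumor:
        return "none"  # inference and doctor agree: not an important image
    # Cases B-1 and B-3: the doctor contradicts a high-reliability
    # inference, which exposes a genuine weakness of the existing model.
    return "large" if reliability_high else "moderate"  # "moderate" assumed for B-2
```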
  • patient information is entered (S21).
  • the patient information at this time is recorded in the recording unit 25 through the UI unit 22 within the hospital system 20 . Also, if there is already recorded information, it may be associated with the patient information.
  • Next, the in-hospital system cooperates with the device (S23). The information can be shared by cooperating with a device 10 such as an endoscope used in the medical facility or the like (see steps S43 and S57 in FIG. 6); the in-hospital system 20 and the device 10 cooperate and share information.
  • Next, it is determined whether or not the doctor's findings have been input (S25).
  • the doctor inputs the doctor's findings and the like through the UI unit 22 and the like.
  • The doctor's findings are input by text input using a keyboard or by voice input.
  • If the result of determination in step S25 is that the doctor's findings or the like have been entered, they are reflected in the chart (S27).
  • the text input result is reflected in an electronic chart or the like.
  • In the case of voice input, the input can be converted into text data using voice recognition and then reflected in the chart.
  • informed consent refers to obtaining consent from the subject after giving sufficient explanation.
  • Informed consent is a process in which the patient and family fully understand the medical condition and treatment, the medical staff receive the intentions of the patient and family through various situations and explanations, and the concerned parties, such as the patient and family, medical professionals, social workers, and care managers, share information and reach consensus on what kind of medical care to choose.
  • The IC here includes IC for the medical practice itself, IC for handling information that can constitute personal information, and related information.
  • Next, the contradiction determination unit 23 determines whether or not the image acquired by the device 10 is an important image, and acquires and records this image if it is determined to be important (S31).
  • The determination as to whether or not the image is an important image is made according to the flow in FIG. 3 (see, for example, S15 in FIG. 3). That is, whether or not the image is an important image is determined based on whether or not there is the expected relationship between the inference result of the inference model and the action of the doctor who makes the diagnosis.
  • At this time, the degree of importance is also judged (S17 in FIG. 3; see FIG. 4). In this step, if the image is an important image, the image is acquired and recorded, and the degree of importance is obtained and recorded.
  • The important image here may be a specific frame in a moving image and does not need to be recorded as a separate still image. Images are recorded continuously during the examination, and an important image frame can be identified by, for example, the number of minutes and seconds from a specific reference time such as the start of the moving image, or by the frame number; it suffices to record such frame-specifying information. In other words, specific image frames need not be recorded separately from what is already recorded as continuous frame images, such as moving images or continuous-shot frames. Here too, data are easier to handle if a moving image of the entire examination exists, and in many cases the surrounding image information can also be kept as evidence. A sketch of such a frame-designating record follows.
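  • One way to realize the frame-designating record (the field names are illustrative, not taken from the patent) is to store a reference into the continuously recorded examination video instead of a separate still image:

```python
from dataclasses import dataclass

@dataclass
class ImportantFrameRef:
    """Reference to an important frame inside an examination video."""
    video_id: str      # identifier of the continuously recorded moving image
    frame_index: int   # frame number counted from the start of the video
    elapsed_ms: int    # time from the reference point (e.g., video start)
    importance: int    # degree of importance from the B-1 to B-3 judgment
    reason: str        # rarity information kept as metadata

# Example: frame 5421, about three minutes into the recording.
ref = ImportantFrameRef("exam_20210907_case12", 5421, 181_000, 2,
                        "no tumor inferred with high reliability, but a biopsy was performed")
```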
  • If the important image is recorded in step S31, or if the result of determination in step S25 is that the doctor's findings have not been input, it is determined whether or not the medical examination has ended (S33).
  • When the doctor completes the examination of the patient registered in step S21, the doctor inputs that fact to the hospital system 20 through the UI unit 22, so this determination is made based on whether or not that input has been made. If the result of this determination is that the medical examination has not ended, the process returns to step S23 and the above-described operations are executed.
  • If the result of determination in step S33 is that the medical examination has ended, accounting and related procedures are performed (S35). The patient pays the examination fee, makes an appointment for the next examination if necessary, and receives any prescription. After these procedures are executed, the flow of the hospital system ends.
  • Next, the operation of the device 10 will be described. When the flow in FIG. 6 starts, it is first determined whether or not to cooperate with the hospital system (S41). As described above, the device 10 such as an endoscope and the hospital system 20 can cooperate with each other. In this step, it is determined whether or not there is a hospital system 20 with which the device 10 cooperates. If there are devices in the medical facility to be linked with the hospital system 20, setting processing may be performed in advance so that they can be linked.
  • If the result of determination in step S41 is that cooperation with the hospital system is possible, the two cooperate and share information (S43). If communication between the hospital system 20 and the device 10 is enabled by wireless or wired communication, the two cooperate and share information in this step. As for the cooperation, for example, the image data and related information acquired by the device 10 are transmitted to the hospital system 20, and the existing inference model that the hospital system 20 received from the management server 30 is transmitted to the device 10. In addition, the inference using the existing inference model 14a does not necessarily have to be performed in the device 10: an inference unit may be arranged in the hospital system 20, inference may be performed on the image transmitted from the device 10 to the hospital system 20, and the inference result may be sent back to the device 10, as sketched below.
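  • A minimal sketch of this exchange (the patent leaves the transport unspecified; the endpoint and message format here are assumptions) sends an acquired image to the hospital system and receives the inference result back:

```python
import json
import urllib.request

HOSPITAL_API = "http://hospital-system.local/api"  # hypothetical endpoint

def infer_via_hospital_system(image_bytes: bytes, patient_id: str) -> dict:
    """Ask the in-hospital inference unit to run the existing model on an
    image acquired by the device, and return the result to the device."""
    payload = json.dumps({"patient_id": patient_id,
                          "image_hex": image_bytes.hex()}).encode()
    req = urllib.request.Request(f"{HOSPITAL_API}/infer", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"tumor": true, "reliability": 0.93}
```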
  • If cooperation and information sharing are performed in step S43, or if the result of determination in step S41 is that there is no cooperation, patient information and the like are input (S45).
  • Here, patient information such as the patient's name and examination items is input through the UI unit (not shown) of the device 10.
  • When the device 10 cooperates with the hospital system 20, the hospital system 20 may transfer the information input in step S21 (see FIG. 5) to the device 10.
  • Next, preparations are made for the examination using the device 10 such as an endoscope.
  • For example, an endoscope suited to the examination site is prepared, and the patient is asked to drink an antifoaming agent and an anesthetic and to enter the examination room.
  • When the preparation for the examination is finished, it is next determined whether or not the examination is in progress (S49). Here, the control unit 11 determines whether the examination is being performed.
  • If the examination is in progress, a diagnosis guide and a diagnosis assistance determination are performed when the doctor makes a diagnosis using the device 10 such as an endoscope.
  • As the diagnostic guide, a method of operating the device 10 may be displayed; as the diagnostic assistance determination, whether or not a lesion such as a tumor is present in an acquired image may be displayed.
  • Also, data that serves as evidence, such as still images and moving images taken during the examination, is recorded. If the device 10 has a recording unit, the data may be recorded in this recording unit, or it may be recorded in the recording unit 25 within the hospital system 20.
  • Next, important image acquisition is performed (S53). Whether or not an image is an important image is determined according to the flow shown in FIG. 3; that is, based on whether or not there is the expected relationship between the inference result of the inference model and the action of the doctor who makes the diagnosis. If the device 10 itself can determine whether the image is important, it acquires the image; if the determination cannot be made within the device 10, the hospital system 20 may be asked to make the determination in step S57. As described above, the image acquisition here may simply designate a specific frame in a moving image, and the image does not have to be recorded as a still image.
  • Next, it is determined whether or not to cooperate with the hospital system (S55). As in step S41, it is determined whether or not there is a hospital system 20 with which the device 10 cooperates. If the result of this determination is that cooperation with the hospital system is possible, the two cooperate and share information (S57). Here, as in step S43, cooperation is established between the hospital system 20 and the device 10 to share information.
  • If cooperation and information sharing are performed in step S57, or if the result of determination in step S55 is that there is no cooperation with the hospital system, it is next determined whether or not to end the examination (S59).
  • When the examination ends, the doctor inputs that fact through the UI unit of the device 10, so this determination is made based on whether or not that input has been made. If the result of this determination is that the examination has not ended, the process returns to step S49 and the above-described operations are executed.
  • If the result of determination in step S59 is that the examination has ended, and there is an important image or the like, the IC and related information are also recorded (S61).
  • Here, together with the important image acquired in step S53, the fact that informed consent was given in step S29 (see FIG. 5) is recorded.
  • For the IC (informed consent), the doctor or the like inputs the subject's agreement or non-agreement, for example by having the subject click an agreement or disagreement button icon.
  • Alternatively, a consent form may be signed by the subject and by a doctor or the like, and the signed consent form may be scanned.
  • The IC may also be obtained by e-mail, by entry on the Web, or the like. In any case, it is sufficient to ensure that a sufficient explanation was provided, that the individual agreed, and that these facts are recorded. The IC may be obtained before the examination, or after the examination as long as the sedative has worn off and the patient is in a normal state. A sketch of such a consent record follows.
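  • What an IC record might hold (a sketch only; the patent merely requires that the explanation, the agreement, and their evidence be verifiable, and the field names below are assumptions):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsentRecord:
    """Record that informed consent was explained and given (or refused)."""
    patient_id: str
    agreed: bool
    method: str             # "button", "signed_form_scan", "email", "web"
    explained_by: str       # doctor or staff member who gave the explanation
    obtained_at: datetime   # before the examination, or after sedation wears off
    evidence_uri: str = ""  # e.g., path to the scanned, signed consent form
```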
  • As described above, when an image is determined to be an important image, the image is acquired and recorded (see, for example, S31 and S53).
  • In steps S31 and S53, it is determined whether or not the image is an important image, and if it is, the image is recorded. Either the hospital system 20 or the device 10 may carry this out, or both may do so in cooperation with each other.
  • The following are examples of cases in which an image is determined to be an important image (rare image).
  • (Pattern 1) A case in which the doctor or the like did not perform treatment even though the AI discovered a tumor. In this case, whether it was an erroneous AI diagnosis, or whether it was decided not to treat because the polyp is a safe one that does not turn into cancer, can be determined by linking to the information in the electronic medical record.
  • (Pattern 2) A case in which treatment was performed on a site where the AI could not detect a tumor. In this case, it is possible to determine whether or not the image is an important image (rare image) to be learned, based on the image of the site being treated and the information in the electronic medical record. These two patterns are sketched below.
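  • The sketch below (the function and field names are hypothetical) links the AI result, the treatment action, and the electronic medical record to express the two patterns:

```python
def is_rare_image(ai_found_tumor: bool, treated: bool,
                  record_says_benign: bool) -> bool:
    """Pattern 1: the AI found a tumor but no treatment followed; rare
    unless the chart explains the lesion as a benign, untreated polyp.
    Pattern 2: treatment was performed where the AI detected no tumor."""
    if ai_found_tumor and not treated:
        return not record_says_benign
    if treated and not ai_found_tumor:
        return True
    return False
```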
  • Next, referring to FIG. 7, a method of using an inference model to select image data for teacher data will be described.
  • When a doctor performs an examination/diagnosis using the device 10 such as an endoscope, the doctor may take an action indicating that the inference result of the existing inference model is contrary to the doctor's diagnosis result.
  • In this method, whether or not an image is an important image is determined by inputting the image or the like obtained when the doctor makes a diagnosis into an inference model for extracting teacher data candidates.
  • That is, an inference model is first generated for inferring cases in which the inference result of the existing inference model and the doctor's diagnosis result contradict each other (see FIG. 7(a)).
  • Once this inference model has been generated, examination images can be input to it during normal examination/diagnosis, and training data candidates (annotation candidates) can be extracted (see FIG. 7(b)).
  • FIG. 7(a) shows deep learning for generating an inference model for inferring training data candidates.
  • A series of inspection images P11 to P15 are images acquired when a doctor or the like performs inspection Ex1 using the device 10 such as an endoscope (the images are arranged according to their acquisition timing).
  • Inspection images P11 and P12 are a group of images taken while approaching the discrimination part, image P13 is an image in the vicinity of a site suspected to be a tumor (the discrimination part), and images P14 and P15 are a group of images taken while moving away from the discrimination part.
  • These inspection image groups P11 to P15, together with the groups P21 to P25 and P31 to P35 acquired in inspections Ex2 and Ex3, are recorded in the recording unit. Note that images P22 and P33 are images near the discrimination part.
  • These images are then annotated to create training data.
  • At least images P13, P22, and P33 contain lesions (discrimination parts) such as tumors, and teacher data T13, T22, and T33, annotated so that the lesion positions can be identified, are created.
  • Once the teacher data have been created, they are input to the input layer IN of the neural network NNW for generating the inference model, and the weighting of the intermediate layers is determined so that the position of the lesion (discrimination part) is output from the output layer OUT as the discrimination result Output1. Learning is thus performed so that the model infers that an image with such a tendency should become training data. That is, when the learning is completed, an inference model is created that, when an inspection image is input, outputs the image as a training data candidate in cases where the inference result of the existing inference model contradicts the doctor's diagnosis result. A minimal sketch of this learning step follows.
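  • The PyTorch-style sketch below is an assumed stand-in (the actual network NNW, its layers, and its loss are not specified in the patent); it maps an inspection image to a score indicating whether the frame should become a training-data candidate:

```python
import torch
import torch.nn as nn

class CandidateNet(nn.Module):
    """Stand-in for NNW: inspection image in, candidate score out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x))

def train_step(model, optimizer, images, labels):
    """images: a batch of frames; labels: 1.0 (float) for annotated
    contradiction frames such as T13, T22, and T33, 0.0 otherwise."""
    optimizer.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(
        model(images).squeeze(1), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```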
  • The important images required for this learning may likewise be specific frames in a moving image and do not need to be recorded as still images. Images are recorded continuously during the inspection, and an important image frame can be identified by, for example, the number of minutes and seconds from a specific reference time such as the start of the moving image, or by the frame number; it suffices to record such frame-specifying information. In other words, the specific image frames of the teacher data need not be recorded one by one separately from what is recorded as continuous frame images, such as moving images or continuous-shot frames. Here too, data are easier to handle if a moving image of the examination exists, and in many cases the other image information can be kept as evidence. Based on this idea, it is not necessary to extract and record the important images separately from the moving image in FIG. 7.
  • FIG. 7(b) shows an inference operation of outputting an annotation candidate image when an acquired inspection image is input using the inference model generated in FIG. 7(a).
  • A doctor performs an examination Exa and acquires examination images Px1 to Px5. When these images are input to the input layer In, the annotation candidate Output2 is output from the output layer Out.
  • The acquired image here may also be a specific frame in a moving image and does not need to be recorded as a still image. For example, the frame can be specified by the number of minutes and seconds from a specific reference time such as the start of the moving image, or by the frame number; if necessary, it may be recorded separately from the moving image, or a signal specifying the particular image frame of the moving image may be recorded.
  • In addition, operation information Op1 to Op3 may be acquired together and recorded in association with the inspection image group. Although operation information is omitted for examinations Ex1 and Ex2 in FIG. 7, operation information may be associated and recorded for each examination in the same way. The operation information is also associated when creating the teacher data, and by performing deep learning using this teacher data, an inference model that takes the doctor's operation information into account is generated.
  • In this case, at inference time as well, operation information Opx1 to Opx3 can be input to the input layer In of the neural network NNW together with the images, so that annotation candidates can be inferred in consideration of the doctor's operation.
  • Once the inference model for extracting teacher data candidates has been generated, if an examination image is subsequently input to this inference model, it can be determined whether the inference result of the existing inference model is likely to contradict the doctor's diagnosis result. If it is inferred that the inference result and the diagnosis result will not match, the image can be output as a teacher data candidate. Therefore, it is possible to determine whether or not an image should be a teacher data candidate without displaying the image acquired by the device 10 such as an endoscope. In other words, it is possible to determine whether or not the relationship was as expected based on the action taken by the doctor even in a state where the inference result of the inference unit is not displayed, as the screening sketch below illustrates.
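  • Continuing the hypothetical CandidateNet above, frames can be screened without ever being displayed; only frame indices are kept, consistent with the frame-designation idea described earlier:

```python
import torch

def screen_video(model, frames, threshold=0.5):
    """Score each frame of an examination video and keep the indices of
    likely annotation candidates; no still images are extracted."""
    model.eval()
    candidates = []
    with torch.no_grad():
        for idx, frame in enumerate(frames):        # frame: a (3, H, W) tensor
            score = torch.sigmoid(model(frame.unsqueeze(0)))[0, 0]
            if score > threshold:
                candidates.append((idx, float(score)))
    return candidates
```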
  • As described above, in one embodiment of the present invention, inference is performed using an inference model for inferring and displaying a lesion from an image (see, for example, S1 in FIG. 2), and when the relationship between the inference result and the action of the doctor diagnosing the lesion is not the expected relationship (see, for example, S3 Yes and S5 in FIG. 2), the image data at the time of the display is acquired. Therefore, rare (important) data to be input to the inference unit can be obtained.
  • The acquired data can be collected efficiently for generating a new inference model and for calculating the evaluation and reliability of the new inference model, and can be used when generating the new inference model.
  • Similarly, inference may be performed using an inference model for displaying an operation guide for an endoscope (see, for example, S1 in FIG. 2 and FIG. 7(b)), and when the relationship between the operation guide display and the action of the doctor operating the endoscope is not the expected relationship (see, for example, S3 Yes and S5 in FIG. 2), the operation-related data at the time the operation guide was displayed is acquired.
  • The acquired data can be collected efficiently for generating a new inference model for operation guidance and for calculating the evaluation and reliability of that model, and can be used when generating the new inference model.
  • Further, sensor data provided in a medical device, or the doctor's operation data for the medical device, may be input to the inference model for inference (see, for example, S1 in FIG. 2 and FIG. 7(b)), and when the relationship between the inference result and the action of the doctor using the medical device is not the expected relationship, an identification code is attached to the sensor data or the operation data obtained when the expected relationship did not hold.
  • The data to which this identification code is attached can be used to efficiently collect data for generating a new inference model and for calculating the evaluation and reliability of the new inference model, and can also be used when generating the new inference model. A sketch of the identification-code assignment follows.
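  • The patent does not prescribe a code format; a UUID is used in this sketch purely as an illustration, and the record fields are assumptions:

```python
import uuid

def tag_unexpected(samples):
    """Attach an identification code to sensor/operation data records whose
    inference result and doctor action did not match, so that they can later
    be collected for new-model generation and reliability calculation."""
    for s in samples:                     # s: dict with "inferred" and "action"
        if s["inferred"] != s["action"]:  # expected relationship broken
            s["id_code"] = uuid.uuid4().hex
    return samples
```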
  • In the embodiment described above, the explanation focused on endoscopic images, but the present invention can also be applied to an information processing device that generates an inference model using images from various inspection devices other than an endoscope.
  • In particular, the technique of selecting training data candidates from time-series image frames is expected to be used in various fields.
  • Here, an example has been described in which, when the inference result for an image acquired in a body cavity (a situation unique to an endoscope) and an evaluator's evaluation of that image do not match, the image is selected as a teaching data candidate or the like.
  • Such a generalized procedure is not limited to endoscopic imaging and can be applied in any field.
  • Also, in the present embodiment, the device 10, the hospital system 20, and the management server 30 are described as separate entities, but these three may be configured integrally, or any two of them may be configured integrally.
  • Also, the control units 11, 21, and 31 have been described as devices configured from CPUs, memories, and the like. However, part or all of each unit may be configured as a hardware circuit, such as a gate circuit generated from a hardware description language such as Verilog or VHDL, or may use a hardware configuration that runs software, such as a DSP (Digital Signal Processor). Of course, these may be combined as appropriate.
  • Further, the control units 11, 21, and 31 are not limited to CPUs and may be any elements that function as controllers.
  • Each unit may be a processor configured as an electronic circuit, or may be a circuit unit within a processor configured from an integrated circuit such as an FPGA (Field Programmable Gate Array).
  • Alternatively, a processor composed of one or more CPUs may read and execute a computer program recorded on a recording medium, thereby executing the function of each unit.
  • In the present embodiment, the device 10 has been described as having the control unit 11, the display unit 12, the image acquisition unit 13, the inference unit 14, and the operation result reflection unit 15. However, these units need not be provided in a single integrated device and may be distributed as long as they are connected by a communication network such as the Internet.
  • Likewise, the hospital system 20 has been described as having the control unit 21, the UI unit 22, the contradiction determination unit 23, the communication unit 24, and the recording unit 25, and the management server 30 as having the control unit 31, the annotation/learning unit 32, the recording control unit 34, and the communication unit 35. In each case, the units need not be provided in a single integrated device and may be distributed as long as they are connected by a communication network such as the Internet.
  • The control described mainly in the flowcharts can often be implemented by a program, which may be stored in a recording medium or a recording unit. The program may be recorded at the time of product shipment, distributed on a recording medium, or downloaded via the Internet.
  • The present invention is not limited to the above-described embodiment as it is; at the implementation stage, it can be embodied by modifying the constituent elements without departing from the spirit of the invention. Various inventions can also be formed by appropriately combining the plurality of constituent elements disclosed in the above embodiment. For example, some of the components shown in the embodiment may be deleted, and components across different embodiments may be combined as appropriate.


Abstract

Provided are a data acquisition system and a data acquisition method which make it possible to acquire rare data that is input to an inference device. An inference model for inferring and displaying a lesion from an image is used to perform inference (S1), and when the relation between the result of the inference and an action of a doctor who diagnoses the lesion is not an expected relation (S3 Yes, S5), image data at the time of display is acquired (S7). Further, the acquired image data is regarded as a training data candidate, an annotation is given, and training data is created (S7).

Description

Data acquisition system and data acquisition method
The present invention relates to a data acquisition system and a data acquisition method for collecting image data for inference model generation in medical equipment.
An inference model is generated by collecting a large amount of data, annotating this data to create teacher data, and performing machine learning such as deep learning using this teacher data. It is known that various inferences can be made by inputting data into an inference unit having such an inference model.
For example, Patent Document 1 discloses an inference device that inputs sensor data from sensors provided on a target machine into an inference model and performs inference for controlling the target machine. This inference device includes the machine to be diagnosed and a database DB that acquires and stores sensor data of machines similar to this machine. An inference model is created using a first feature amount created from the sensor data acquired for the similar machines, a second feature amount created from the sensor data acquired for the machine to be diagnosed is corrected using the first feature amount, and the corrected second feature amount is applied to the inference model to determine the state of the machine to be diagnosed. With this configuration, the inference device of Patent Document 1 can correctly determine the state of the target machine even when the sensor data of the target machine is abnormal data.
JP 2020-187516 A
The inference device in Patent Document 1 can perform normal diagnosis even when abnormal data is input. However, Patent Document 1 says nothing about acquiring rare data in order to generate an inference model that infers properly even for rare data. That is, the inference device in Patent Document 1 cannot create an inference model that takes into account rare data such as data obtained when an abnormality occurs.
The present invention has been made in view of such circumstances, and an object of the present invention is to provide a data acquisition system and a data acquisition method capable of acquiring rare data to be input to an inference device.
To achieve the above object, a data acquisition system according to a first invention comprises an inference unit having an inference model for inferring and displaying a lesion from an image, and an acquisition unit that acquires the image data at the time of the display when the relationship between the inference result of the inference unit and the action of the doctor who diagnoses the lesion is not the expected relationship.
A data acquisition system according to a second invention is the system of the first invention, wherein the image data acquired by the acquisition unit at the time of the display is a moving image including the image frame at the time of the display, and is data containing information capable of specifying that image frame.
A data acquisition system according to a third invention is the system of the first invention, further comprising a recording unit that records the image acquired by the acquisition unit together with information about the rarity of the image.
A data acquisition system according to a fourth invention is the system of the first invention, further comprising a teacher data candidate selection unit that designates the image data acquired by the acquisition unit as a teacher data candidate.
A data acquisition system according to a fifth invention is the system of the first invention, wherein the image data acquired by the acquisition unit at the time of the display is a time-series image group including the image frame at the time of the display, and is data containing information capable of specifying that image frame.
A data acquisition system according to a sixth invention is the system of the first invention, further comprising a database that records the expected relationship between inference results obtained using the inference model and the actions of the doctor who makes the diagnosis.
A data acquisition system according to a seventh invention is the system of the first invention, further comprising an image similarity determination unit that designates teacher data candidates according to the similarity between the image shown when the lesion is displayed and the teacher data group used when the inference model was learned, wherein the acquisition unit acquires the image when it has been designated as a teacher data candidate by the image similarity determination unit.
A data acquisition system according to an eighth invention is the system of the first invention, wherein whether or not the expected relationship held is determined based on the action taken by the doctor in a state in which the inference result of the inference unit is not displayed.
A data acquisition method according to a ninth invention performs inference using an inference model for inferring and displaying a lesion from an image, and acquires the image data at the time of the display when the relationship between the result of the inference and the action of the doctor who diagnoses the lesion is not the expected relationship.
A data acquisition method according to a tenth invention performs inference using an inference model for displaying an operation guide for an endoscope, and acquires the operation-related data at the time the operation guide was displayed when the relationship between the operation guide display and the action of the doctor who operates the endoscope is not the expected relationship.
A data processing method according to an eleventh invention inputs sensor data provided in a medical device, or the doctor's operation data for the medical device, into an inference model to make an inference, and, when the relationship between the result of the inference and the action of the doctor using the medical device is not the expected relationship, makes it possible to attach an identification code to the sensor data or the operation data obtained when the expected relationship did not hold.
A data processing system according to a twelfth invention comprises an inference unit that inputs sensor data provided in a medical device, or the doctor's operation data for the medical device, into an inference model to make an inference, and a data processing unit that makes it possible to attach an identification code to the sensor data or the operation data obtained when the relationship between the result of the inference and the action of the doctor using the medical device is not the expected relationship.
According to the present invention, it is possible to provide a data acquisition system and a data acquisition method capable of acquiring rare data to be input to an inference device.
FIG. 1A is a block diagram mainly showing the electrical configuration of a data acquisition system according to an embodiment of the present invention.
FIG. 1B is a block diagram mainly showing the electrical configuration of the data acquisition system according to the embodiment.
FIG. 2 is a flowchart showing the image acquisition operation in the data acquisition system according to the embodiment.
FIG. 3 is a flowchart showing the important image acquisition operation in the data acquisition system according to the embodiment.
FIG. 4 is a diagram showing the recorded contents of a database in the data acquisition system according to the embodiment.
FIG. 5 is a flowchart showing the operation of the hospital system in the data acquisition system according to the embodiment.
FIG. 6 is a flowchart showing the operation of the device in the data acquisition system according to the embodiment.
FIG. 7 shows, for the data acquisition system according to the embodiment, (a) the acquisition of data for generating an inference model and (b) inference by an inference model generated using the acquired data.
An example in which the present invention is applied to a data acquisition system will be described below as an embodiment of the present invention. This data acquisition system comprises a device 10 such as an endoscope, an in-hospital system 20 that can cooperate with this device, and a management server 30. Each of these has units for realizing various functions, but which device realizes which function may be changed as appropriate. In the present embodiment, the device 10 is described as an endoscope, but it is not limited to an endoscope as long as it outputs data such as image data and performs inference using this data. Also, what is described as the in-hospital system 20 need not be located in the hospital; it may be, for example, a cloud-based management tool, as long as the information of the device 10 can be shared. Further, the person who accesses the cloud-based management tool or the like to evaluate data is described here as a doctor, but could equally be called an expert.
FIG. 1A shows the configuration of a device 10 such as an endoscope and an in-hospital system 20 that cooperates with the endoscope 10. FIG. 1B shows a management server 30 that cooperates with the hospital system 20; this management server 30 has a recording unit that records the existing teacher data and the like used to create the inference model, an annotation/learning unit, and so on. The hospital system 20 and the management server 30 can communicate for data exchange and the like through a communication network such as the Internet. The arrangement of the blocks within the device 10, the hospital system 20, and the management server 30 may be changed as appropriate from the arrangement shown in FIGS. 1A and 1B. In FIGS. 1A and 1B, the image data and the like acquired by the device 10 are transmitted to the management server 30 through the hospital system 20, but the device 10 and the management server 30 may of course communicate directly by wireless or wired communication.
In FIG. 1A, the device 10 such as an endoscope has a control unit 11, a display unit 12, an image acquisition unit 13, an inference unit 14, and an operation result reflection unit 15. The image acquisition unit 13 includes various circuits such as an imaging lens, an image sensor, an image sensor control circuit, and an imaging data processing circuit. When the device 10 is an endoscope, the image acquisition unit 13 acquires images of the inside of the body of a patient or the like. The input image P1 is an example of image data: information (tag data) such as category 1 is attached to the image data P1d and, as will be described later, it is an image of a rare case. Although only P1 is drawn as an input image in FIG. 1A, many images are of course input at the time of an examination, and they may be moving images rather than still images. The image acquisition unit 13 may input a plurality of images for each examination, as indicated, for example, by examinations Ex1 to Ex3 in FIG. 7.
The operation result reflection unit 15 has an operation section for operating the device 10, a user interface, and the like. For example, when the device 10 is an endoscope, the operation section includes a release operation section for instructing imaging, a light source, operation sections for air supply, water supply, and suction, and mechanisms such as an angle mechanism and a biopsy mechanism. The interface may include a user interface such as a touch screen. The operation result reflection unit 15 realizes the operation of each operation section according to how it is operated. For example, when the release operation section instructs acquisition of a still image, the image acquisition unit 13 acquires a still image, and when it instructs acquisition of a moving image, the image acquisition unit 13 acquires a moving image.
The inference unit 14 has an inference engine equipped with an existing inference model 14a, a category determination unit 14b, and an image similarity determination unit 14c. The existing inference model 14a is an inference model generated by the annotation/learning unit 32 using the existing teacher data 33a recorded in the recording unit 33 of the management server 30, which will be described later. The existing inference model 14a functions as an inference model for inferring and displaying a lesion from an image, and the inference unit 14 functions as an inference unit equipped with this existing inference model 14a (see, for example, S1 in FIG. 2).
The existing inference model 14a is not limited to inferring lesions; it may be, for example, an inference model for displaying an operation guide for the endoscope. The image acquired by the image acquisition unit 13 and/or the operation state information acquired by the operation result reflection unit 15 are input to this inference model, the operation guide is inferred, and the operation guide is displayed based on the inference result. The inference unit 14 functions as an inference unit equipped with this existing inference model 14a.
The existing inference model 14a may also be an inference model other than one that infers lesions or operation guides. Besides acquiring image data with an image sensor, the medical device may be provided with other sensors, and, of course, the doctor's operation data for the medical device may be input to the existing inference model 14a. The inference unit 14 thus functions as an inference unit that inputs sensor data provided in a medical device, or the doctor's operation data for the medical device, into an inference model and makes an inference.
Note that the existing inference model itself need not be generated by the annotation/learning unit 32 in the management server 30; the annotation/learning unit 32 may request an external learning device to generate the inference model and use the model so generated. The inference engine may be configured by hardware, by software (a program), or by a combination of hardware and software. The inference unit 14 may also be provided in the hospital system 20 or in the management server 30, and inference using the existing inference model may be performed within the hospital system 20 or the management server 30.
The existing inference model 14a can infer whether or not there is a tumor or the like in the input image and, if there is, can display its position. The existing inference model 14a may also infer lesions other than tumors. Further, the existing inference model 14a may perform inference for guiding a doctor or the like in operating the device 10, and the inference result may be displayed on the display unit 12. The image data P1d acquired by the image acquisition unit 13 is input to the inference engine equipped with the existing inference model 14a, and the inference result of the existing inference model 14a is output. When making an inference, the inference engine also calculates the reliability of the inference and outputs this reliability as well.
The category determination unit 14b determines the category of the image acquired by the image acquisition unit 13. It determines categories such as the observation site, the operation state, the treatment, and the model in use, based on the information recorded in category 1 (P1c) of the input image P1, information on the equipment being used, and the like.
The image similarity determination unit 14c determines whether or not the image acquired by the image acquisition unit 13 is a similar image. Many images are acquired by the image acquisition unit 13, and many of them resemble one another, so the image similarity determination unit 14c determines whether images are similar to each other. In the present embodiment, as teacher data for generating a new inference model, image data for cases in which the inference result of the existing inference model and the doctor's diagnosis result differ is acquired (see the contradiction determination unit 23, the new inference model data 33c, and so on).
Whether an image with a high or a low degree of similarity is required depends on the situation. Even when a sufficient number of similar teacher data exist, similar images may further be requested as test data; conversely, even when there are enough similar images, dissimilar images may be requested. Furthermore, even a similar image may have been obtained under rare circumstances or of a rare object. In any of these situations, a circuit or program that compares images and determines whether their features are similar is often useful.
Besides this method, the image similarity determination unit 14c may acquire an image that it determines to be similar to images that have actually been used as teacher data, and this image may be used. The image similarity determination unit 14c functions as an image similarity determination unit that designates teacher data candidates according to the similarity between the image shown when the lesion is displayed and the teacher data group used when the inference model was learned. In this case, the acquisition unit acquires the image when it has been designated as a teacher data candidate by the image similarity determination unit, as sketched below.
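As a minimal sketch of such a similarity judgment (the patent does not fix a similarity measure; cosine similarity over feature vectors is assumed here, and the names are illustrative), an image can be designated a teacher data candidate when it resembles images that have previously served as teacher data:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def candidate_by_similarity(feature: np.ndarray, teacher_features: list,
                            threshold: float = 0.9) -> bool:
    """True when the image's feature vector is close enough to at least one
    image from the existing teacher data group (threshold illustrative)."""
    return any(cosine_similarity(feature, t) >= threshold
               for t in teacher_features)
```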
The display unit 12 has a display monitor and displays menu screens (operation images), in-body examination images acquired by the image acquisition unit 13, and the like. A doctor or the like can operate the device 10 such as an endoscope while observing the acquired (captured) images, and can also perform examinations, diagnoses, and so on. The display unit 12 also receives the inference result 12a of the inference unit 14 together with the image acquired by the image acquisition unit 13 and displays the inference result. When displaying the inference result 12a, the reliability 12b of the inference may also be displayed.
The control unit 11 is a processor including a CPU (Central Processing Unit) and various peripheral circuits. The CPU or other processing device controls each unit in the device 10 according to a program stored in the storage unit of the device 10. Note that the control unit 11 may perform some or all of the functions of the image acquisition unit 13, the operation result reflection unit 15, the inference unit 14, and the display unit 12. In addition to the units described above, the device 10 also has, for example, a communication unit. This communication unit communicates with the hospital system 20, but may also communicate directly with the management server 30.
The in-hospital system 20 is a server or the like provided in the hospital; it can be connected to a plurality of devices 10 by wireless or wired communication and can exchange various data with them. Note that the in-hospital system 20 may be provided in the cloud as long as it can be connected to the devices 10 and the like.
The hospital system 20 receives from the device 10 the images input by the image acquisition unit 13, as well as the inference results of the inference unit 14, the reliability of the inference results, the category determination results, the image similarity determination results, and so on. It records the images acquired by the device 10, selects images for generating a new inference model according to the inference results and the reactions of the doctor or the like, and transmits them to the management server 30. In this case, the images for generating a new inference model may be selected according to the inference result, the image that was displayed and confirmed by the doctors, and the reaction of the doctors who confirmed the image. The reaction may be the one shown while the image is displayed, or the image may be selected according to the reaction when findings are written after the image display has finished. Using these images, the hospital system requests generation of a new inference model and acquires the generated model. The hospital system 20 has a control unit 21, a UI unit 22, a contradiction determination unit 23, a communication unit 24, and a recording unit 25.
The UI unit 22 determines the operating state of the UI (user interface) of the device 10. The UI unit 22 includes user interfaces such as a keyboard with which the doctor manually inputs text information and a microphone for inputting voice information. The UI unit 22 may also serve as an interface for inputting the findings of the doctor in charge; for example, it may be linked with an electronic medical chart in which the doctor writes such findings. If linkage with the electronic chart is not possible, the chart information may be input manually, or the chart may be scanned and the result read as characters. Medical records are sometimes recorded by voice; in that case, a function that converts speech to text can be used together.
The contradiction determination unit 23 compares the inference result of the inference unit 14 with the reaction of the doctor who operated the device 10 or who made a diagnosis by viewing the image (including the findings converted to text and the like), in other words the reaction of an expert who made an expert judgment, and determines whether there is a contradiction between the two. For example, suppose an existing inference model that infers whether or not there is a tumor is set, the inference unit 14 infers that there is no tumor in the image acquired by the device 10, but the doctor observes the image and determines that there is a tumor. In this case, there is a contradiction between the inference result of the inference unit 14 and the doctor's diagnosis result. The contradiction determination unit 23 determines whether such a contradiction has occurred. Besides determining whether the inference result and the doctor's diagnosis result contradict each other, the contradiction determination unit 23 may also determine the degree of importance in consideration of, for example, the reliability of the inference result. The determination by the contradiction determination unit 23 will be described later with reference to FIG. 4.
The timing at which the contradiction determination unit 23 determines whether there is a contradiction may be when the doctor observes the acquired images while operating the device 10. Alternatively, the determination may be made after the examination, when a diagnosis is made while observing the images recorded in the recording unit 25 on the display unit of the in-hospital system. The electronic chart in which the diagnosis results are recorded may be filled in while observing the images during or after the examination.
As the user interface of the UI unit 22, besides the electronic chart described above, the operating state of the doctor's equipment may be input and used. For example, when the inference result output is that there is no tumor, but the doctor observes the same site many times, or performs a biopsy, dye spraying, or excision on the site, a tumor may in fact be present. In such a case, the contradiction determination unit 23 may determine that the inference result is contradicted.
The contradiction determination unit 23 is not limited to cases in which the inference result and the doctor's action (reaction) concerning a lesion such as a tumor are not in the expected relationship; it may also determine whether the operation guide display and the doctor's operation contradict each other. For example, when the existing inference model 14a is an inference model for displaying an operation guide for the endoscope, the operation guide can be displayed using this model. When the operation guide display and the operation of the doctor who operates the endoscope while watching the guide do not match, the contradiction determination unit 23 determines the mismatch, and the image and/or the operation information at the time the mismatch occurred may be acquired.
The functions of the UI unit 22 and the contradiction determination unit 23 may be realized by hardware and/or software; of course, part of their functions may be realized by hardware and the rest by software. The control unit 21 may also serve the functions of the UI unit 22 and the contradiction determination unit 23; that is, a processor may be provided that realizes all or part of the functions of the control unit 21, the UI unit 22, and the contradiction determination unit 23.
The contradiction determination unit 23 functions as an acquisition unit (processor) that acquires the image data at the time of the display when the relationship between the inference result of the inference unit and the action of the doctor diagnosing the lesion is not the expected relationship (see S5 in FIG. 2, S13 to S17 in FIG. 3, S31 in FIG. 5, S53 in FIG. 6, and so on). The image data acquired by the acquisition unit at the time of the display is a moving image (time-series image group) including the image frame at the time of the display, together with information capable of specifying that image frame. The contradiction determination unit 23 attaches information about the rarity of the image to the image acquired by the acquisition unit, and functions as a teacher data candidate selection unit that designates the acquired image data as teacher data candidates (see, for example, S7 in FIG. 2).
If an image is rare, the image data can be used selectively at training time, test time, and so on, according to the required performance and required specifications of the inference model. Whether an image is rare can also be determined from the attributes of the subject from which the image was obtained and from the features of the subject appearing in the image, so the reason for the rarity may be recorded in the image metadata or the like. The acquired image may be something like a captured still image, or it may be acquired as a moving image of the whole examination (a time-series image group) in such a way that individual frames within it can be identified. That is, the image data acquired by the acquisition unit at the time of display may be a time-series image group including the displayed image frame, together with information that allows that frame to be identified. The system and the control unit know which frame of the moving image is being shown in live view or playback display and can determine which frame is displayed, so recording an important image as frame-designation information is easy.
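 As a concrete illustration of such frame-designation metadata, the following sketch records an important frame as a pointer into the continuously recorded examination video rather than as a separate still image. All field names and the JSON-lines storage format are assumptions made for this example.

```python
import json
import time

def record_important_frame(video_id: str, frame_index: int, fps: float,
                           rarity: float, reason: str, out_path: str) -> None:
    entry = {
        "video_id": video_id,                # identifies the examination video
        "frame_index": frame_index,          # which frame was being displayed
        "time_offset_s": frame_index / fps,  # seconds from video start (can be
                                             # shown as minutes:seconds)
        "rarity": rarity,                    # rarity (importance) score as metadata
        "reason": reason,                    # why the image is considered rare
        "recorded_at": time.time(),
    }
    with open(out_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```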
As described above, the contradiction determination unit 23 and the like are not limited to lesions such as tumors, and may determine whether an operation guide display and the action (reaction) of a doctor or the like are in the expected relationship. Furthermore, when the in-hospital system 20 (or the device 10 or the management server 30) has an inference unit that performs inference by inputting sensor data provided in the medical device, or the doctor's operation data for the medical device, into an inference model, the contradiction determination unit 23 and the like function as a data processing unit that, when the relationship between the inference result and the action of the doctor using the medical device is not the expected relationship, can assign an identification code to the sensor data or operation data from the time the expected relationship did not hold. This data processing unit may simply assign an identification code to the sensor data or operation data from that time, may simply select the data to which the identification code has been assigned, or may further acquire and record that data.
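 A minimal sketch of such a data processing unit, assuming hypothetical record and field names: one identification code is issued per mismatch episode and attached to the related sensor and operation records so they can later be selected, acquired, and recorded.

```python
import uuid

def tag_mismatch_records(records: list[dict]) -> str:
    """Attach one identification code to every sensor/operation record from the
    episode in which the expected relationship did not hold."""
    mismatch_id = uuid.uuid4().hex
    for record in records:
        record["mismatch_id"] = mismatch_id
    return mismatch_id
```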
The communication unit 24 has a communication circuit and can communicate with the communication unit 35 of the management server 30 and the like. The communication unit 24 can transmit information such as the image data recorded in the recording unit 25 to the management server 30. The communication unit 24 can also receive, from the management server 30 or the like, the inference model generated by the learning unit from training data created by annotating the image data and the like recorded in the recording unit 25. The communication unit 24 can also receive the existing inference model 32b stored in the management server 30 and transmit it to the inference unit 14 in the device 10.
When the contradiction determination unit 23 determines that the inference result and the doctor's diagnosis contradict each other, the communication unit 24, when transmitting image data and the like to the management server for generating a new inference model, transmits the contradicting items (for example, an inference result of no tumor against a doctor's diagnosis of a tumor) in association with each other. That is, since the timing at which the image data is acquired differs from the timing at which the doctor's diagnosis becomes known from the entry in the electronic medical record, the two are made associable when they cannot be transmitted simultaneously. The control unit 21 may perform this association.
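 Because the image and the diagnosis become known at different times, one simple way to keep them associable is to key both by a shared examination identifier and combine them only once both halves exist, as in this hypothetical sketch (the field names are illustrative, not from this embodiment).

```python
def build_mismatch_payload(exam_id: str,
                           image_entry: dict | None,
                           diagnosis_entry: dict | None) -> dict | None:
    """Return a combined record for upload only once both halves have arrived."""
    if image_entry is None or diagnosis_entry is None:
        return None   # cannot transmit simultaneously yet; keep waiting
    return {
        "exam_id": exam_id,
        "inference": image_entry,      # e.g. {"result": "no tumor", "frame": 1203}
        "diagnosis": diagnosis_entry,  # e.g. {"chart": "tumor present"}
    }
```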
The recording unit 25 is an electrically rewritable non-volatile memory, and records the image data acquired by the image acquisition unit 13 and the tag data and the like recorded in association with this image data. The recording unit 25 may record all the image data acquired by the image acquisition unit 13, or may record only the images for which the contradiction determination unit 23 found a contradiction between the inference result and the diagnosis result. The recording unit 25 also stores a database indicating the expected relationships between inference results and doctor actions, described later with reference to FIG. 4.
The recording unit 25 functions as a recording unit that attaches information about the rarity (importance) of the image to the image acquired by the acquisition unit and records it (see, for example, S15 in FIG. 3 and FIG. 4). This recording unit is not limited to the in-hospital system 20 and may be provided in the management server 30; a recording unit that determines the rarity (importance) and records it in association with the image may also be provided in the device 10. The recording unit 25 records a database indicating the expected relationship between the inference result obtained using the inference model and the action of the doctor making the diagnosis (see, for example, FIG. 4).
The control unit 21 is a processor that includes a CPU (Central Processing Unit) and the like, together with various peripheral circuits. A processing device such as this CPU controls each part of the in-hospital system 20 according to a program stored in a storage unit within the in-hospital system 20. The control unit 21 may perform some or all of the functions of the UI unit 22, the contradiction determination unit 23, the communication unit 24, and the recording unit 25.
The management server 30 can be connected to a plurality of in-hospital systems 20 and the like through a communication network such as the Internet, records training data, and either generates an inference model based on this training data or requests an external learning device to generate the inference model. The generated inference model can be transmitted to the in-hospital system 20 through a communication network such as the Internet. The management server 30 has a control unit 31, an annotation/learning unit 32, an existing inference model 32b, a recording unit 33, a recording control unit 34, and a communication unit 35.
The recording unit 33 in the management server 30 has an electrically rewritable non-volatile memory and can record various data. The management server 30 records existing training data A 33a, existing training data B 33b, and new inference model data 33c in this recording area. The existing training data A 33a is training data that was actually adopted when the existing inference model 32b was generated. The existing training data B 33b, on the other hand, is training data that was created as training data but, for some reason, was not adopted when the existing inference model 32b was generated.
The recording unit 33 can record a plurality of items of training data as each of the existing training data A 33a and the existing training data B 33b. In FIG. 1B, three items of training data 33aa, 33ab, and 33ac are depicted for the existing training data A 33a, and one item of training data 33ba is depicted for the existing training data B 33b, but a large number of items of training data can be recorded. The training data 33aa consists of image data 33aaa, category 1 (33aab), and annotation 33aac.
The image data 33aaa is the image data adopted as training data, and category 1 (33aab) indicates the category of that image data. The annotation 33aac is the annotation attached by the annotation/learning unit 32. As annotations, for example, the presence or absence of a tumor, its position, and the type of tumor are attached, and an action guide for the doctor at the time the image data was acquired may also be attached. Action guides for doctors include, for example, treatment candidates displayed as guides, such as biopsy, dye spraying, staining, and excision, and may also be items entered in the medical record. The image data in the training data 33ab, 33ac, and 33ba is likewise recorded in association with the same kinds of information (category 1 and annotation) as the training data 33aa.
The new inference model data 33c is image data and the like from when the contradiction determination unit 23 in the in-hospital system 20 determined that there was a contradiction between the inference result of the existing inference model 14a and the doctor's diagnosis. The image data recorded as new inference model data is the data, among the image data recorded in the recording unit 25, that was transmitted to the management server 30 as new inference model data. At the time this image data is recorded in the in-hospital system 20, it may not yet have been determined to be contradictory; therefore, once the contradiction becomes clear, it is transmitted from the recording unit 25 as new inference model data, and at the same time the annotation information, category 1 information, and the like to be attached to the image data are transmitted to the annotation/learning unit 32, where annotated training data is created and recorded as the new inference model data 33c. Besides being used as training data for generating a new inference model, the new inference model data 33c can also be used for evaluating the new inference model, calculating its reliability, and so on.
The new inference model data 33c may also be produced by methods other than the one described above. For example, when image data is temporarily recorded in the recording unit 25 in the in-hospital system 20, the in-hospital system 20 may transmit it to the management server 30 for temporary recording there, and when the contradiction determination unit 23 determines that there is a contradiction, the annotation/learning unit 32 may attach an annotation and record the data as the new inference model data 33c. Similar images that resemble the images for the new inference model may also be collected. Furthermore, as new inference model data, not only image data but also specifications for additional training data may be recorded.
The annotation/learning unit 32 attaches annotations to the image data transmitted from the in-hospital system 20 and the like and creates training data. Annotations may be attached based on the data transmitted together with the image data from the in-hospital system 20 or the like, or may be attached automatically based on other information. Alternatively, an expert such as a doctor may attach the annotations manually.
The annotation/learning unit 32 generates an inference model by machine learning such as deep learning, using training data such as the new inference model data 33c. For this purpose, the annotation/learning unit 32 has hardware such as an inference engine for inference model generation, or software for inference model generation. When learning cannot be performed within the management server 30 to generate the inference model, it may also have a request unit that requests an external learning device to generate the inference model.
The existing inference model 32b is an inference model generated using the existing training data A 33a. This inference model 32b is transmitted to the device 10 and set in the inference unit 14. As described above, the device 10 inputs the image data acquired by the image acquisition unit 13 into the inference unit 14, performs inference using the existing inference model 14a, and outputs the inference result to the display unit 12, the contradiction determination unit 23, and the like. This inference may also be performed in the in-hospital system 20 rather than in the device 10; in that case, an inference unit similar to the inference unit 14 is provided within the in-hospital system 20.
Here, deep learning will be explained. "Deep learning" is a multilayered structuring of the process of "machine learning" using a neural network. A representative example is the "forward-propagation neural network", which sends information from front to back to make a determination. In its simplest form, a forward-propagation neural network needs only three layers: an input layer composed of N1 neurons, an intermediate layer composed of N2 neurons given by parameters, and an output layer composed of N3 neurons corresponding to the number of classes to be discriminated. The neurons of the input and intermediate layers, and of the intermediate and output layers, are each connected by connection weights, and bias values are added to the intermediate and output layers, so that logic gates can easily be formed.
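 The three-layer network just described can be written down directly. The following NumPy sketch (layer sizes chosen arbitrarily for illustration) sends information from front to back through connection weights, with bias values added at the intermediate and output layers.

```python
import numpy as np

N1, N2, N3 = 4, 8, 2                      # layer sizes (arbitrary example values)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(N1, N2))            # input -> intermediate connection weights
b1 = np.zeros(N2)                         # intermediate-layer bias values
W2 = rng.normal(size=(N2, N3))            # intermediate -> output connection weights
b2 = np.zeros(N3)                         # output-layer bias values

def forward(x: np.ndarray) -> np.ndarray:
    """Send information from front to back and return class probabilities."""
    h = np.maximum(0.0, x @ W1 + b1)      # intermediate activations (ReLU)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())     # softmax over the N3 classes
    return e / e.sum()

print(forward(np.array([0.5, -1.0, 0.3, 2.0])))
```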
A neural network may have three layers if it performs simple discrimination, but by using many intermediate layers it also becomes possible to learn, in the course of machine learning, how to combine multiple feature quantities. In recent years, networks with 9 to 152 layers have become practical from the standpoint of learning time, determination accuracy, and energy consumption. A "convolutional neural network", which performs a process called "convolution" that compresses image features, operates with minimal processing, and is strong at pattern recognition, may also be used. A "recurrent neural network" (fully connected recurrent neural network), in which information flows in both directions, may also be used; it can handle more complex information and supports the analysis of information whose meaning changes depending on order and sequence.
To realize these techniques, conventional general-purpose arithmetic processing circuits such as CPUs and FPGAs (Field Programmable Gate Arrays) may be used. However, since much of neural network processing is matrix multiplication, processors specialized for matrix calculation, called GPUs (Graphic Processing Units) or TPUs (Tensor Processing Units), may be used instead. In recent years, such artificial intelligence (AI) dedicated hardware, the "neural network processing unit (NPU)", has been designed so that it can be integrated and embedded together with other circuits such as a CPU, and in some cases has become part of the processing circuit.
Other machine learning methods include, for example, support vector machines and support vector regression. The learning here involves calculating classifier weights, filter coefficients, and offsets; besides this, there are also methods using logistic regression processing. When a machine is to make some determination, a human must teach the machine how to make that determination. In the present embodiment, a method of deriving image determination by machine learning is adopted, but a rule-based method that applies rules acquired by humans through empirical rules and heuristics may also be used.
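 As a hedged illustration of these alternative methods, the following uses scikit-learn (a library not named in this document) to fit a support vector machine and a logistic regression classifier to toy feature vectors; the features and labels are invented for the example.

```python
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]  # toy image features
y = [1, 0, 1, 0]                                       # 1 = lesion, 0 = normal

svm = SVC(kernel="rbf").fit(X, y)                      # support vector machine
logreg = LogisticRegression().fit(X, y)                # logistic regression
print(svm.predict([[0.15, 0.85]]), logreg.predict_proba([[0.15, 0.85]]))
```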
The recording control unit 34 controls the recording of data in the recording unit 33. As described above, when image data is transmitted from the recording unit 25 of the in-hospital system 20, or when related data from a contradiction determination is transmitted from the contradiction determination unit 23, the recording control unit 34 controls the recording of such data.
The communication unit 35 has a communication circuit and can communicate with the communication unit 24 of the in-hospital system 20 and the like. The communication unit 35 can receive information such as the image data recorded in the recording unit 25. The communication unit 35 can also communicate with servers other than the in-hospital system 20 and collect various information. Although the device 10 and the management server 30 do not communicate directly in FIGS. 1A and 1B, they may exchange data directly. In that case, the data may be recorded as new inference model data at the point when the contradiction determination unit 23 makes the contradiction clear.
The control unit 31 is a processor that includes a CPU (Central Processing Unit) and the like and is composed of an ASIC (Application Specific Integrated Circuit) including various peripheral circuits. A processing device such as this CPU controls each part of the management server 30 according to a program stored in a storage unit within the management server 30. The control unit 31 may perform some or all of the functions of the annotation/learning unit 32, the recording control unit 34, and the communication unit 35.
Thus, in the data acquisition system of FIGS. 1A and 1B, the existing inference model 32b generated in the management server 30 or the like is set in the inference unit 14 of the device 10 as the existing inference model 14a, and the input image P1 is inferred using this existing inference model 14a. This inference may, for example, infer the presence or absence of a lesion such as a tumor, or infer a guide for a doctor or the like to operate the device 10 or the like.
When a contradiction arises between the inference result of the inference unit 14 and the diagnosis made by the doctor or the like using the device 10, in other words, when the relationship between the inference result of the inference model and the action of the doctor or the like making the diagnosis is not the expected relationship, the image acquired at that time is acquired as new inference model data and recorded as the new inference model data 33c. A new inference model can be generated using this recorded new inference model data 33c. Compared with the existing inference model 14a used until then, this new inference model can reduce the cases in which the doctor judges that the inference result is strange.
Thus, in the data acquisition system according to the present embodiment, in order to generate a new inference model, the image at the time of display is acquired when the relationship between the inference result of the inference model and the action taken by the doctor making the diagnosis is not the expected relationship. Besides the method described above, new inference model data may also be acquired by (1) determining whether data is rare by reference to data that has actually been used as training data in the past, and collecting the data judged to be rare as data for the new inference model, or (2) matching against the doctor's reaction without actually outputting the inference result, and collecting the data at that time as data for the new inference model. Method (1) above will be described later with reference to FIG. 7.
From the new inference model data acquired in the present embodiment, training data can be created by annotating the presence or absence and position of lesions such as tumors, and a new inference model can be generated using this training data. Besides generating the new inference model, the data can also be used for purposes such as evaluating the new inference model and determining its reliability.
Next, the operation of important image acquisition will be described using the flowchart shown in FIG. 2. An important image is an image displayed at a time when the relationship between the inference result of the existing inference model and the action taken by the doctor making the diagnosis was not the expected relationship. This flow is realized by the control unit 11 in the device 10 and/or the control unit 21 in the in-hospital system 20 operating in cooperation according to programs stored in the memory of each control unit.
When the flow of FIG. 2 starts, first, an examination is performed using AI (S1). Here, a doctor examines a patient or the like using a device 10 such as an endoscope. At this time, the existing inference model 14a (AI) of the inference unit 14 is used to infer the presence or absence and position of a lesion such as a tumor. A guide display for operating the device 10 may also be provided by AI at this time.
Next, it is determined whether the doctor showed a reaction contrary to the inference result (S3). Here, the contradiction determination unit 23 determines whether a contradiction has arisen between the inference result of the existing inference model 14a of the inference unit 14 and the doctor's diagnosis, that is, whether the doctor prescribed what would be expected from the inference result. The doctor's prescription is determined, for example, based on the electronic medical record, and whether it differs from the inference result is determined based on the entries in that record. Besides determination based on the electronic medical record, the doctor's reaction may be determined from treatments such as performing a biopsy, spraying dye, staining, or excising a polyp or the like, based on the output of the operation result reflection unit 15 or the like; it may also be determined from the doctor's operation state, for example whether, based on the images acquired by the image acquisition unit 13, the doctor repeatedly operated the device 10 such as an endoscope to observe a site, which differs from the operation used when observing a normal site.
In the determination in step S3, the importance of the image may be determined in consideration of the reliability of the inference result. If the doctor makes a diagnosis that differs from an inference result even though the reliability of that inference result was high, the image at that time is of great importance and is useful for generating a new inference model. For example, when creating training data for generating a new inference model, images of great importance may be given higher priority for use. The importance may also be used to decide the timing of new inference model generation, for example starting generation of a new inference model when the number of images of great importance reaches a predetermined number or more.
If, in step S1, a guide display for operating the device 10 was provided by AI, it may be determined in step S3 whether the doctor performed an operation contrary to the AI operation guide display. That is, it may be determined whether the relationship between the operation guide display and the action of the doctor operating the endoscope is the expected relationship.
If the result of the determination in step S3 is that the doctor showed a reaction contrary to the inference result, the image at that time is acquired as an important image (S5). Here, the image for which the doctor showed a reaction contrary to the inference result is acquired. Note that it may only become clear afterwards that the doctor showed a reaction contrary to the inference result. For example, when the electronic medical record is used as the doctor's reaction, the record may be created later than the time the doctor observed the image. In that case, all the images acquired by the device 10 may be temporarily recorded, and recorded as important images once it becomes clear that the doctor showed a reaction contrary to the inference result. For this purpose, the image data may be temporarily recorded in the recording unit 25 in the in-hospital system 20, or all of it may be temporarily recorded in the management server 30, and then recorded as important images once the contrary reaction becomes clear. A sketch of this deferred flow follows.
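 The deferred recording just described might look like the following sketch: every frame is first temporarily recorded under its examination ID, and the relevant frames are promoted to important images only once the doctor's contrary reaction becomes known. The names and structures are assumptions for illustration, not part of this embodiment.

```python
temp_store: dict[str, list[dict]] = {}    # exam_id -> all frames of the examination

def record_frame(exam_id: str, frame: dict) -> None:
    """Temporarily record every acquired frame during the examination."""
    temp_store.setdefault(exam_id, []).append(frame)

def promote_important(exam_id: str, frame_indices: list[int]) -> list[dict]:
    """Called once the mismatch with the chart is confirmed; marks the frames
    displayed at that time as important images."""
    frames = temp_store.get(exam_id, [])
    important = []
    for i in frame_indices:
        frames[i]["important"] = True
        important.append(frames[i])
    return important
```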
As described above, the important image here need not be a still-image recording like a photograph. The images during the examination may be recorded continuously, and it may be made possible to record which frame corresponded to the important image frame, for example as a time in minutes and seconds from a specific reference time such as the start of the moving image, or as a frame number. In other words, it is not necessary to go to the trouble of separately recording specific frames apart from what is recorded as continuous-frame images such as moving images or continuous-shot frame images. In many cases it is better for the original moving image to remain, since other image information and the like can then also be kept as evidence.
If it is determined in step S3 that the doctor performed an operation contrary to the AI operation guide display, the doctor's operation information at that time is acquired in step S5. That is, when it is determined that the relationship between the operation guide display and the action of the doctor operating the endoscope is not the expected relationship, the operation-related data from when the operation guide was displayed is acquired. Furthermore, not limited to lesions and operation guides, when inference is performed in step S1 by inputting sensor data provided in the medical device, or the doctor's operation data for the medical device, into the inference model, it is determined in step S3 whether the relationship between the inference result and the action of the doctor using the medical device is the expected relationship, and if the result of that determination is that the expected relationship did not hold, an identification code can be assigned in step S5 to the sensor data or operation data from that time.
If the result of the determination in step S3 is that the doctor did not show a reaction contrary to the inference result (for example, the expected prescription was made), or once the image at that time has been acquired as an important image in step S5, annotation is requested next (S7). Here, since image data for important images has been acquired for new inference model generation, attaching annotations to the image data may be requested in order to create training data. After the processing in step S7, the important image acquisition flow of FIG. 2 ends.
The description here has focused on important image acquisition for a lesion-determination inference model, but in highly specialized devices such as endoscopes, inference models are often also used for operation guides. Such an operation guide assists the operator by presenting the operation needed at each moment, based on changes over time in the movement of the subject in the acquired image data (which is chiefly caused by operation of the endoscope) and in the operation data of the operation unit (the presence, amount, and timing of rotation and pushing operations). As with the lesion-determination inference model, an inference model can be obtained by learning, from image data and operation data, the situations in which operation errors tend to occur; but when an unexpected error occurs, acquiring the data that serves as circumstantial evidence of that situation makes it possible in the future to provide guides that also handle such scenes. In other words, inference is performed using an inference model for displaying an endoscope operation guide, and when the relationship between the guide display result and the action of the doctor operating the endoscope is not the expected relationship, acquiring the operation-related data from the time the guide was displayed (images obtained in time series, the history of operation data, and so on) makes it possible to improve the inference model.
Next, the operation of important image determination will be described using the flow shown in FIG. 3. This flow shows the operation for determining whether an image is an important image, performed in step S31 of FIG. 5 and step S53 of FIG. 6. This flow is also realized by the control unit 11 in the device 10 and the control unit 21 in the in-hospital system 20 operating in cooperation according to programs stored in the memory of each control unit.
When the flow of FIG. 3 starts, first, the inference result and reliability determination are acquired (S11). Here, the contradiction determination unit 23 acquires the inference result and the reliability value from the inference unit 14. The contradiction determination unit 23 also acquires the doctor's reaction from the UI unit 22 based on the electronic medical record or the like created by the doctor. Besides the electronic medical record, as described above, treatments performed by the doctor such as biopsy, dye spraying, staining, and excision may be acquired as the doctor's reaction, and the operation state of the device 10 during the doctor's examination may also be used for the determination.
Next, it is determined whether the doctor's reaction, the inference result, the reliability, and so on are not in the expected relationship (S13). Here, based on the information acquired in step S11, the contradiction determination unit 23 determines whether there is any contradiction between the doctor's reaction and the inference result, in other words, whether the doctor reacted as the inference result would lead one to expect. In this case, the importance (rarity) may be determined with the reliability also taken into account. That is, when the reliability of the inference result is high and yet the doctor reacted contrary to that inference result, the existing inference model is insufficient, so the importance (rarity) of the image is great, and it is desirable to generate a new inference model taking this importance into account.
In operator training and the like, several people of differing expertise (a beginner and a veteran trainer) may be present, and a beginner undergoing training may make an erroneous judgment or operation even while referring to the output of the inference model. In this case, the trainer may issue a caution in response to the operating beginner's mistake; such a case is also a situation in which the inference model's output was not correctly conveyed to the beginner, so the valuable veteran's reaction is used and the data is kept as important data. There are also cases in which a veteran checks results once determined by a beginner and points out mistakes; this, too, is a situation in which the inference model's output was not correctly conveyed to the beginner, so the valuable veteran's reaction is used and the data is kept as important data. Depending on whether the mistake is minor or serious, a new inference model can be generated taking the importance of the data at that time into account.
If the result of the determination in step S13 is that the expected relationship does not hold, the image is acquired (or its acquisition is requested) as an important image (rare image) (S15). Since the determination in step S13 found that the expected relationship does not hold, the image at this time is an important image (rare image), and the image is acquired, or its acquisition is requested, in order to generate a new inference model using this image data. As described above, the important image here may be a specific frame within a moving image and need not be recorded as a still image. The images during the examination may be recorded continuously, it may be made possible to record which frame corresponded to the important image frame, and information capable of designating that specific image frame may be recorded. In many cases it is better to keep the moving image of the whole examination, since it is easier to handle and other image information and the like can also be kept as evidence.
Once the image has been acquired (or its acquisition requested) in step S15, the importance (rarity) is next determined and the image is recorded (S17). Here, the contradiction determination unit 23 determines the importance (rarity) based on the doctor's reaction, the inference result, the reliability, the training data of the inference model, past inference images, and so on; when the importance (rarity) is determined to be higher than a predetermined value, the image judged not to be in the expected relationship is temporarily recorded in the recording unit 25 or recorded in the recording unit 33. As described above, when the result of inference using the existing inference model 14a runs contrary to the doctor's diagnosis, that is, when the relationship between the inference result of the inference model and the action of the doctor making the diagnosis is not the expected relationship, the image is recorded as new inference model data in order to create training data and generate a new inference model. The determination operations in steps S13 to S17 will be described later with reference to FIG. 4, taking tumor detection as an example.
When the importance has been determined and the image recorded in step S17, or when the result of the determination in step S13 is that the expected relationship holds, the important image determination flow of FIG. 3 ends.
Next, using FIG. 4, the determination of whether the relationship between the inference result of the inference model and the action of the doctor making the diagnosis is the expected relationship will be described using the example of tumor detection. In this example, the existing inference model 14a has been generated by performing deep learning using training data in which the positions of tumors and the like were annotated on images acquired during endoscopy. When an image acquired by the image acquisition unit 13 is input into this existing inference model 14a, whether a tumor has been detected, and if so its position, are inferred and output. The calculated reliability of the inference at this time is also output.
On the left vertical axis of the chart in FIG. 4, columns are provided for the inference result of tumor detection and the reliability of that inference result; on the upper horizontal axis of the chart, columns are provided for whether the doctor's action was obtained from the medical record or from a treatment.
In the chart of FIG. 4, an image is not acquired in the following cases.
(A-1) It is inferred that no tumor is detected, the reliability of that inference result is high, and the doctor's action supported by the medical record is "no tumor".
(A-2) It is inferred that no tumor is detected, the reliability of that inference result is low, and the doctor's action supported by the medical record is "no tumor".
(A-3) It is inferred that a tumor has been detected, the reliability of that inference result is high, and the doctor's action supported by the medical record is "tumor present".
(A-4) It is inferred that a tumor has been detected, the reliability of that inference result is low, and the doctor's action supported by the medical record is "tumor present".
(A-5) It is inferred that a tumor has been detected, the reliability of that inference result is high, and the doctor performed a treatment.
(A-6) It is inferred that a tumor has been detected, the reliability of that inference result is low, and the doctor performed a treatment.
The situations (A-1) to (A-6) above are all cases in which the inference result and the doctor's diagnosis agree. That is, these are cases in which the inference for the acquired image was made accurately and the existing inference model 14a does not need to be modified.
On the other hand, in the chart of FIG. 4, an image is acquired as an important image in the following cases.
(B-1) It is inferred that no tumor is detected and the reliability of that inference result is high, but the doctor's action supported by the medical record is "tumor present". In this case, the importance is judged to be great.
(B-2) It is inferred that no tumor is detected and the reliability of that inference result is low, but the doctor's action supported by the medical record is "tumor present".
(B-3) It is inferred that a tumor has been detected and the reliability of that inference result is high, but the doctor's action supported by the medical record is "no tumor". In this case, the importance is judged to be great.
(B-4) It is inferred that a tumor has been detected and the reliability of that inference result is low, but the doctor's action supported by the medical record is "no tumor".
(B-5) It is inferred that no tumor is detected and the reliability of that inference result is high, but the doctor performed a treatment. In this case, the importance depends on the degree of the treatment.
(B-6) It is inferred that no tumor is detected and the reliability of that inference result is low, but the doctor performed a treatment. In this case, the importance depends on the degree of the treatment.
The situations (B-1) to (B-6) above are all cases in which the inference result and the doctor's diagnosis do not agree. That is, these are cases in which the inference for the acquired image may not have been made accurately and the existing inference model 14a should preferably be modified. These images are acquired as important images.
In the examples above, when the inference result and the doctor's diagnosis disagree even though the reliability of the inference result is high, the importance is set to great. When the reliability is high, the inference result and the doctor's diagnosis are extremely likely to agree; since the results nevertheless conflict, the image acquired at that time can be called a rare image. The magnitude of the importance may be considered when generating a new inference model. For example, images of great importance may be given higher priority for adoption as training data when generating the new inference model. When there are many images of great importance, the importance may also be used in deciding timing, for example by bringing forward the timing of new inference model generation. Images of great importance may also be used preferentially when evaluating the new inference model or when calculating its reliability.
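 Putting cases (A-1) to (A-6) and (B-1) to (B-6) together, the FIG. 4 rule can be paraphrased as the following sketch, which collapses the chart entry and the treatment into a single "doctor suspects a tumor" signal plus a treatment-severity value. The function and parameter names are illustrative, not taken from this document.

```python
def decide_acquisition(tumor_inferred: bool, high_confidence: bool,
                       chart_says_tumor: bool = False,
                       treatment_severity: float = 0.0):
    """Return (acquire_image, importance) following the FIG. 4 chart."""
    doctor_suspects = chart_says_tumor or treatment_severity > 0.0
    if tumor_inferred == doctor_suspects:
        return (False, 0.0)                # cases A-1 .. A-6: agreement, no acquisition
    if not tumor_inferred and treatment_severity > 0.0:
        return (True, treatment_severity)  # B-5, B-6: importance tracks treatment degree
    if high_confidence:
        return (True, 1.0)                 # B-1, B-3: rare image, importance great
    return (True, 0.5)                     # B-2, B-4: mismatch at low reliability

# Example: high-confidence "no tumor" inference, but the chart records a tumor.
print(decide_acquisition(False, True, chart_says_tumor=True))  # (True, 1.0)
```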
Next, the operation of the in-hospital system 20 will be described using the flowchart shown in FIG. 5. This flow is realized by the control unit 21 in the in-hospital system 20 controlling each part of the in-hospital system 20 according to a program stored in the memory of the control unit.
When the flow of FIG. 5 starts, first, patient information is input (S21). Here, when a patient checks in at a medical facility or the like, the patient information at that time is recorded in the recording unit 25 through the UI unit 22 of the in-hospital system 20. If there is already recorded information, it may be associated with the patient information.
Next, device cooperation, information sharing, and the like are performed (S23). Here, cooperation is established with a device 10 such as an endoscope used in the medical facility or the like so that information can be shared. As described later, in steps S43 and S57 (see FIG. 6), the in-hospital system 20 and the device 10 cooperate and share information.
Next, it is determined whether the doctor's findings and the like have been input (S25). After performing an examination using the device 10 such as an endoscope, the doctor inputs the findings and the like through the UI unit 22 or the like, for example by text input from a keyboard or by voice input.
If the result of the determination in step S25 is that the doctor's findings and the like have been input, they are reflected in the medical record (S27). Here, the text input result is reflected in the electronic medical record or the like. In the case of voice input, it may be converted into text data using voice recognition; when findings and the like are handwritten on paper, they may be converted into data using a scanner or the like and reflected in the medical record.
Next, IC (informed consent), treatment, prescription, and the like are performed (S29). Here, informed consent means obtaining the subject's consent after a sufficient explanation. In other words, it is a process in which the patient and family fully understand the medical condition and treatment, the medical staff likewise understand how the patient and family have received their wishes, the various circumstances, and the explanations, and what kind of medical care is chosen, and information is shared among the people concerned, including the family, medical staff, social workers, and care managers, so that everyone reaches agreement. When the examination results are conveyed and treatment or the like is performed based on the examination, informed consent is obtained from the patient and family. On that basis, treatment is performed and prescriptions are issued according to the patient's symptoms. There are ICs for medical acts, as well as ICs for handling information that can constitute personal information and related information.
Next, if there is an important image or the like among the input images, the image is acquired and recorded (S31). The contradiction determination unit 23 determines whether an image acquired by the device 10 is an important image, and if it is determined to be important, acquires and records it. This determination of whether an image is important is performed according to the flow of FIG. 3 described above (see, for example, S15 in FIG. 3). That is, whether the image is an important image is determined based on whether the relationship between the inference result of the inference model and the action of the doctor making the diagnosis is the expected relationship. The importance is also determined when judging the relationship between the inference result and the doctor's diagnosis (see S17 in FIG. 3 and FIG. 4). In this step, if the image is an important image, the image is acquired and recorded, and the importance is obtained and recorded.
As described above, the important image here may be a specific frame within a moving image and need not be recorded as a still image. The images during the examination may be recorded continuously, and the specific image frame may be designated by making it possible to record which frame corresponded to the important image frame, for example as a time in minutes and seconds from a specific reference time such as the start of the moving image, or as a frame number. In other words, it is not necessary to record specific image frames separately from what is recorded as continuous-frame images such as moving images or continuous-shot frame images. Here too, it is often better to keep the moving image of the whole examination, since it is easier to handle and other image information and the like can also be kept as evidence.
Once the important image has been recorded in step S31, or if the result of the determination in step S25 is that the doctor's findings and the like have not been input, it is determined whether the examination has ended (S33). When the doctor finishes examining the patient from step S21, the doctor inputs that fact into the in-hospital system 20 through the UI unit 22, so the determination is made based on whether this input has been made. If the result of this determination is that the examination has not ended, the flow returns to step S23 and the operations described above are executed.
 On the other hand, if the result of the determination in step S33 is that the consultation has ended, accounting, appointment booking, prescription guidance, and the like are performed (S35). Since the consultation is over, the patient pays the consultation fee and other charges, makes an appointment if a follow-up visit is needed, and receives a prescription if one has been issued. When these procedures have been carried out, the flow of the hospital system ends.
 Next, the operation of the device 10, such as an endoscope, will be described using the flowchart shown in FIG. 6. This flow is realized by the control unit 11 in the device 10 controlling each unit in the device 10 according to a program stored in the memory within the control unit.
 When the flow of FIG. 6 starts, it is first determined whether or not to cooperate with the hospital system (S41). As described above, the device 10, such as an endoscope, and the hospital system 20 can cooperate with each other. In this step, it is determined whether or not there is a hospital system 20 with which the device 10 cooperates. If there are devices in the medical facility that should be linked with the hospital system 20, setting processing may be performed in advance so that they can cooperate.
 If the result of the determination in step S41 is that cooperation with the hospital system is possible, the two cooperate and share information (S43). If communication between the hospital system 20 and the device 10 is enabled by wireless or wired communication, in this step the two cooperate and share information. As an example of cooperation, image data and related information acquired by the device 10 are transmitted to the hospital system 20, and the existing inference model that the hospital system 20 has received from the management server 30 is transmitted to the device 10. Further, the inference performed using the existing inference model 14a does not necessarily have to be performed within the device 10; an inference unit may be arranged in the hospital system 20, inference may be performed based on images transmitted from the device 10 to the hospital system 20, and the inference result may be returned to the device 10.
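 As a sketch of this division of labor, the device could hand an image to the hospital system and receive an inference result back over whatever transport connects them; the interface below is hypothetical, since the disclosure does not fix a protocol, and the class and function names are assumptions.

```python
from typing import Callable, Protocol

class InferenceChannel(Protocol):
    def infer(self, image_bytes: bytes) -> dict: ...

class LocalInference:
    """Inference executed on the device itself (cf. existing inference model 14a)."""
    def infer(self, image_bytes: bytes) -> dict:
        return {"lesion_found": False, "confidence": 0.0}  # placeholder model

class HospitalSystemInference:
    """Inference delegated to an inference unit inside the hospital system."""
    def __init__(self, send_fn: Callable[[bytes], dict]):
        self._send = send_fn  # e.g., a wireless/wired transport to system 20
    def infer(self, image_bytes: bytes) -> dict:
        return self._send(image_bytes)  # the result comes back from the hospital system

def run_inference(channel: InferenceChannel, image_bytes: bytes) -> dict:
    # The device does not care where the model runs; only the result matters.
    return channel.infer(image_bytes)
```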
 When cooperation and information sharing have been performed in step S43, or when the result of the determination in step S41 is that there is no cooperation, patient information and the like are then input (S45). Here, patient information such as the patient's name and examination items is input through the UI unit (not shown) of the device 10. All information about the patient may be entered, or the input may be limited to the necessary information. Alternatively, the hospital system 20 may transfer the information input in step S21 (see FIG. 5) to the device 10.
 Subsequently, preparations for the examination are made (S47). Here, preparations are made for an examination using the device 10, such as an endoscope. For example, in the case of an endoscopic examination, an endoscope appropriate to the examination site is prepared, and the patient takes an antifoaming agent and an anesthetic and enters the examination room. When the preparations are complete, it is next determined whether or not an examination is in progress (S49). Here, the control unit 11 determines that an examination is in progress if a doctor or other operator is using the device 10, such as an endoscope.
 If the result of the determination in step S49 is that the examination has started, a diagnosis guide, diagnosis assistance determination, and the like are first performed, and evidence is recorded (S51). Here, a diagnosis guide and diagnosis assistance determination are provided while the doctor makes a diagnosis using the device 10, such as an endoscope. For example, the diagnosis guide may display how to operate the device 10, and the diagnosis assistance determination may display whether or not a lesion such as a tumor is present in the acquired image. In addition, data serving as evidence, such as still images and moving images taken during the examination, is recorded. If the device 10 has a recording unit, the data may be recorded in that recording unit, or it may be recorded in the recording unit 25 in the hospital system 20.
 Subsequently, if there is an important image or the like, image acquisition is requested (S53). Whether or not an image is important is determined according to the flow shown in FIG. 3 described above; that is, based on whether the relationship between the inference result of the inference model and the action of the diagnosing doctor is the expected relationship. If it can be determined within the device 10 that an image is important, the device acquires that image. If the determination cannot be made within the device 10, the hospital system 20 may be asked to make the determination in step S57. As described above, image acquisition here may simply make it possible to designate a specific frame within a moving image, and the image need not be recorded as a still image. Keeping the moving image of the entire examination is often easier to handle and also allows other image information to be preserved as evidence. Recording important videos, including those that serve as evidence, and additionally keeping separate still images and managing the associations between them is often troublesome.
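 A minimal sketch of this step, assuming the device first tries to judge importance locally and otherwise delegates the judgment to the hospital system, is given below; the function names and the delegation interface are illustrative assumptions.

```python
from typing import Callable, Optional

def request_image_acquisition(
    frame_id: int,
    judge_locally: Callable[[int], Optional[bool]],
    ask_hospital_system: Callable[[int], bool],
    record_frame_reference: Callable[[int], None],
) -> None:
    """Mark a frame for recording if it is judged to be important.

    judge_locally returns True/False when the device can decide by itself,
    and None when the decision must be delegated (cf. S53/S57).
    """
    decision = judge_locally(frame_id)
    if decision is None:
        decision = ask_hospital_system(frame_id)  # delegated determination
    if decision:
        record_frame_reference(frame_id)          # designate the frame in the video
```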
 Next, it is determined whether or not to cooperate with the hospital system (S55). As in step S41, it is determined whether or not there is a hospital system 20 with which the device 10 cooperates. If the result of this determination is that cooperation with the hospital system is possible, the two cooperate and share information (S57). Here, as in step S43, the hospital system 20 and the device 10 cooperate and share information.
 When cooperation and information sharing have been performed in step S57, or when the result of the determination in step S55 is that there is no cooperation with the hospital system, it is next determined whether or not to end (S59). When the examination using the device 10, such as an endoscope, is finished, the doctor inputs that fact through the UI unit in the device 10, so the determination is made based on whether or not this input has been made. If the result of this determination is that the examination has not ended, the process returns to step S49 and the operations described above are executed.
 On the other hand, if the result of the determination in step S59 is that the examination has ended and there is an important image or the like, the IC and related information are recorded together with it (S61). Here, along with the important image of step S53, the fact that informed consent was obtained in step S29 (see FIG. 5) is recorded. To obtain informed consent (IC), an explanatory document stating that the subject's data will be used in AI development research is printed and shown to the subject. The doctor or other staff then enters the subject's consent or non-consent, for example by having the subject click a consent or non-consent button icon. Alternatively, the subject and the doctor may sign a consent form and the signed form may be scanned, or the IC may be obtained by e-mail, by entry on a Web form, or the like. In any case, it is sufficient that a full explanation was given, that the subject consented, and that these facts are recorded. The IC may be obtained before the examination, or after the examination as long as the sedative has worn off and the subject is in a normal state. When the processing in step S61 ends, the operation of the device of FIG. 6 ends.
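 The record produced in S61 could be as simple as a consent entry linked to the stored frame references; the field names and consent channels below are illustrative assumptions made for the sketch, not a schema defined in this disclosure.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class ConsentRecord:
    subject_id: str
    consented: bool
    obtained_at: datetime
    channel: str             # e.g., "button", "signed-form-scan", "e-mail", "web"
    explanation_doc_id: str  # the explanatory document shown to the subject

@dataclass
class ImportantImageRecord:
    frame_refs: List[str]    # pointers into the examination video (cf. S53)
    consent: ConsentRecord   # IC recorded together with the images

def record_with_consent(frame_refs: List[str],
                        consent: ConsentRecord) -> ImportantImageRecord:
    # Images lacking a positive consent must not enter the training data pool.
    if not consent.consented:
        raise ValueError("no informed consent; data cannot be used for AI development")
    return ImportantImageRecord(frame_refs=frame_refs, consent=consent)
```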
 As described above, in the operation flow of the hospital system in FIG. 5 and the operation flow of the device (endoscope) in FIG. 6, whether or not an image is an important image is determined based on whether the relationship between the inference result of the inference model and the action of the diagnosing doctor is the expected relationship, and if an image is determined to be important, it is acquired and recorded (see, for example, S31 and S53). In the flows shown in FIGS. 5 and 6, the determination of whether an image is important is made in steps S31 and S53, and the image is recorded if it is important; however, these processes may be performed by either the device 10 or the hospital system 20 alone, or by both in cooperation.
 In the present embodiment, there are several patterns by which an image comes to be determined to be an important image (rare image); for example, the following cases arise, and they can be handled as follows.
(Pattern 1) A case in which the AI found a tumor but the doctor did not perform any treatment. In this case, whether the AI result was an erroneous determination, or whether the doctor judged that no treatment was needed because the polyp is benign and will not become cancerous, is linked to the information in the electronic medical record.
(Pattern 2) A case in which treatment was performed on a site where the AI could not detect a tumor. In this case, it is possible to judge whether or not the image is an important image (rare image) to be learned, based on the images taken during the treatment and the information in the electronic medical record. A sketch of this cross-check follows the list.
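 As an illustration only, the two patterns could be detected by cross-checking the AI output, the treatment log, and the electronic medical record; the input flags and the return convention below are assumptions made for the sketch.

```python
from enum import Enum, auto
from typing import Tuple

class Pattern(Enum):
    NONE = auto()
    AI_FOUND_NO_TREATMENT = auto()  # Pattern 1
    TREATED_AI_MISSED = auto()      # Pattern 2

def classify(ai_found_tumor: bool, doctor_treated: bool,
             emr_says_benign: bool) -> Tuple[Pattern, bool]:
    """Return (pattern, is_candidate) for an examined site.

    emr_says_benign reflects the electronic-medical-record entry used to
    separate 'safe polyp, deliberately untreated' from an AI false positive.
    """
    if ai_found_tumor and not doctor_treated:
        # Rare only if the EMR does NOT explain the inaction as a benign polyp.
        return (Pattern.AI_FOUND_NO_TREATMENT, not emr_says_benign)
    if doctor_treated and not ai_found_tumor:
        return (Pattern.TREATED_AI_MISSED, True)  # the AI missed a treated lesion
    return (Pattern.NONE, False)
```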
 Next, a method of selecting image data for teacher data using an inference model will be described with reference to FIG. 7. In the embodiment described above, when the doctor performs an examination or diagnosis using the device 10, such as an endoscope, and takes an action indicating that the inference result of the existing inference model contradicts the doctor's diagnosis, the image at that time is acquired as an important image. In contrast, in the example shown in FIG. 7, whether or not an image is important is determined by inputting the images obtained at diagnosis into an inference model for extracting teacher data candidates. In this example, an inference model is generated for inferring cases in which the inference result of the existing inference model and the doctor's diagnosis contradict each other (see FIG. 7(a)). Once this inference model has been generated, examination images can be input to it during ordinary examinations and diagnoses, and teacher data candidates (annotation candidates) can be extracted (see FIG. 7(b)).
 FIG. 7(a) shows deep learning for generating an inference model that infers teacher data candidates. The series of examination images P11 to P15 are images acquired when a doctor or other operator performs examination Ex1 using the device 10, such as an endoscope; these images are acquired at predetermined time intervals (or at the timing of predetermined operations). Examination images P11 and P12 are images taken while approaching the site to be differentiated, image P13 is an image taken in the vicinity of a site suspected to be a tumor (the differentiation site), and images P14 and P15 are images taken while moving away from the differentiation site. The image groups P11 to P15, P21 to P25, and P31 to P35 taken during these examinations are recorded in the recording unit. Images P22 and P33 are likewise images near their respective differentiation sites.
 After the examination image groups P11 to P15, P21 to P25, and P31 to P35 have been acquired, these images are annotated to create teacher data. In this case, at least images P13, P22, and P33 contain lesions (differentiation sites) such as tumors, and teacher data T13, T22, and T33 are created by annotating them so that the positions of the lesions can be identified. Once the teacher data have been created, they are input to the input layer IN of the neural network NNW for generating the inference model, and the weights of the intermediate layers are determined so that the position of the lesion (differentiation site) is output from the output layer OUT as differentiation result Output1. Learning is thus performed so that the model can infer that images with such a tendency should become teacher data. That is, when learning is complete, an inference model is obtained that, given an examination image, outputs that image as a teacher data candidate when the inference result of the existing inference model and the doctor's diagnosis contradict each other.
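 The following PyTorch-style sketch illustrates this kind of supervised training in the abstract: a small network is adjusted so that position-annotated labels are reproduced at the output layer. The network size, layer choices, and dummy tensors are assumptions made for the sketch and do not come from the disclosure.

```python
import torch
import torch.nn as nn

class CandidateNet(nn.Module):
    """Toy network: image in, lesion-position heatmap out (cf. Output1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # 1-channel position heatmap
        )
    def forward(self, x):
        return self.features(x)

def train_step(model, optimizer, images, target_heatmaps):
    """One weight update: adjust the intermediate layers so the annotated
    lesion position is reproduced at the output layer."""
    optimizer.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(
        model(images), target_heatmaps)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with dummy tensors standing in for annotated frames such as T13, T22, T33:
model = CandidateNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
imgs = torch.randn(3, 3, 64, 64)                       # three annotated frames
maps = torch.zeros(3, 1, 64, 64); maps[:, :, 30:34, 30:34] = 1.0
print(train_step(model, opt, imgs, maps))
```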
 As described above, the important images for the learning required here, such as the teacher data T13, T22, and T33, may be specific frames within a moving image and need not be recorded as still images. Images may be recorded continuously during the examination, and the frame corresponding to an important image can be identified by recording, for example, the elapsed minutes and seconds from a reference time such as the start of the video, or the frame number. In other words, it is not necessary to record each specific teacher data frame separately from what has been recorded as continuous frames, such as a moving image or a burst of still frames. Here too, keeping the moving image of the entire examination is often easier to handle and allows other image information to be preserved as evidence. With this way of thinking, in FIG. 2 and elsewhere as well, it is not necessary to extract the important image from the moving image and record it separately; it suffices to be able to designate which frame of an image sequence, such as a moving image, it is.
 FIG. 7(b) shows an inference operation in which, using the inference model generated in FIG. 7(a), acquired examination images are input and images that become annotation candidates are output. In this example, the doctor performs examination Exa and acquires examination images Px1 to Px5. When these examination images Px1 to Px5 are input to the input layer In of the inference engine having the neural network NNW, annotation candidates Output2 are output from the output layer Out. By performing this inference, even if conventional AI (inference using the existing inference model) failed, for whatever reason, to output an annotation candidate image at the crucial moment, images that should become teacher data can still be obtained by using the inference model generated in FIG. 7(a). The acquired images here, too, may be specific frames within a moving image and need not be recorded as still images. For example, it suffices to be able to identify the elapsed minutes and seconds from a reference time such as the start of the video, or the frame number; if necessary, the frames may be recorded separately from the moving image, or a signal designating specific image frames of the moving image may be recorded.
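 Continuing the training sketch above, candidate extraction at examination time could look like the following; the 0.5 threshold and the peak-response scoring are illustrative assumptions, and `model` is the network trained in the previous sketch.

```python
import torch

def extract_candidates(model, frames, threshold=0.5):
    """Feed examination frames Px1..Pxn through the candidate-extraction
    network and return the indices of frames whose peak heatmap response
    exceeds the threshold, i.e. the annotation candidates (cf. Output2)."""
    model.eval()
    with torch.no_grad():
        heatmaps = torch.sigmoid(model(frames))  # (N, 1, H, W)
        peaks = heatmaps.amax(dim=(1, 2, 3))     # strongest response per frame
    return [i for i, p in enumerate(peaks.tolist()) if p > threshold]

# Usage: the returned indices point back into the examination video,
# so no separate still images need to be saved.
# (model comes from the previous training sketch)
candidate_frames = extract_candidates(model, torch.randn(5, 3, 64, 64))
```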
 In FIG. 7(a), when the examination image group P31 to P35 is acquired, the operation information Op1 to Op3 may be acquired together with it and recorded in association with the image group. Although the operation information is omitted for examinations Ex1 and Ex2 in FIG. 7(a), operation information is likewise acquired when those examination image groups are taken, and is recorded in association with the image groups P11 to P15 and P21 to P25. When the teacher data are created, the operation information is associated with them as well, and by performing deep learning using these teacher data, an inference model that takes the doctor's operation information into account is generated. At the time of the inference in FIG. 7(b), the operation information Opx1 to Opx3 is input to the input layer In of the neural network NNW in addition to the examination images Px1 to Px5, so that annotation candidates can be inferred in consideration of the doctor's operations.
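 One simple way to feed the operation information alongside the images is to concatenate an operation-feature vector with the image features before the final layers; this fusion scheme is a common choice and an assumption made here for illustration, not something specified in the disclosure.

```python
import torch
import torch.nn as nn

class MultiModalCandidateNet(nn.Module):
    """Image frames plus operation information (e.g., scope advance/retract or
    button presses encoded as a fixed-length vector) -> candidate score."""
    def __init__(self, op_dim=8):
        super().__init__()
        self.img_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> (N, 16)
        self.head = nn.Sequential(
            nn.Linear(16 + op_dim, 32), nn.ReLU(),
            nn.Linear(32, 1))                       # candidate / not candidate
    def forward(self, images, op_info):
        fused = torch.cat([self.img_encoder(images), op_info], dim=1)
        return self.head(fused)

# Usage with dummy inputs standing in for Px1..Px5 and Opx features:
net = MultiModalCandidateNet()
score = net(torch.randn(5, 3, 64, 64), torch.randn(5, 8))
```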
 In this way, in the example shown in FIG. 7, once the inference model for extracting teacher data candidates has been generated, it is thereafter possible, by inputting examination images to this inference model, to infer whether or not the inference result of the existing inference model contradicts the doctor's diagnosis, and to output the image as a teacher data candidate when the two are inferred not to match. For this reason, it is possible to determine whether or not an image is a teacher data candidate without displaying the image acquired by the device 10, such as an endoscope. That is, with the inference result of the inference unit not displayed, it can be determined whether or not the expected relationship held, based on the action taken by the doctor.
 As described above, in one embodiment of the present invention, inference is performed using an inference model for inferring and displaying a lesion from an image (see, for example, S1 in FIG. 2), and when the relationship between the inference result and the action of the doctor diagnosing the lesion is not the expected relationship (see, for example, S3 Yes and S5 in FIG. 2), the image data at the time of the display is acquired (see, for example, S5 in FIG. 2). Rare (important) data input to the inference unit can therefore be collected. Such data can be acquired efficiently for generating a new inference model and for evaluating the new inference model and calculating its reliability, and the acquired data can be used when generating the new inference model.
 Also, in one embodiment of the present invention, inference is performed using an inference model for displaying an operation guide for an endoscope (see, for example, S1 in FIG. 2 and FIG. 7(b)), and when the relationship between the operation guide display and the action of the doctor operating the endoscope is not the expected relationship, the operation-related data at the time the operation guide was displayed is acquired (see, for example, S3 Yes and S5 in FIG. 2, and FIG. 7). Such data can be acquired efficiently for generating a new inference model for the operation guide and for evaluating that model and calculating its reliability, and the acquired data can be used when generating the new inference model.
 Also, in one embodiment of the present invention, sensor data provided in a medical device, or a doctor's operation data for the medical device, is input to an inference model for inference (see, for example, S1 in FIG. 2 and FIG. 7(b)), and when the relationship between the inference result and the action of the doctor using the medical device is not the expected relationship, an identification code can be assigned to the sensor data or operation data obtained when the expected relationship did not hold (see, for example, S3 Yes and S5 in FIG. 2, and FIG. 7). Data to which this identification code has been assigned is useful for efficiently collecting data for generating a new inference model and for evaluating the new inference model and calculating its reliability, and such data can be used when generating the new inference model. In the case of a medical device used by a specialist such as a doctor, even if an inference model for the device is generated, the model may fail to make appropriate inferences in actual use. Even in such a case, according to the present embodiment, data that runs counter to the views of specialists such as doctors can be found, and inference models can be corrected or generated using data collected quickly and efficiently.
 Although the description of this embodiment has centered on endoscopic images, the invention is also applicable to information processing apparatuses that generate inference models using images from various examination apparatuses other than endoscopes. In other words, the technique of selecting teacher data candidates from time-series image frames is expected to find use in a variety of fields. In the present embodiment, an example has been described in which an image acquired in a body cavity, as is characteristic of endoscopy, is selected as a teacher data candidate or the like when the inference result for the image and the assessment of the image's evaluator do not match. However, such normalized procedures can exist in any field, so the invention is not limited to endoscopic images.
 Also, in one embodiment of the present invention, the device 10, the hospital system 20, and the management server 30 have been described as separate entities, but all three may be configured as one unit, or any two of the three may be configured as one unit. Further, in one embodiment of the present invention, the control units 11, 21, and 31 have been described as devices composed of a CPU, memory, and the like. However, besides being implemented in software by a CPU and a program, some or all of each unit may be configured as hardware circuits; a hardware configuration such as gate circuits generated from a description in a hardware description language such as Verilog or VHDL (VHSIC Hardware Description Language) may be used, or a hardware configuration using software, such as a DSP (Digital Signal Processor), may be used. These may of course be combined as appropriate.
 Further, the control units 11, 21, and 31 are not limited to CPUs and may be any elements that function as controllers, and the processing of each unit described above may be performed by one or more processors configured as hardware. For example, each unit may be a processor configured as an electronic circuit, or may be a circuit unit within a processor configured as an integrated circuit such as an FPGA (Field Programmable Gate Array). Alternatively, a processor composed of one or more CPUs may execute the function of each unit by reading and executing a computer program recorded on a recording medium.
 Also, in one embodiment of the present invention, the device 10 has been described as having the control unit 11, the display unit 12, the image input unit 13, the inference unit 14, and the operation result reflection unit 15. However, these need not be provided within a single apparatus; the units described above may be distributed, provided they are connected by a communication network such as the Internet. Similarly, the hospital system 20 has been described as having the control unit 21, the UI unit 22, the contradiction determination unit 23, the communication unit 24, and the recording unit 25, and the management server 30 as having the control unit 31, the annotation/learning unit 32, the recording control unit 34, and the communication unit 35; in each case, these units need not be provided within a single apparatus and may be distributed, provided they are connected by a communication network such as the Internet.
 In recent years, artificial intelligence capable of judging various criteria collectively has often been used, and it goes without saying that improvements such as performing the branches of the flowcharts shown here collectively also fall within the scope of the present invention. If the user can input approval or disapproval of such control, the embodiments shown in this application can learn the user's preferences and be customized in a direction suited to that user.
 Among the techniques described in this specification, the control described mainly with flowcharts can often be set by a program, and such programs may be stored in a recording medium or recording unit. The program may be recorded in the recording medium or recording unit at the time of product shipment, a distributed recording medium may be used, or the program may be downloaded via the Internet.
 Also, in one embodiment of the present invention, the operations of this embodiment have been described using flowcharts, but the order of the processing steps may be changed, any step may be omitted, steps may be added, and the specific processing content within each step may be changed.
 Further, regarding the operation flows in the claims, the specification, and the drawings, even where words expressing order, such as "first" and "next," have been used for convenience, this does not mean that execution in that order is essential in places not specifically described otherwise.
 The present invention is not limited to the above embodiment as it stands; at the implementation stage, the constituent elements can be modified and embodied without departing from its gist. Various inventions can also be formed by appropriately combining the plurality of constituent elements disclosed in the above embodiment. For example, some of the constituent elements shown in the embodiment may be deleted, and constituent elements from different embodiments may be combined as appropriate.
DESCRIPTION OF SYMBOLS: 10: device; 11: control unit; 12: display unit; 12a: inference result; 12b: reliability; 13: image acquisition unit; 14: inference unit; 14a: existing inference model; 14b: category determination unit; 14c: image similarity determination unit; 15: operation result reflection unit; 20: hospital system; 21: control unit; 22: UI unit; 23: contradiction determination unit; 24: communication unit; 25: recording unit; 30: management server; 31: control unit; 32: annotation/learning unit; 32b: existing inference model; 33: recording unit; 33a: existing teacher data A; 33b: existing teacher data B; 33c: data for new inference model; 34: recording control unit; 35: communication unit

Claims (12)

  1.  A data acquisition system comprising:
     an inference unit having an inference model for inferring and displaying a lesion from an image; and
     an acquisition unit that acquires image data at the time of the display when the relationship between an inference result of the inference unit and an action of a doctor who diagnoses the lesion is not an expected relationship.
  2.  The data acquisition system according to claim 1, wherein the image data acquired by the acquisition unit at the time of the display is a moving image including the image frame at the time of the display, and is data including information capable of identifying the image frame.
  3.  The data acquisition system according to claim 1, further comprising a recording unit that records the image acquired by the acquisition unit with information about the rarity of the image attached.
  4.  The data acquisition system according to claim 1, further comprising a teacher data candidate selection unit that takes the image data acquired by the acquisition unit as a teacher data candidate.
  5.  The data acquisition system according to claim 1, wherein the image data acquired by the acquisition unit at the time of the display is a time-series image group including the image frame at the time of the display, and is data including information capable of identifying the image frame.
  6.  The data acquisition system according to claim 1, further comprising a database that records the expected relationship between inference results obtained using the inference model and the actions of the diagnosing doctor.
  7.  The data acquisition system according to claim 1, further comprising an image similarity determination unit that designates the image as the teacher data candidate according to the degree of similarity between the image at the time the lesion was displayed and the teacher data group with which the inference model was trained,
     wherein the acquisition unit acquires the image when it has been designated as the teacher data candidate by the image similarity determination unit.
  8.  The data acquisition system according to claim 1, wherein, in a state in which the inference result of the inference unit is not displayed, whether or not the expected relationship held is determined based on the action taken by the doctor.
  9.  A data acquisition method comprising:
     performing inference using an inference model for inferring and displaying a lesion from an image; and
     acquiring image data at the time of the display when the relationship between the result of the inference and an action of a doctor who diagnoses the lesion is not an expected relationship.
  10.  A data acquisition method comprising:
     performing inference using an inference model for displaying an operation guide for an endoscope; and
     acquiring operation-related data at the time the operation guide was displayed when the relationship between the operation guide display and an action of a doctor who operates the endoscope is not an expected relationship.
  11.  A data processing method comprising:
     inputting sensor data provided in a medical device, or a doctor's operation data for the medical device, into an inference model to perform inference; and
     making it possible, when the relationship between the result of the inference and an action of the doctor using the medical device is not an expected relationship, to assign an identification code to the sensor data or the operation data obtained when the expected relationship did not hold.
  12.  A data processing system comprising:
     an inference unit that inputs sensor data provided in a medical device, or a doctor's operation data for the medical device, into an inference model and performs inference; and
     a data processing unit that makes it possible, when the relationship between the result of the inference and an action of the doctor using the medical device is not an expected relationship, to assign an identification code to the sensor data or the operation data obtained when the expected relationship did not hold.
PCT/JP2021/032865 2021-09-07 2021-09-07 Data acquisition system and data acquisition method WO2023037413A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/032865 WO2023037413A1 (en) 2021-09-07 2021-09-07 Data acquisition system and data acquisition method

Publications (1)

Publication Number Publication Date
WO2023037413A1 true WO2023037413A1 (en) 2023-03-16

Family

ID=85507292


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02196334A (en) * 1989-01-26 1990-08-02 Toshiba Corp Medical information processing system
WO2019008726A1 (en) * 2017-07-06 2019-01-10 オリンパス株式会社 Tubular insertion apparatus
WO2019130390A1 (en) * 2017-12-25 2019-07-04 オリンパス株式会社 Recommended operation presenting system, recommended operation presenting control apparatus, and recommended operation presenting control method
US20210090718A1 (en) * 2019-09-25 2021-03-25 Vingroup Joint Stock Company Labeling apparatus and method, and machine learning system using the labeling apparatus
WO2021111879A1 (en) * 2019-12-05 2021-06-10 Hoya株式会社 Learning model generation method, program, skill assistance system, information processing device, information processing method, and endoscope processor
KR102274564B1 (en) * 2018-07-03 2021-07-07 (주) 프로큐라티오 Device for diagnosing cancer using bia data analysis


