WO2021060076A1 - Information processing device, information processing system, information processing method, and program - Google Patents

Information processing device, information processing system, information processing method, and program Download PDF

Info

Publication number
WO2021060076A1
Authority
WO
WIPO (PCT)
Prior art keywords
information processing
unit
feature amount
processing device
image
Prior art date
Application number
PCT/JP2020/034919
Other languages
French (fr)
Japanese (ja)
Inventor
達也 原田
修輔 高濱
優介 黒瀬
喜連川 優
正久 深山
阿部 浩幸
昌伸 北川
明彦 吉澤
Original Assignee
国立大学法人 東京大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 国立大学法人 東京大学 filed Critical 国立大学法人 東京大学
Publication of WO2021060076A1 publication Critical patent/WO2021060076A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • the present invention relates to an information processing device, an information processing system, an information processing method and a program.
  • Non-Patent Document 1 uses a method of dividing WSI into small images called patches and inputting them into the model.
  • Non-Patent Document 1 has a problem that only local information limited to the patch size can be considered.
  • in view of the above circumstances, the present invention provides a technique for identifying an image in consideration of both local features and global features.
  • an information processing device has a cutting unit, a feature amount acquisition unit, a generation unit, and an identification unit.
  • the cutting unit cuts out a plurality of partial images from the image.
  • the feature amount acquisition unit inputs a plurality of partial images cut out by the cutout unit into the feature extraction model and acquires the feature amount.
  • the generation unit generates a feature amount map in which the feature amounts acquired by the feature amount acquisition unit are arranged based on the position information of the corresponding partial image.
  • the identification unit inputs a feature map into the segmentation model and identifies each of the plurality of partial images.
  • FIG. 1 is a diagram showing an example of a system configuration of an information processing system.
  • FIG. 2 is a diagram showing an example of the hardware configuration of the server device.
  • FIG. 3 is a diagram showing an example of the functional configuration of the server device.
  • FIG. 4 is an activity diagram showing an example of information processing of the server device.
  • FIG. 5 is a diagram showing an example of a pipeline.
  • FIG. 6 is a diagram showing an example of arrangement when creating a feature map.
  • FIG. 7 is a diagram showing an example of a model equivalent to the first embodiment.
  • FIG. 8 is a diagram showing an example of performance evaluation.
  • the "part" may include, for example, a combination of hardware resources implemented by a circuit in a broad sense and software information processing that can be concretely realized by these hardware resources. Further, in the first embodiment, various information is handled, and these information are represented by high and low signal values as a bit set of binary numbers composed of 0 or 1, and communication / calculation is executed on a circuit in a broad sense. Can be done.
  • a circuit in a broad sense is a circuit realized by at least appropriately combining a circuit, a circuit, a processor, a memory, and the like. That is, an integrated circuit for a specific application (Application Special Integrated Circuit: ASIC), a programmable logic device (for example, a simple programmable logic device (Simple Programmable Logical Device: SPLD), a composite programmable logic device (Complex Program)) It includes a programmable gate array (Field Programmable Gate Array: FPGA) and the like.
  • FIG. 1 is a diagram showing an example of a system configuration of an information processing system.
  • the information processing system includes a server device 100 and a client device 110.
  • the server device 100 communicates with the client device 110 via the network 120.
  • one client device 110 is connected to the server device 100 via the network 120.
  • a plurality of client devices may be connected to the server device 100 via the network 120.
  • the server device 100 may be configured not as one but as a plurality of server devices, so-called clouds. Further, the server device 100 may be connected to another server device, system, or the like via the network 120.
  • when the server device 100 receives a WSI from the image system based on a request from the client device 110 or the like, it cuts out a plurality of partial images from the WSI, inputs the cut-out partial images into the feature extraction model, and acquires a linear feature vector corresponding to each of the plurality of partial images. Then, the server device 100 generates a feature map in which the feature vectors are arranged based on the position information of the partial images in the WSI, inputs the generated feature map to the segmentation model, and outputs a prediction map. For example, the server device 100 transmits the prediction map to the requesting client device 110.
  • WSI is a digitized image of a tissue fragment and is an example of a pathological image.
  • FIG. 2 is a diagram showing an example of the hardware configuration of the server device 100.
  • the server device 100 includes a control unit 201, a storage unit 202, and a communication unit 203 as a hardware configuration.
  • the control unit 201 is a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or the like, and controls the entire server device 100 or controls image processing.
  • the storage unit 202 is an HDD (Hard Disk Drive), a ROM (Read Only Memory), a RAM (Random Access Memory), or the like, and stores the programs and the data used when the control unit 201 executes processing based on the programs.
  • the control unit 201 executes processing based on the program stored in the storage unit 202, thereby realizing the functional configuration of the server device 100 shown in FIG. 3 and the processing of the activity diagram shown in FIG. 4, both described later.
  • the communication unit 203 is a NIC (Network Interface Card) or the like, and connects the server device 100 to the network 120.
  • the storage unit 202 is an example of a storage medium.
  • FIG. 3 is a diagram showing an example of the functional configuration of the server device 100.
  • the server device 100 (information processing device) includes a cutting unit 301, a feature amount acquisition unit 302, a generation unit 303, an identification unit 304, an output unit 305, and a learning unit 306 as functional configurations.
  • the cutout unit 301 cuts out a partial image from the WSI.
  • WSI is an example of an image.
  • the feature amount acquisition unit 302 inputs a plurality of partial images cut out by the cutout unit 301 into the feature extraction model, and acquires the feature amount.
  • the generation unit 303 generates a feature amount map in which the feature amounts acquired by the feature amount acquisition unit 302 are arranged based on the position information of the corresponding partial image.
  • the identification unit 304 inputs the feature amount map into the segmentation model and identifies each of the plurality of partial images.
  • the output unit 305 outputs the identification result by the identification unit 304.
  • the learning unit 306 learns the networks (the feature extraction model and the segmentation model).
  • FIG. 4 is an activity diagram showing an example of information processing of the server device 100.
  • the cutting unit 301 cuts out a partial image from the WSI received from the image system based on a request from the client device 110 or the like.
  • the feature amount acquisition unit 302 inputs a plurality of partial images cut out by the cutout unit 301 into the feature extraction model, and acquires the feature amount.
  • An example of a feature extraction model is GoogLeNet.
  • the generation unit 303 generates a feature amount map in which the feature amounts acquired by the feature amount acquisition unit 302 are arranged based on the position information of the corresponding partial image.
  • the identification unit 304 inputs the feature amount map into the segmentation model and discriminates whether each of the plurality of partial images is normal or abnormal.
  • An example of a segmentation model is U-Net.
  • the output unit 305 outputs a prediction map in which the partial image identified as normal by the identification unit 304 and the partial image identified as abnormal by the identification unit 304 are colored differently as the result of identification.
  • the output unit 305 transmits the prediction map to the requesting client device 110.
  • the output unit 305 may output the prediction map to the display of the server device 100, if requested. Further, depending on the request, the output unit 305 may output the prediction map to the storage unit 202, the storage unit of the external device, or the like.
  • FIG. 5 is a diagram showing an example of a pipeline.
  • the cutting unit 301 cuts out a partial image from the WSI received from the image system based on a request from the client device 110 or the like.
  • the feature amount acquisition unit 302 inputs a plurality of partial images cut out by the cutout unit 301 into the feature extraction model, and acquires the feature amount.
  • the generation unit 303 generates a feature amount map in which the feature amounts acquired by the feature amount acquisition unit 302 are arranged based on the position information of the corresponding partial image.
  • the identification unit 304 inputs the feature amount map into the segmentation model and discriminates whether each of the plurality of partial images is normal or abnormal.
  • the output unit 305 outputs, as a result of the identification, a prediction map in which the partial image identified as normal by the identification unit 304 and the partial image identified as abnormal by the identification unit 304 are colored differently.
  • as another example, the identification unit 304 and the output unit 305 may be integrated: the identification unit 304 inputs the feature amount map into the segmentation model, identifies whether each of the plurality of partial images is normal or abnormal, and outputs a prediction map in which the partial images identified as normal and the partial images identified as abnormal are colored.
  • the first optimization method, individual learning, is a method in which the learning unit 306 learns the feature extraction model and the segmentation model separately.
  • the learning unit 306 first learns the feature extraction model of the first half.
  • the input is a partial image cut out from the WSI, and the output is a two-dimensional vector representing the probability that the partial image is normal or abnormal.
  • the partial image used for the training data is labeled as normal or abnormal based on the annotation of the doctor, and each element of the output two-dimensional vector is a value from 0 to 1.
  • the learning unit 306 learns the feature extraction model using this training data until the performance becomes stable.
  • the performance of the feature extraction model is obtained by examining the generalization to the test data prepared separately from the training data.
  • the learning unit 306 fixes the weights of the feature extraction model for which learning has been completed, inputs partial images of all the training data and test data, and extracts an intermediate feature amount before the identification result.
  • WSI identification information and coordinate information are attached to each partial image, and the learning unit 306 creates a feature map of the entire WSI by arranging intermediate feature quantities based on the position information.
  • all feature maps input to the segmentation model must have fixed dimensions, but the size of a WSI is assumed to differ from image to image. Therefore, as shown in FIG. 6, a map filled with a sufficient number of 0s is prepared in advance, and the learning unit 306 arranges the feature amounts so that the WSI comes to the center of the map.
  • in FIG. 6, the WSI itself is drawn for ease of understanding, but what is actually arranged is the feature vector of each partial image.
  • as a more specific example, consider an entire WSI with a size of L_h partial images vertically and L_w partial images horizontally, and let the partial image that is i-th from the left and j-th from the top be partial image [i, j].
  • if the fixed-length feature map has a size of L × L partial-image feature amounts (L > L_h, L > L_w), the learning unit 306 arranges the feature amount of partial image [i, j] at the position [i', j'] = [i + ⌊(L − L_w)/2⌋, j + ⌊(L − L_h)/2⌋] ([Equation 1]), thereby placing it in the center of the feature map as a whole.
  • the learning unit 306 creates a correct answer map in which the label information of the partial image is arranged.
  • in a dataset in which all or part of the tissue in the WSI is annotated by a doctor, four classes are defined in the correct answer data: the first is the normal class, the second is the abnormal class, the third is the unlabeled class, which is tissue without annotation, and the fourth is the background class, which is the blank area of the feature map where no partial image exists. The teacher label given to each partial image is entered at the position corresponding to that partial image's feature amount.
  • the learning unit 306 learns the segmentation model.
  • the learning unit 306 performs supervised learning using the generated feature map as the input and the correct answer map as the teacher data.
  • the objects to be correctly identified are two of the four classes, the normal class and the abnormal class, and the unlabeled class and the background class do not need to be learned. Therefore, the learning unit 306 calculates the error for updating the model only for the normal class and the abnormal class, and sets the error to 0 for the other classes.
  • the learning unit 306 uses this model to perform learning until the performance becomes stable. Discrimination performance is evaluated using test data. At that time, the learning unit 306 outputs the identification prediction map of the test data.
  • for the test data, the learning unit 306 uses the same data as that used when evaluating the feature extraction model first.
  • although the model structure of this pipeline is different, it is essentially equivalent to segmenting the entire WSI. Specifically, it can be regarded as a state in which the feature extraction model performs convolution, pooling, and the like to extract partial-image-sized feature amounts, and a prediction map smaller than the original WSI is output.
  • considering that segmenting the entire WSI directly would place severe constraints on GPU memory and processing time, the optimization method by individual learning can be regarded, as shown in FIG. 7, as learning the first half of the segmentation encoder first and then fixing it. That is, segmentation of the entire WSI is realized by fixing the gradient of a part of the model.
  • the second optimization method, batch learning, is a method in which the learning unit 306 learns from the feature extraction model to the segmentation model end-to-end.
  • the structure and learning method of each model are basically the same as those described in individual learning.
  • generally, in a network in which two or more models are arranged in series, the output of the first model is used as the input of the second model, and the error can be propagated from the final output back to the first input just as when learning with a single model.
  • information of hundreds to thousands of partial images is required to extract the feature amount of one WSI. That is, in order to give one input to the segmentation model in the latter half, it is necessary to give thousands of inputs to the feature extraction model in the first half in many cases.
  • to solve this, the set of partial images constituting a WSI is divided into batches of realistic size, and the feature extraction model is updated over a plurality of iterations. That is, the learning unit 306 divides the total number of partial images N into r batches and learns them in order. By updating in several steps in this way, the memory consumption becomes NM/r. The learning unit 306 adjusts r so that NM/r becomes a computable size. This makes it possible to learn the feature extraction model. Not updating the feature extraction model at once requires a structure that temporarily holds information between the two models.
  • in the forward calculation, as in individual learning, the learning unit 306 holds the intermediate feature amounts output from the first-half feature extraction model together with their position information, and inputs them to the segmentation model once all the feature amounts on the feature map are available. With respect to backpropagation, the error against the teacher label is defined at the final output of the segmentation model, so the learning unit 306 must also obtain the gradient of the first-half feature extraction model by differentiating the final error, and the error information calculated from the segmentation model must be retained in order to update the feature extraction model. In order to hold this error information with small memory consumption, the learning unit 306 adopts the following method.
  • ⁇ L / ⁇ x can be calculated. Use ⁇ L / ⁇ x to calculate the error L'of the feature extraction model. If you define it again ⁇ L / ⁇ W can be obtained as.
  • the learning unit 306 calculates and holds ⁇ L / ⁇ x for the latter half of the segmentation model, and takes the inner product of the output x of the model and the held value when learning the feature extraction model. , Can be treated as equivalent to the error of the feature extraction model.
  • the learning unit 306 can update the feature extraction model in this way.
  • the output of the feature extraction model may be up to the intermediate features representing the partial image. Therefore, the final layer of the feature extraction model is removed during batch learning. The following is a summary of the batch learning optimization procedure.
  • Step 1: the learning unit 306 forward-computes the feature extraction model with the partial images as input, and extracts the feature amount for each partial image.
  • The learning unit 306 arranges the extracted feature amounts based on the position information given to the partial images, and creates the feature map and the correct answer map of the entire WSI.
  • Step 2: after the forward calculation of all partial images is completed, the learning unit 306 learns and updates the segmentation model using the feature map and the correct answer map.
  • Step 3: the learning unit 306 calculates the error L' ([Equation 4]) using the error L and the output x of the feature extraction model.
  • Step 4: the learning unit 306 learns the first-half feature extraction model using the calculated L'. By following such a procedure, the two models can be trained end-to-end.
  • since the intermediate layer outputs of the feature extraction model for all partial images are too numerous to retain, only the feature amounts are extracted in Step 1 without retaining each layer's output, and in Step 4 the forward calculation and the error backpropagation calculation are performed again with a realistic batch size. In this way, memory consumption is suppressed and learning at the scale of the entire WSI is realized.
  • the results of the processing described in the first embodiment are shown in FIG. 8.
  • the "Identifier only” line shows the performance evaluation of only GoogleLeNet, which is a feature extraction model.
  • the “Segmentation only” line shows the performance evaluation of the segmentation model only.
  • the line of “optimization method 1: individual learning” shows the performance evaluation of the pipeline of the first embodiment learned by the above-mentioned individual learning.
  • the line of “optimization method 2: batch learning” shows the performance evaluation of the pipeline of the first embodiment learned by the above-mentioned batch learning. For the evaluation, the correct answer rate of identification and the Area Under Curve (AUC) of the Precision-Recall (PR) curve are used. As shown in FIG. 8, both optimization methods exceed “identifier only” and “segmentation only” in correct answer rate and PR-AUC.
  • a pathological image has been used as an example for explanation.
  • the image is not limited to the pathological image.
  • an aerial image may be used as the image, and the aerial image may be identified with high accuracy.
  • the information processing system has been described with a configuration including a server device 100 and a client device 110.
  • the client device 110 may have the functions of the server device 100 by itself.
  • An information processing device that further includes an output unit, and the output unit outputs the result of identification by the identification unit.
  • the information processing device wherein the identification unit inputs the feature amount map into a segmentation model and discriminates whether each partial image is normal or abnormal.
  • the output unit outputs, as the result of the identification, a prediction map in which the partial images identified as normal by the identification unit and the partial images identified as abnormal by the identification unit are colored.
  • An information processing device that further includes a learning unit, wherein the learning unit separately learns the feature extraction model and the segmentation model.
  • An information processing device that further includes a learning unit, wherein the learning unit collectively learns from the feature extraction model to the segmentation model.
  • the information processing device wherein the image is a pathological image.
  • the information processing device, wherein the image is an aerial image.
  • An information processing system having a cutout unit, a feature amount acquisition unit, a generation unit, an identification unit, and an output unit, wherein the cutout unit cuts out a plurality of partial images from an image, the feature amount acquisition unit inputs the plurality of partial images cut out by the cutout unit into the feature extraction model to acquire feature amounts, the generation unit generates a feature amount map in which the acquired feature amounts are arranged based on the position information of the corresponding partial images, the identification unit inputs the feature amount map into the segmentation model and identifies each partial image, and the output unit outputs the result of identification by the identification unit.
  • An information processing method executed by an information processing device, including a first step, a second step, a third step, and a fourth step, wherein in the first step a plurality of partial images are cut out from an image, in the second step the plurality of partial images cut out in the first step are input to the feature extraction model to acquire feature amounts, in the third step a feature amount map is generated in which the feature amounts acquired in the second step are arranged based on the position information of the corresponding partial images, and in the fourth step the feature amount map is input to the segmentation model and each partial image is identified.
  • A program that causes a computer to execute a first step, a second step, a third step, and a fourth step, wherein in the first step a plurality of partial images are cut out from an image, in the second step the plurality of partial images cut out in the first step are input to the feature extraction model to acquire feature amounts, in the third step a feature amount map is generated in which the acquired feature amounts are arranged based on the position information of the corresponding partial images, and in the fourth step the feature amount map is input to the segmentation model and normality or abnormality is identified for each partial image. Of course, the present invention is not limited to these aspects.
  • The program may be provided as a computer-readable non-transitory storage medium storing the above-mentioned program.
  • 100: Server device, 110: Client device, 120: Network, 201: Control unit, 202: Storage unit, 203: Communication unit, 301: Cutout unit, 302: Feature amount acquisition unit, 303: Generation unit, 304: Identification unit, 305: Output unit, 306: Learning unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

One embodiment of the present invention is an information processing device. This information processing device has an extraction unit, a feature amount acquisition unit, a generation unit, and an identification unit. The extraction unit extracts a plurality of partial images from an image. The feature amount acquisition unit inputs the plurality of partial images extracted by the extraction unit to a feature extraction model, and obtains feature amounts. The generation unit generates a feature amount map in which the feature amounts obtained by the feature amount acquisition unit are arranged on the basis of position information about the corresponding partial images. The identification unit inputs the feature amount map to a segmentation model, and identifies each of the plurality of partial images.

Description

Information processing device, information processing system, information processing method, and program
 The present invention relates to an information processing device, an information processing system, an information processing method, and a program.
 A digitized image of a tissue fragment is called a Whole Slide Image (WSI). In order to assist doctors in diagnosis and reduce their burden, research is being conducted on applying deep learning to WSIs to realize automatic diagnosis of pathological images.
 A WSI is characterized by its high resolution. In order to use WSIs for learning a deep model without lowering their resolution, Non-Patent Document 1 uses a method of dividing the WSI into small images called patches and inputting them into the model.
 However, the method of Non-Patent Document 1 has a problem in that only local information limited to the patch size can be considered.
 In view of the above circumstances, the present invention provides a technique for identifying an image in consideration of both local features and global features.
 According to one aspect of the present invention, an information processing device is provided. This information processing device has a cutout unit, a feature amount acquisition unit, a generation unit, and an identification unit. The cutout unit cuts out a plurality of partial images from an image. The feature amount acquisition unit inputs the plurality of partial images cut out by the cutout unit into a feature extraction model and acquires feature amounts. The generation unit generates a feature amount map in which the feature amounts acquired by the feature amount acquisition unit are arranged based on the position information of the corresponding partial images. The identification unit inputs the feature amount map into a segmentation model and identifies each of the plurality of partial images.
 According to one aspect of the present invention, it is possible to provide a technique for identifying an image in consideration of both local features and global features.
 FIG. 1 is a diagram showing an example of the system configuration of an information processing system.
 FIG. 2 is a diagram showing an example of the hardware configuration of the server device.
 FIG. 3 is a diagram showing an example of the functional configuration of the server device.
 FIG. 4 is an activity diagram showing an example of information processing of the server device.
 FIG. 5 is a diagram showing an example of a pipeline.
 FIG. 6 is a diagram showing an example of the arrangement when creating a feature map.
 FIG. 7 is a diagram showing an example of a model equivalent to the first embodiment.
 FIG. 8 is a diagram showing an example of performance evaluation.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings. The various features shown in the embodiments below can be combined with each other.
 In the present specification, a "part" may include, for example, a combination of hardware resources implemented by a circuit in a broad sense and software information processing that can be concretely realized by these hardware resources. Further, although various kinds of information are handled in the first embodiment, these are represented by high and low signal values as a binary bit set composed of 0s and 1s, and communication and calculation can be executed on a circuit in a broad sense.
 A circuit in a broad sense is a circuit realized by at least appropriately combining a circuit, circuitry, a processor, a memory, and the like. That is, it includes an application-specific integrated circuit (ASIC), a programmable logic device (for example, a simple programmable logic device (SPLD) or a complex programmable logic device (CPLD)), a field-programmable gate array (FPGA), and the like.
<Embodiment 1>
1. System Configuration
 FIG. 1 is a diagram showing an example of the system configuration of an information processing system. The information processing system includes a server device 100 and a client device 110. The server device 100 communicates with the client device 110 via the network 120. In FIG. 1, for simplicity of description, one client device 110 is connected to the server device 100 via the network 120. However, a plurality of client devices may be connected to the server device 100 via the network 120. Further, the server device 100 may be configured not as a single device but as a plurality of server devices, a so-called cloud. Further, the server device 100 may be connected to other server devices, systems, or the like via the network 120.
(Outline of processing)
 When the server device 100 receives a WSI from the image system based on a request from the client device 110 or the like, it cuts out a plurality of partial images from the WSI, inputs the cut-out partial images into the feature extraction model, and acquires a linear feature vector corresponding to each of the plurality of partial images. Then, the server device 100 generates a feature map in which the feature vectors are arranged based on the position information of the partial images within the WSI, inputs the generated feature map to the segmentation model, and outputs a prediction map. For example, the server device 100 transmits the prediction map to the requesting client device 110. As described above, a WSI is a digitized image of a tissue fragment and is an example of a pathological image.
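 For illustration only, the following is a minimal sketch of the cutting-out step in Python/NumPy; the tile size, the non-overlapping grid, and the function name are assumptions, not details fixed by the disclosure.

```python
import numpy as np

def cut_out_patches(wsi: np.ndarray, patch: int = 256):
    """Cut a WSI (H x W x 3 array) into non-overlapping partial images.

    Returns a list of (tile, (i, j)) pairs, where (i, j) is the grid
    position (i-th column from the left, j-th row from the top) -- the
    position information later used to arrange the feature map.
    """
    h, w = wsi.shape[:2]
    tiles = []
    for j in range(h // patch):
        for i in range(w // patch):
            tiles.append((wsi[j * patch:(j + 1) * patch,
                              i * patch:(i + 1) * patch], (i, j)))
    return tiles
```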
2. Hardware Configuration
 FIG. 2 is a diagram showing an example of the hardware configuration of the server device 100. The server device 100 includes a control unit 201, a storage unit 202, and a communication unit 203 as its hardware configuration. The control unit 201 is a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or the like, and controls the server device 100 as a whole and controls image processing. The storage unit 202 is an HDD (Hard Disk Drive), a ROM (Read Only Memory), a RAM (Random Access Memory), or the like, and stores the programs and the data used when the control unit 201 executes processing based on the programs. The control unit 201, more specifically the GPU, executes processing based on the programs stored in the storage unit 202, thereby realizing the functional configuration of the server device 100 shown in FIG. 3 and the processing of the activity diagram shown in FIG. 4, both described later. The communication unit 203 is a NIC (Network Interface Card) or the like, and connects the server device 100 to the network 120. The storage unit 202 is an example of a storage medium.
3. Functional Configuration
 FIG. 3 is a diagram showing an example of the functional configuration of the server device 100. The server device 100 (information processing device) includes, as its functional configuration, a cutout unit 301, a feature amount acquisition unit 302, a generation unit 303, an identification unit 304, an output unit 305, and a learning unit 306. The cutout unit 301 cuts out partial images from the WSI. The WSI is an example of an image. The feature amount acquisition unit 302 inputs the plurality of partial images cut out by the cutout unit 301 into the feature extraction model, and acquires feature amounts. The generation unit 303 generates a feature amount map in which the feature amounts acquired by the feature amount acquisition unit 302 are arranged based on the position information of the corresponding partial images. The identification unit 304 inputs the feature amount map into the segmentation model and identifies each of the plurality of partial images. The output unit 305 outputs the identification result by the identification unit 304. The learning unit 306 learns the networks.
4. Information Processing
 FIG. 4 is an activity diagram showing an example of information processing of the server device 100.
 In A401, the cutout unit 301 cuts out partial images from the WSI received from the image system based on a request from the client device 110 or the like.
 In A402, the feature amount acquisition unit 302 inputs the plurality of partial images cut out by the cutout unit 301 into the feature extraction model, and acquires feature amounts. An example of a feature extraction model is GoogLeNet.
 In A403, the generation unit 303 generates a feature amount map in which the feature amounts acquired by the feature amount acquisition unit 302 are arranged based on the position information of the corresponding partial images.
 In A404, the identification unit 304 inputs the feature amount map into the segmentation model and identifies whether each of the plurality of partial images is normal or abnormal. An example of a segmentation model is U-Net.
 In A405, the output unit 305 outputs, as the result of identification, a prediction map in which the partial images identified as normal by the identification unit 304 and the partial images identified as abnormal by the identification unit 304 are colored differently. In the example of the first embodiment, the output unit 305 transmits the prediction map to the requesting client device 110. As another example, when the server device 100 has a display unit such as a display, the output unit 305 may output the prediction map to the display of the server device 100 upon request. Further, depending on the request, the output unit 305 may output the prediction map to the storage unit 202, a storage unit of an external device, or the like.
5. Pipeline
 FIG. 5 is a diagram showing an example of the pipeline.
 The cutout unit 301 cuts out partial images from the WSI received from the image system based on a request from the client device 110 or the like. The feature amount acquisition unit 302 inputs the plurality of partial images cut out by the cutout unit 301 into the feature extraction model, and acquires feature amounts. The generation unit 303 generates a feature amount map in which the feature amounts acquired by the feature amount acquisition unit 302 are arranged based on the position information of the corresponding partial images. The identification unit 304 inputs the feature amount map into the segmentation model and identifies whether each of the plurality of partial images is normal or abnormal. The output unit 305 outputs, as the result of identification, a prediction map in which the partial images identified as normal by the identification unit 304 and the partial images identified as abnormal by the identification unit 304 are colored differently.
 As another example, the identification unit 304 and the output unit 305 may be integrated: the identification unit 304 inputs the feature amount map into the segmentation model, identifies whether each of the plurality of partial images is normal or abnormal, and outputs a prediction map in which the partial images identified as normal and the partial images identified as abnormal are colored.
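 As an illustrative sketch only, the pipeline above can be written as follows in PyTorch-style Python; `feature_extractor` (a GoogLeNet-like CNN returning one feature vector per partial image), `segmentation_model` (a U-Net-like network), and the map size `L` are assumptions rather than interfaces fixed by the disclosure.

```python
import torch

@torch.no_grad()
def run_pipeline(patches, positions, feature_extractor, segmentation_model, L=128):
    """patches: (N, 3, H, W) tensor; positions: list of (i, j) grid coords."""
    feats = feature_extractor(patches)            # (N, C) linear feature vectors
    fmap = torch.zeros(1, feats.shape[1], L, L)   # zero-filled fixed-size map
    lw = max(i for i, _ in positions) + 1         # WSI width in partial images
    lh = max(j for _, j in positions) + 1         # WSI height in partial images
    oi, oj = (L - lw) // 2, (L - lh) // 2         # centering offsets ([Equation 1])
    for f, (i, j) in zip(feats, positions):
        fmap[0, :, oj + j, oi + i] = f            # arrange by position information
    logits = segmentation_model(fmap)             # (1, num_classes, L, L)
    return logits.argmax(dim=1)                   # prediction map: one class per cell
```

 A prediction map for display can then be made by mapping the normal and abnormal class indices to two different colors.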
6. Optimization Methods
 Two learning methods for optimizing the networks by the learning unit 306 are described below.
(1) Individual learning
 The first optimization method is a method in which the learning unit 306 learns the feature extraction model and the segmentation model separately. In this method, the learning unit 306 first learns the feature extraction model of the first half. The input is a partial image cut out from the WSI, and the output is a two-dimensional vector representing the probability that the partial image is normal or abnormal. The partial images used as training data are given normal/abnormal teacher labels based on a doctor's annotations, and each element of the output two-dimensional vector takes a value from 0 to 1. The learning unit 306 learns the feature extraction model using this training data until its performance becomes stable. Here, the performance of the feature extraction model is determined by examining its generalization to test data prepared separately from the training data.
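 A minimal sketch of this first training stage (a PyTorch classifier with a two-way output head; the optimizer and loss are illustrative choices, not specified by the disclosure):

```python
import torch
import torch.nn.functional as F

def train_patch_classifier(model, loader, epochs=10, lr=1e-3):
    """Supervised training of the first-half feature extraction model.

    loader yields (patch, label) pairs with label 0 = normal, 1 = abnormal,
    from the doctor's annotations. The model outputs a 2-dimensional vector
    (logits here, trained with cross-entropy so that softmax probabilities
    lie between 0 and 1).
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for patch, label in loader:
            loss = F.cross_entropy(model(patch), label)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```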
 Next, the learning unit 306 fixes the weights of the feature extraction model whose learning has been completed, inputs the partial images of all the training data and test data, and extracts the intermediate feature amounts immediately before the identification result. WSI identification information and coordinate information are attached to each partial image, and the learning unit 306 creates a feature map of the entire WSI by arranging the intermediate feature amounts based on this position information. When input to the segmentation model, all feature maps must have fixed dimensions, but the size of a WSI is assumed to differ from image to image. Therefore, as shown in FIG. 6, a map filled with a sufficient number of 0s is prepared in advance, and the learning unit 306 arranges the feature amounts so that the WSI comes to the center of the map. In FIG. 6, the WSI itself is drawn for ease of understanding, but what is actually arranged is the feature vector of each partial image. As a more specific example, consider an entire WSI with a size of L_h partial images vertically and L_w partial images horizontally, and let the partial image that is i-th from the left and j-th from the top be partial image [i, j]. If the fixed-length feature map has a size of L × L partial-image feature amounts (L > L_h, L > L_w), the learning unit 306 arranges the feature amount of partial image [i, j] at the position
[Equation 1]  [i', j'] = [i + ⌊(L − L_w)/2⌋, j + ⌊(L − L_h)/2⌋]
thereby placing the WSI in the center of the feature map as a whole.
 At the same time, the learning unit 306 creates a correct answer map in which the label information of the partial images is arranged. Assuming a dataset in which all or part of the tissue in the WSI is annotated by a doctor, four classes are defined in the correct answer data: the first is the normal class, the second is the abnormal class, the third is the unlabeled class, which is tissue without annotation, and the fourth is the background class, which is the blank area of the feature map where no partial image exists. The teacher label given to each partial image is entered at the position corresponding to that partial image's feature amount.
 After that, the learning unit 306 learns the segmentation model. The learning unit 306 performs supervised learning using the generated feature map as the input and the correct answer map as the teacher data. At this time, the targets to be correctly identified are two of the four classes, the normal class and the abnormal class; the unlabeled class and the background class do not need to be learned. Therefore, the learning unit 306 calculates the error for updating the model only for the normal class and the abnormal class, and sets the error to 0 for the other classes. The learning unit 306 performs learning with this model until the performance becomes stable. The identification performance is evaluated using test data, and at that time the learning unit 306 outputs the identification prediction map of the test data. For the test data, the learning unit 306 uses the same data as that used when evaluating the feature extraction model first.
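 A minimal sketch of this masked error in PyTorch (the class indices and the use of `ignore_index` are illustrative assumptions; the disclosure only specifies that the error is set to 0 for the unlabeled and background classes):

```python
import torch
import torch.nn.functional as F

# Illustrative class indices for the correct answer map.
NORMAL, ABNORMAL, UNLABELED, BACKGROUND = 0, 1, 2, 3

def masked_segmentation_loss(logits, answer_map):
    """Cross-entropy computed over normal/abnormal cells only.

    logits:     (B, 2, L, L) scores for the normal and abnormal classes.
    answer_map: (B, L, L) integer labels in {0, 1, 2, 3}.
    Cells labeled UNLABELED or BACKGROUND contribute zero error.
    """
    target = answer_map.clone()
    target[(target == UNLABELED) | (target == BACKGROUND)] = -100
    return F.cross_entropy(logits, target, ignore_index=-100)
```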
 Looking at this pipeline as a whole, although the model structure is different, it is essentially equivalent to segmenting the entire WSI. Specifically, it can be regarded as a state in which the feature extraction model performs convolution, pooling, and the like to extract partial-image-sized feature amounts, and a prediction map smaller than the original WSI is output. Considering that attempting to segment the entire WSI directly would place severe constraints on the GPU memory and processing time of the server device 100, the optimization method by individual learning can be regarded, as shown in FIG. 7, as learning the first half of the segmentation encoder first and then fixing it. That is, segmentation of the entire WSI is realized by fixing the gradient of a part of the model.
(2) Batch learning
 The second optimization method is a method in which the learning unit 306 learns from the feature extraction model to the segmentation model end-to-end. The structure and learning method of each model are basically the same as those described for individual learning.
 Generally, in a network in which two or more models are arranged in series, the output of the first model is used as the input of the second model, and the error can be propagated from the final output back to the first input just as when learning with a single model. However, under the conditions of the first embodiment, information from hundreds to thousands of partial images is required to extract the feature amounts of one WSI. That is, in order to give one input to the segmentation model in the latter half, it is often necessary to give thousands of inputs to the feature extraction model in the first half. If thousands of partial images were input at once, all of their intermediate layer outputs would have to be stored for error backpropagation. Specifically, if N is the number of partial images cut out from one WSI and M is the memory consumed by the feature extraction model for learning per partial image, then learning one WSI with the segmentation model would consume NM of memory in the feature extraction model; this is roughly equivalent to the memory required to learn the entire WSI with the segmentation model, and inputting one WSI's worth of partial images at once is not realistic given the memory capacity.
 To solve this, the set of partial images constituting a WSI is divided into batches of realistic size, and the feature extraction model is updated over a plurality of iterations. That is, the learning unit 306 divides the total number of partial images N into r batches and learns them in order. By updating in several steps in this way, the memory consumption becomes NM/r. The learning unit 306 adjusts r so that NM/r becomes a computable size. This makes it possible to learn the feature extraction model.
 Not updating the feature extraction model at once requires a structure that temporarily holds information between the two models. In the forward calculation, as in individual learning, the learning unit 306 holds the intermediate feature amounts output from the first-half feature extraction model together with their position information, and inputs them to the segmentation model once all the feature amounts on the feature map are available. With respect to backpropagation, the error against the teacher label is defined at the final output of the segmentation model, so the learning unit 306 must also obtain the gradient of the first-half feature extraction model by differentiating the final error, and the error information calculated from the segmentation model must be retained in order to update the feature extraction model. In order to hold this error information with small memory consumption, the learning unit 306 adopts the following method.
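 For illustration, a sketch of the forward pass split into r sub-batches, with only the intermediate feature amounts (not the per-layer activations) held between the two models; the cache layout and function name are assumptions:

```python
import torch

def forward_in_batches(patches, positions, feature_extractor, r, L, C):
    """Run the first-half model over N partial images in r sub-batches.

    Only the (N, C) feature vectors are kept, so the first half's peak
    memory is roughly NM/r instead of NM. positions are assumed to be
    already offset so that the WSI is centered in the L x L map.
    """
    fmap = torch.zeros(1, C, L, L)
    n = patches.shape[0]
    step = (n + r - 1) // r
    for s in range(0, n, step):
        with torch.no_grad():                  # discard layer activations
            feats = feature_extractor(patches[s:s + step])
        for f, (i, j) in zip(feats, positions[s:s + step]):
            fmap[0, :, j, i] = f
    return fmap
```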
 When an error L is defined for the model, the formula for updating a weight W of the feature extraction model is
[Equation 2]  W ← W − η (∂L/∂W)
where η is the learning coefficient.
 If ∂L/∂W can be obtained, the weights can be updated and learned; however, for the W of the feature extraction model, ∂L/∂W cannot simply be calculated because the models are separate.
 Here, letting x denote the intermediate feature amount that is the output of the first-half feature extraction model, the chain rule allows ∂L/∂W to be written as
[Equation 3]  ∂L/∂W = (∂L/∂x)(∂x/∂W)
 Since x is also the input of the segmentation model, ∂L/∂x can be calculated. Using ∂L/∂x, if the error L' of the feature extraction model is redefined as
[Equation 4]  L' = (∂L/∂x) · x
then ∂L/∂W can be obtained as
[Equation 5]  ∂L'/∂W = (∂L/∂x)(∂x/∂W) = ∂L/∂W
 That is, the learning unit 306 calculates and holds ∂L/∂x for the latter-half segmentation model, and when learning the feature extraction model, takes the inner product of the model's output x and the held value; this can be treated as equivalent to the error of the feature extraction model. The learning unit 306 can update the feature extraction model in this way.
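 As a tiny self-contained check of this identity (pure PyTorch, purely illustrative): the gradient of the redefined error L' = (∂L/∂x) · x with respect to the first model's weights matches the true ∂L/∂W.

```python
import torch

w = torch.randn(3, requires_grad=True)        # weight of the "first" model
u = torch.randn(3)
x = w * 2.0                                   # first model's output
L = (u * x).pow(2).sum()                      # stand-in for the final error
dLdx = torch.autograd.grad(L, x, retain_graph=True)[0]   # held value dL/dx
Lp = (dLdx.detach() * (w * 2.0)).sum()        # L' = <dL/dx, x>, x recomputed
gW_true = torch.autograd.grad(L, w, retain_graph=True)[0]
gW_surr = torch.autograd.grad(Lp, w)[0]
assert torch.allclose(gW_true, gW_surr)       # dL'/dW == dL/dW
```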
 When the inner product of the segmentation model's gradient and the feature extraction model's intermediate feature amounts is considered as the error in this way, the output of the feature extraction model only needs to go up to the intermediate features representing the partial image. Therefore, the final layer of the feature extraction model is removed during batch learning.
 The batch learning optimization procedure is summarized as follows.
 Step 1: The learning unit 306 forward-computes the feature extraction model with the partial images as input, and extracts the feature amount for each partial image. The learning unit 306 arranges the extracted feature amounts based on the position information given to the partial images, and creates the feature map and the correct answer map of the entire WSI.
 Step 2: After the forward calculation of all partial images is completed, the learning unit 306 learns and updates the segmentation model using the feature map and the correct answer map.
 Step 3: The learning unit 306 calculates [Equation 4] using the error L and the output x of the feature extraction model.
 Step 4: The learning unit 306 learns the first-half feature extraction model using the calculated L'.
 By following such a procedure, the two models can be trained end-to-end. Also, since the intermediate layer outputs of the feature extraction model for all partial images are too numerous to retain, only the feature amounts are extracted in Step 1 without retaining each layer's output, and in Step 4 the forward calculation and the error backpropagation calculation are performed again with a realistic batch size. In this way, memory consumption is suppressed and learning at the scale of the entire WSI is realized.
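 Putting Steps 1 to 4 together, a condensed sketch of one end-to-end iteration per WSI (PyTorch; the helper functions reuse the sketches above and, like them, are assumptions rather than the disclosed implementation):

```python
import torch

def train_one_wsi(patches, positions, answer_map, feature_extractor,
                  segmentation_model, opt_f, opt_s, r, L, C):
    # Step 1: forward-compute features only, arranged into the feature map
    fmap = forward_in_batches(patches, positions, feature_extractor, r, L, C)
    fmap.requires_grad_(True)
    # Step 2: learn and update the segmentation model on the full map
    loss = masked_segmentation_loss(segmentation_model(fmap), answer_map)
    opt_s.zero_grad()
    loss.backward()                   # also fills fmap.grad = dL/dx
    opt_s.step()
    # Steps 3-4: re-run the extractor in r sub-batches on L' = <dL/dx, x>
    n = patches.shape[0]
    step = (n + r - 1) // r
    for s in range(0, n, step):
        opt_f.zero_grad()
        feats = feature_extractor(patches[s:s + step])
        g = torch.stack([fmap.grad[0, :, j, i]
                         for (i, j) in positions[s:s + step]])
        (feats * g).sum().backward()  # gradient equals dL/dW
        opt_f.step()
```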
FIG. 8 shows the results of the processing described in the first embodiment.
The "classifier only" row shows the performance of GoogLeNet, the feature extraction model, on its own. The "segmentation only" row shows the performance of the segmentation model on its own. The "optimization method 1: individual learning" row shows the performance of the pipeline of the first embodiment trained with the individual learning described above. The "optimization method 2: batch learning" row shows the performance of the pipeline of the first embodiment trained with the batch learning described above. The evaluation uses the identification accuracy (correct-answer rate) and the Area Under the Curve (AUC) of the Precision-Recall (PR) curve.
As shown in FIG. 8, "optimization method 1: individual learning" and "optimization method 2: batch learning" of the first embodiment outperform both "classifier only" and "segmentation only" in both accuracy and PR-AUC.
That is, according to the first embodiment, highly accurate identification can be performed within the hardware constraints of GPU memory by providing a technique that identifies an image in consideration of both local features and global features.
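For reference, the two figures reported in FIG. 8 can be computed as in the following sketch, which assumes scikit-learn; the per-patch labels and scores are illustrative placeholders, not the publication's data:

```python
from sklearn.metrics import accuracy_score, auc, precision_recall_curve

# Illustrative per-patch ground truth (0 = normal, 1 = abnormal) and
# predicted abnormality scores; not values from the publication.
y_true = [0, 0, 1, 1, 1, 0]
y_score = [0.10, 0.40, 0.80, 0.35, 0.90, 0.20]

accuracy = accuracy_score(y_true, [s >= 0.5 for s in y_score])

precision, recall, _ = precision_recall_curve(y_true, y_score)
pr_auc = auc(recall, precision)   # area under the Precision-Recall curve

print(f"accuracy = {accuracy:.3f}, PR-AUC = {pr_auc:.3f}")
```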
<Modification examples>
In the first embodiment, a pathological image was used as the example. However, the image is not limited to a pathological image. For example, an aerial image may be used as the image, and by executing the processing described above, highly accurate identification can be achieved for aerial images as well.
In the first embodiment, the information processing system was described as a configuration including the server device 100 and the client device 110. However, for example, the client device 110 may itself have the functions of the server device 100.
The present invention may be provided in each of the following aspects.
The information processing device, further comprising an output unit, wherein the output unit outputs the result of identification by the identification unit.
The information processing device, wherein the identification unit inputs the feature amount map into the segmentation model and identifies whether each partial image is normal or abnormal.
The information processing device, wherein the output unit outputs, as the result of the identification, a prediction map in which the partial images identified as normal by the identification unit and the partial images identified as abnormal by the identification unit are colored.
The information processing device, further comprising a learning unit, wherein the learning unit trains the feature extraction model and the segmentation model separately.
The information processing device, further comprising a learning unit, wherein the learning unit collectively trains the models from the feature extraction model through the segmentation model.
The information processing device, wherein the image is a pathological image.
The information processing device, wherein the image is an aerial image.
An information processing system comprising a cutout unit, a feature amount acquisition unit, a generation unit, an identification unit, and an output unit, wherein the cutout unit cuts out a plurality of partial images from an image, the feature amount acquisition unit inputs the plurality of partial images cut out by the cutout unit into a feature extraction model and acquires feature amounts, the generation unit generates a feature amount map in which the feature amounts acquired by the feature amount acquisition unit are arranged based on position information of the corresponding partial images, the identification unit inputs the feature amount map into a segmentation model and identifies each partial image, and the output unit outputs the result of identification by the identification unit.
An information processing method executed by an information processing device, the method comprising a first step, a second step, a third step, and a fourth step, wherein the first step cuts out a plurality of partial images from an image, the second step inputs the plurality of partial images cut out in the first step into a feature extraction model and acquires feature amounts, the third step generates a feature amount map in which the feature amounts acquired in the second step are arranged based on position information of the corresponding partial images, and the fourth step inputs the feature amount map into a segmentation model and identifies whether each partial image is normal or abnormal.
A program causing a computer to execute a first step, a second step, a third step, and a fourth step, wherein the first step cuts out a plurality of partial images from an image, the second step inputs the plurality of partial images cut out in the first step into a feature extraction model and acquires feature amounts, the third step generates a feature amount map in which the feature amounts acquired in the second step are arranged based on position information of the corresponding partial images, and the fourth step inputs the feature amount map into a segmentation model and identifies whether each partial image is normal or abnormal.
Of course, the aspects are not limited to the above.
For example, the above-described program may be provided as a computer-readable non-transitory storage medium storing the program.
Finally, various embodiments according to the present invention have been described, but these are presented as examples and are not intended to limit the scope of the invention. The novel embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the gist of the invention. The embodiments and their modifications are included in the scope and gist of the invention, and are likewise included in the invention described in the claims and its equivalents.
100: Server device
110: Client device
120: Network
201: Control unit
202: Storage unit
203: Communication unit
301: Cutout unit
302: Feature amount acquisition unit
303: Generation unit
304: Identification unit
305: Output unit
306: Learning unit

Claims (11)

  1.  An information processing device comprising:
     a cutout unit, a feature amount acquisition unit, a generation unit, and an identification unit, wherein
     the cutout unit cuts out a plurality of partial images from an image,
     the feature amount acquisition unit inputs the plurality of partial images cut out by the cutout unit into a feature extraction model and acquires feature amounts,
     the generation unit generates a feature amount map in which the feature amounts acquired by the feature amount acquisition unit are arranged based on position information of the corresponding partial images, and
     the identification unit inputs the feature amount map into a segmentation model and identifies each of the plurality of partial images.
  2.  The information processing device according to claim 1, further comprising:
     an output unit, wherein
     the output unit outputs the result of identification by the identification unit.
  3.  The information processing device according to claim 2, wherein
     the identification unit inputs the feature amount map into the segmentation model and identifies whether each partial image is normal or abnormal.
  4.  The information processing device according to claim 3, wherein
     the output unit outputs, as the result of the identification, a prediction map in which the partial images identified as normal by the identification unit and the partial images identified as abnormal by the identification unit are colored.
  5.  The information processing device according to any one of claims 1 to 4, further comprising:
     a learning unit, wherein
     the learning unit trains the feature extraction model and the segmentation model separately.
  6.  The information processing device according to any one of claims 1 to 4, further comprising:
     a learning unit, wherein
     the learning unit collectively trains the models from the feature extraction model through the segmentation model.
  7.  The information processing device according to any one of claims 1 to 6, wherein
     the image is a pathological image.
  8.  The information processing device according to claim 1, wherein
     the image is an aerial image.
  9.  An information processing system comprising:
     a cutout unit, a feature amount acquisition unit, a generation unit, an identification unit, and an output unit, wherein
     the cutout unit cuts out a plurality of partial images from an image,
     the feature amount acquisition unit inputs the plurality of partial images cut out by the cutout unit into a feature extraction model and acquires feature amounts,
     the generation unit generates a feature amount map in which the feature amounts acquired by the feature amount acquisition unit are arranged based on position information of the corresponding partial images,
     the identification unit inputs the feature amount map into a segmentation model and identifies each partial image, and
     the output unit outputs the result of identification by the identification unit.
  10.  An information processing method executed by an information processing device, the method comprising:
     a first step, a second step, a third step, and a fourth step, wherein
     the first step cuts out a plurality of partial images from an image,
     the second step inputs the plurality of partial images cut out in the first step into a feature extraction model and acquires feature amounts,
     the third step generates a feature amount map in which the feature amounts acquired in the second step are arranged based on position information of the corresponding partial images, and
     the fourth step inputs the feature amount map into a segmentation model and identifies whether each partial image is normal or abnormal.
  11.  A program causing a computer to execute:
     a first step, a second step, a third step, and a fourth step, wherein
     the first step cuts out a plurality of partial images from an image,
     the second step inputs the plurality of partial images cut out in the first step into a feature extraction model and acquires feature amounts,
     the third step generates a feature amount map in which the feature amounts acquired in the second step are arranged based on position information of the corresponding partial images, and
     the fourth step inputs the feature amount map into a segmentation model and identifies whether each partial image is normal or abnormal.
PCT/JP2020/034919 2019-09-27 2020-09-15 Information processing device, information processing system, information processing method, and program WO2021060076A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-176544 2019-09-27
JP2019176544A JP7544338B2 (en) 2019-09-27 2019-09-27 Information processing device, information processing system, information processing method, and program

Publications (1)

Publication Number Publication Date
WO2021060076A1 true WO2021060076A1 (en) 2021-04-01

Family

ID=75166983

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/034919 WO2021060076A1 (en) 2019-09-27 2020-09-15 Information processing device, information processing system, information processing method, and program

Country Status (2)

Country Link
JP (1) JP7544338B2 (en)
WO (1) WO2021060076A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10321728B1 (en) * 2018-04-20 2019-06-18 Bodygram, Inc. Systems and methods for full body measurements extraction
US20190206056A1 (en) * 2017-12-29 2019-07-04 Leica Biosystems Imaging, Inc. Processing of histology images with a convolutional neural network to identify tumors
JP2019153235A (en) * 2018-03-06 2019-09-12 株式会社東芝 Object area identification apparatus, object area identification method, and program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02195934A (en) * 1989-01-26 1990-08-02 Toshiba Corp Medical information processing system
JP5100717B2 (en) * 2009-07-30 2012-12-19 キヤノン株式会社 Image processing apparatus and image processing method
JP6330385B2 (en) * 2014-03-13 2018-05-30 オムロン株式会社 Image processing apparatus, image processing method, and program
JP6968177B2 (en) 2016-12-22 2021-11-17 ベンタナ メディカル システムズ, インコーポレイテッド Computer scoring based on primary staining and immunohistochemical images


Also Published As

Publication number Publication date
JP2021056571A (en) 2021-04-08
JP7544338B2 (en) 2024-09-04


Legal Events

121 (EP): the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 20870113; Country of ref document: EP; Kind code of ref document: A1
NENP: Non-entry into the national phase. Ref country code: DE
122 (EP): PCT application non-entry in European phase. Ref document number: 20870113; Country of ref document: EP; Kind code of ref document: A1