WO2022249892A1 - Feature extraction device, feature extraction method, program, and information recording medium - Google Patents
Feature extraction device, feature extraction method, program, and information recording medium
- Publication number
- WO2022249892A1 (PCT/JP2022/020038)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Detecting organic movements or changes, e.g. tumours, cysts, swellings
- A61B8/0833—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures
- A61B8/085—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Definitions
- the present invention relates to a feature extraction device, a feature extraction method, a program, and an information recording medium for extracting features of an object from a plurality of images of the object.
- Patent Document 1 discloses a technique in which a target image in which a target is photographed and one or more attribute parameters associated with the target are received, and, when the target is classified by a neural network, each element of a given feature map is combined with the one or more received attribute parameters.
- a target site resected from a subject is used as a specimen, and a pathological photograph of the specimen is used.
- in the pathological photograph, the affected region is narrowed down from the other (normal) regions and enclosed.
- the Gleason score, which indicates the degree of malignancy, is determined by further examining the histological morphology of the cancer.
- An object of the present invention is to solve the above problems and to provide a feature extraction device, a feature extraction method, a program, and an information recording medium for extracting features of an object from a plurality of images of the object.
- a feature extraction device includes: an image processing unit that, when an image is input, calculates, by an image model, a likelihood that the input image belongs to a first image class and a feature parameter of the input image; and a feature processing unit that, when an image group is input, inputs the images included in the input image group to the image processing unit to calculate their likelihoods and feature parameters, selects a predetermined number of representative images from the input image group based on the calculated likelihoods, and outputs the feature parameters calculated for the selected predetermined number of representative images as the features of the target.
- a feature extraction device for extracting features of an object from a plurality of images of the object.
- FIG. 2 is a flow chart showing the control flow of the learning process for training the image model.
- FIG. 3 is a flow chart showing the control flow of the learning process for training the classification model.
- FIG. 4 is a flow chart showing the control flow of the image processing for obtaining feature information from an image group.
- FIG. 5 is a flow chart showing the control flow of the feature extraction processing.
- FIG. 6 is a flow chart showing the control flow of the classification processing.
- FIG. 7 is a graph of the experimental results of classification according to the conventional method.
- FIG. 8 is a graph of the experimental results of classification according to the present embodiment.
- FIG. 9 is an explanatory diagram superimposing, for comparison, the graph of the experimental results of classification according to the present embodiment and the graph of those according to the conventional method.
- the feature extraction device is typically implemented by a computer executing a program.
- the computer is connected to various output devices and input devices, and exchanges information with these devices.
- The program run on the computer can be distributed and sold from a server to which the computer is communicatively connected, or it can be recorded on a non-transitory information recording medium such as a CD-ROM (Compact Disk Read Only Memory), flash memory, or EEPROM (Electrically Erasable Programmable ROM), and that information recording medium can then be distributed and sold.
- The program is installed on a non-transitory information recording medium such as a hard disk, solid state drive, flash memory, or EEPROM of the computer, whereupon the computer realizes the information processing apparatus according to the present embodiment.
- The computer's CPU (Central Processing Unit) executes the program under the control of the OS (Operating System), and various information required in the course of program execution can be temporarily recorded in RAM (Random Access Memory).
- the computer preferably has a GPU (Graphics Processing Unit) for performing various image processing calculations at high speed.
- By using a GPU together with libraries such as TensorFlow and PyTorch, it becomes possible to use learning functions and classification functions in various artificial-intelligence processes under the control of the CPU.
- the program can also be used as material for generating wiring diagrams, timing charts, and the like of electronic circuits.
- In this case, an electronic circuit that satisfies the specifications defined in the program is configured by an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit), and the electronic circuit functions as a dedicated device that fulfills the functions defined in the program, thereby realizing the information processing apparatus of this embodiment.
- FIG. 1 is an explanatory diagram showing a schematic configuration of a feature extraction device according to an embodiment of the present invention.
- As shown in FIG. 1, the feature extraction device 101 includes an image processing unit 111 and a feature processing unit 112.
- the feature extraction device 101 can also include a classification processing unit 113 as an optional element.
- the image processing unit 111 refers to the image model 151.
- the feature extraction device 101 may comprise an image training unit 131 for training the image model 151 as an optional element. For example, if an image model 151 that has already been trained is used, the image training unit 131 can be omitted.
- the feature processing unit 112 refers to the classification model 153.
- the feature extraction device 101 can include a classification training unit 133 for training the classification model 153 as an optional element. For example, if a trained model is used as the classification model 153, the classification training unit 133 can be omitted.
- the image training unit 131 and the classification training unit 133 can be implemented as devices independent of the feature extraction device 101.
- The learned parameters constituting the trained image model 151 and classification model 153, together with an inference program that uses those learned parameters, can be handed over from the image training unit 131 and the classification training unit 133 to the feature extraction device 101 via an information recording medium, a computer communication network, or the like.
- Updating the learning parameters of models such as the image model 151 and the classification model 153 may also be referred to as training, learning, or updating the model.
- The image processing unit 111 uses the image model 151 to calculate the likelihood that an input image belongs to the first image class and the feature parameter of the input image. Therefore, when a plurality of images are input to the image processing unit 111 sequentially (or in parallel, or collectively), the image processing unit 111 calculates their likelihoods and feature parameters sequentially (or in parallel, or collectively).
- the image model 151 various models such as a model related to a deep convolutional neural network can be adopted.
- Since the image processing unit 111 calculates the likelihood from a vector of pixel values, the calculation can be regarded as reducing the dimensionality of that vector. If the image model 151 relates to a neural network or the like, information passes through multiple layers in the course of this dimensionality reduction, so the information output at an intermediate layer can be used as the feature parameter. That is, an intermediate vector partway through the dimensionality reduction in the image model 151 can be used as the feature parameter.
- the likelihood associated with the image can be used as it is as a feature parameter. That is, the likelihood, which is the final result of dimensionality reduction in the image model 151, can be used as a feature parameter.
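The two choices of feature parameter described above (an intermediate vector from the middle of the dimensionality reduction, or the final likelihood) can be sketched as follows. The stand-in model below is hypothetical: a two-layer network with random, untrained weights, and the input size and 8-dimensional hidden layer are illustrative assumptions, not values from the embodiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the image model 151: a two-layer network that
# reduces a 256x256 pixel vector to a single likelihood.  The hidden-layer
# activation plays the role of the intermediate vector (feature parameter);
# the sigmoid output is the likelihood.  Weights are random and untrained.
W1 = rng.normal(scale=0.01, size=(256 * 256, 8))   # pixels -> 8-dim hidden layer
W2 = rng.normal(scale=0.1, size=(8, 1))            # hidden layer -> 1 logit

def image_model(image):
    """Return (likelihood, feature parameter) for one image."""
    x = image.reshape(-1)                    # flatten the 256x256 pixels
    hidden = np.tanh(x @ W1)                 # intermediate vector, 8-dim
    likelihood = 1.0 / (1.0 + np.exp(-(hidden @ W2)[0]))
    return likelihood, hidden

p, feat = image_model(rng.random((256, 256)))
```

Either `feat` (the intermediate vector) or `p` itself (the final result of the dimensionality reduction) can then serve as the feature parameter of the image.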
- the feature processing unit 112 outputs feature information of the image group.
- the feature processing unit 112 inputs images included in the input image group to the image processing unit 111 to calculate likelihood and feature parameters.
- the feature processing unit 112 selects a predetermined number of representative images from the input image group based on the calculated likelihood.
- the feature processing unit 112 outputs feature parameters calculated for the predetermined number of selected representative images as feature information of the image group.
- the number of representative images selected for one image group can be any number of 1 or more.
- When one representative image is selected, the feature information is the feature parameter of that representative image; if the feature parameter is the likelihood itself, the feature information is a scalar value consisting of that likelihood.
- When three representative images are selected and the feature parameter is the likelihood itself, the feature information is a vector value obtained by arranging the three likelihoods.
- In general, if N representative images are selected and the feature parameter is M-dimensional, the feature information output from the feature processing unit 112 for one image group is an N×M-dimensional vector.
- the simplest method for selecting representative images based on likelihood is to select a predetermined number of representative images in descending order of likelihood.
- the feature information emphasizes the features of the image group that are suitable for the first image class.
- A further conceivable method is to select the predetermined number of representative images in descending order of the absolute value of the difference between the likelihood and a predetermined reference value. For example, if the likelihood that an image belongs to the first image class is a value between 0 and 1, the predetermined reference value can be 0.5, the boundary value for determining whether or not the image belongs to the first image class.
- the feature information emphasizes the contrast of whether or not the image group conforms to the first image class, compared to the above method.
- the feature information emphasizes the extent to which the image group is dispersed with respect to the first image class.
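The three selection criteria described above can be sketched in a few lines of Python. The likelihood values and the number of representatives k below are made-up illustrations, not data from the embodiment:

```python
# Three ways to pick k representative images from a group, given the
# likelihood computed for each image.  The likelihood values and k are
# made-up illustrations, not data from the embodiment.
likelihoods = [0.95, 0.10, 0.50, 0.81, 0.33, 0.67]
k = 3

# (a) Highest likelihood first: emphasizes features fitting the first class.
top_k = sorted(range(len(likelihoods)),
               key=lambda i: likelihoods[i], reverse=True)[:k]

# (b) Farthest from the reference value 0.5: emphasizes the contrast of
#     belonging versus not belonging to the first image class.
contrast_k = sorted(range(len(likelihoods)),
                    key=lambda i: abs(likelihoods[i] - 0.5), reverse=True)[:k]

# (c) Minimum, median, and maximum likelihood: emphasizes how widely the
#     image group is dispersed with respect to the first image class.
order = sorted(range(len(likelihoods)), key=lambda i: likelihoods[i])
spread = [order[0], order[len(order) // 2], order[-1]]
```

The three strategies generally return different image indices, which is the point: each criterion emphasizes a different aspect (fit, contrast, or dispersion) of the image group.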
- When a target image group related to a target is input, the classification processing unit 113 inputs it to the feature processing unit 112, and estimates, by the classification model 153, whether the target belongs to the first object class from the feature information output from the feature processing unit 112.
- In other words, the feature information output from the feature processing unit 112 expresses the relationship between the target and the first image class.
- Accordingly, if the first image class and the first object class are set so that the belonging of the images included in the target image group to the first image class is correlated with the belonging of the object to the first object class, and the feature processing unit 112 selects representative images so as to emphasize the features of the image group, the feature information output from the feature processing unit 112 makes it possible to classify the object properly.
- the classification processing unit 113 can be configured to receive additional data related to the target in addition to the target image group related to the target.
- In this case, the classification processing unit 113 inputs the input target image group to the feature processing unit 112, and infers, by the classification model 153, whether the subject belongs to the first subject class from the feature information output from the feature processing unit 112 and the input additional data.
- As the classification model 153, various models can be adopted, such as linear regression, logistic regression, ridge regression, Lasso regression, or models related to support vector machines.
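As a sketch of how such a classification model could be fitted to feature information, the following uses scikit-learn's linear-kernel support vector machine on synthetic data. The 3-dimensional feature vectors and the linearly separable labeling rule are assumptions chosen only so that this toy example trains cleanly:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Toy stand-ins for (feature information, label) pairs: 40 targets, each
# described by a 3-dimensional feature vector (e.g. the likelihoods of
# three representative images).  The labeling rule is synthetic and
# linearly separable so that the example trains cleanly.
X = rng.random((40, 3))
y = (X.mean(axis=1) > 0.5).astype(int)   # stand-in for the first target class

clf = SVC(kernel="linear")               # a model related to support vector machines
clf.fit(X, y)
train_acc = clf.score(X, y)
```

Replacing `SVC` with `Ridge`, `Lasso`, or `LogisticRegression` from `sklearn.linear_model` covers the other model families listed above.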
- The image training unit 131 updates the image model 151 and advances its learning using training data consisting of sets of an image and a label indicating whether the image belongs to the first image class.
- The classification training unit 133 updates the classification model 153 and advances its learning using training data consisting of sets of the feature information of a target image group related to a target, the additional data related to that target (if any), and a label indicating whether the target belongs to the first target class.
- the target is a subject or patient undergoing diagnosis for prostate cancer.
- the first subject class is the class that represents that the subject is (likely) afflicted with prostate cancer.
- As the target image group, a plurality of images captured by ultrasound, or a large number of images obtained by dividing a captured photograph into pieces of a predetermined size, are adopted.
- PSA (Prostate Specific Antigen)
- TPV (Total Prostate Volume)
- PSAD (PSA Density)
- the simplest first image class is a class that indicates that the subject in the image is suffering from prostate cancer.
- As the training data necessary for advancing the learning of the image model 151, a large number of sets are prepared, each consisting of a single image in which the subject was photographed and a label indicating whether the subject had prostate cancer, that is, whether the subject belonged to the first subject class.
- Alternatively, a class indicating that the Gleason score given, in the biopsy specimen, to the specimen part corresponding to the image part depicted in the image is equal to or greater than a predetermined value can also be adopted as the first image class.
- In this case, as the training data necessary for advancing the learning of the image model 151, image training data are prepared consisting of a large number of pairs of a single image in which the target site was photographed and a label indicating whether the Gleason score assigned to that site on the basis of the biopsy specimen is equal to or greater than the predetermined value.
- Using the feature information thus obtained, the classification model 153 can be trained.
- As the data required to advance the learning of the classification model 153, classification training data are prepared consisting of a large number of sets of: the feature information obtained by the image model 151 from a target image group related to the target (photographic images of the subject captured by ultrasound or the like, or images obtained by dividing a captured photograph into pieces of a predetermined size); additional data such as the target's age, PSA value, TPV value, and PSAD value, if available; and a label representing the final diagnosis of whether the target is positive for prostate cancer.
- FIG. 2 is a flowchart showing the control flow of learning processing for training an image model. Description will be made below with reference to this figure.
- the image training unit 131 first receives input of image training data (step S201).
- Next, the image training unit 131 repeats the following processing until the training of the image model 151 is completed (step S202; No).
- the image training unit 131 repeats the following process for each set included in the image training data (step S204).
- That is, the image training unit 131 obtains the image and label included in the set (step S205), gives the obtained image as input to the neural network related to the image model 151 (step S206), obtains the output result from the neural network (step S207), and obtains the difference between the output result and the label (step S208).
- When the repetition for the sets is completed, the image training unit 131 calculates the value of the evaluation function based on the differences obtained for the sets, updates the image model 151 (step S210), and returns control to step S202.
- When the training of the image model 151 is completed (step S202; Yes), the image training unit 131 terminates this process.
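The learning loop of FIG. 2 (steps S202 through S210) can be sketched as follows, with full-batch gradient descent on a logistic-regression model standing in for the neural network of the image model 151. The data, dimensions, learning rate, and fixed epoch count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training sets: 30 tiny "images" of 16 pixels each, labeled by
# a rule (pixel sum above 8) that a logistic model with a bias can learn.
X = rng.random((30, 16))
labels = (X.sum(axis=1) > 8.0).astype(float)
Xb = np.hstack([X, np.ones((30, 1))])    # append a constant bias feature

w = np.zeros(17)                         # the stand-in model's parameters
for epoch in range(500):                 # S202: until training is completed
    grad = np.zeros(17)
    for x, t in zip(Xb, labels):         # S204: for each training set
        out = 1.0 / (1.0 + np.exp(-x @ w))   # S206-S207: obtain the output
        grad += (out - t) * x            # S208: difference from the label
    w -= 0.5 * grad / len(Xb)            # S210: update the model

preds = (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(float)
accuracy = float((preds == labels).mean())
```

Here the loop terminates after a fixed number of epochs; as noted later, a convergence condition on the evaluation function could serve as the completion test instead.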
- FIG. 3 is a flow chart showing the control flow of learning processing for training a classification model. Description will be made below with reference to this figure.
- the classification training unit 133 first receives input of classification training data (step S301).
- Next, the classification training unit 133 repeats the following process for each set included in the classification training data (step S304).
- That is, the classification training unit 133 acquires the image group, the additional data (if any), and the label included in the set (step S305), and gives the image group as input to the image processing unit 111, which operates based on the trained image model 151 (step S306).
- In step S307, the image processing unit 111 and the feature processing unit 112 execute the image processing.
- The image processing unit 111 here may be the one implemented in the feature extraction device 101, or one implemented in a device independent of the feature extraction device 101 that refers to the same image model 151.
- FIG. 4 is a flow chart showing the control flow of image processing for obtaining feature information from an image group. Description will be made below with reference to this figure.
- The image processing unit 111 accepts input of the image group sequentially, in parallel, or collectively (step S401), and repeats the following processing for each image included in the input image group (step S402).
- the image processing unit 111 gives the image to the neural network related to the image model 151 (step S403), and obtains the likelihood and feature parameters output from the neural network (step S404).
- When the repetition is completed, the feature processing unit 112 selects a predetermined number of representative images based on the obtained likelihoods (step S406).
- The feature processing unit 112 then puts together the feature parameters obtained for the selected representative images, outputs them as the feature information (step S407), and ends this processing.
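Putting steps S401 through S407 together, the image processing can be sketched end to end as below. The stand-in model, which simply averages a few pixels, is a hypothetical placeholder for the image model 151; M, k, and the image sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

M, k = 4, 2   # illustrative: M-dim feature parameters, k representatives

def stand_in_model(image):
    """Hypothetical placeholder for the image model 151."""
    feat = image.reshape(-1)[:M]         # pretend intermediate vector
    likelihood = float(feat.mean())      # pretend likelihood in [0, 1)
    return likelihood, feat

def image_processing(image_group):
    scored = [stand_in_model(img) for img in image_group]        # S402-S404
    ranked = sorted(range(len(scored)),
                    key=lambda i: scored[i][0], reverse=True)    # by likelihood
    representatives = ranked[:k]                                 # S406
    return np.concatenate([scored[i][1] for i in representatives])  # S407

group = [rng.random((8, 8)) for _ in range(5)]
feature_info = image_processing(group)   # a k*M = 8-dimensional vector
```

The output is the N×M-dimensional feature information described earlier, here with N = k = 2 representatives and M = 4.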
- the classification training unit 133 acquires feature information output from the image processing unit 111 (step S308).
- Next, the classification training unit 133 gives the acquired feature information and, if input, the additional data as inputs to the classifier related to the classification model 153 (step S309), obtains the result output from the classifier (step S310), and obtains the difference between the output result and the label (step S311).
- When the repetition for the sets is completed, the classification training unit 133 calculates the value of the evaluation function based on the differences obtained for the sets, updates the classification model 153 (step S313), and returns control to step S302.
- When the training of the classification model is completed (step S302; Yes), the classification training unit 133 terminates this process.
- By using a library, the learning processes for feature extraction and classification can be executed in parallel at high speed.
- the training of the image model 151 and the classification model 153 may be completed when the number of times the model updates are repeated reaches a predetermined number, or may be completed when a predetermined convergence condition is satisfied.
- FIG. 5 is a flowchart showing the control flow of feature extraction processing. Description will be made below with reference to this figure.
- the feature extraction device 101 receives an input of an image group related to an object (step S501).
- The feature extraction device 101 gives the input image group to the image processing unit 111 (step S502), and causes the image processing unit 111 and the feature processing unit 112 to execute the image processing described above (step S503).
- As a result, the image processing unit 111 calculates the likelihood and feature parameter of each image in the image group, and the feature processing unit 112 selects a predetermined number of representative images from the image group based on the likelihoods, puts together the feature parameters of the representative images, and outputs them as the feature information of the image group.
- The feature extraction device 101 acquires the feature information of the image group output from the feature processing unit 112 (step S504).
- The feature extraction device 101 then outputs the acquired feature information as the feature information related to the target (step S505), and terminates this process.
- FIG. 6 is a flow chart showing the control flow of the classification process. Description will be made below with reference to this figure.
- the classification processing unit 113 of the feature extraction device 101 receives input of an image group related to the object and additional data (if any) (step S601).
- the input image group is given as an input to the feature processing unit 112 (step S602).
- the feature processing unit 112 executes the feature extraction process described above (step S603).
- Next, the classification processing unit 113 acquires the feature information output from the feature processing unit 112 (step S604), and gives the acquired feature information and the additional data (if input) to the classifier related to the classification model 153 (step S605).
- The classification processing unit 113 then causes the classifier to estimate, based on the classification model 153, whether or not the target belongs to the first class (step S606), outputs the result (step S607), and terminates this process. The output result may include not only whether the target belongs to the first class but also the probability of its belonging.
- each image is normalized to 256 ⁇ 256 pixels.
- Each image is accompanied by information on whether the subject was affected or not.
- a Gleason score assigned by an expert by microscopic observation of a biopsy specimen separately obtained from the subject is associated with the image.
- For the first image class, two definitions were tested: one based on whether the subject was affected (Cancer classification), and one based on whether the Gleason score attached to the image is 8 or more (High-grade cancer classification).
- As the classification model 153, three types were used: ridge regression, Lasso regression, and a support vector machine (SVM).
- The accuracy of the best-performing SVM was 0.722 (95% confidence interval: 0.620-0.824), showing that applying the feature extraction device 101 of the present embodiment significantly improves accuracy.
- FIG. 7 is a graph of experimental results of classification according to the conventional method.
- FIG. 8 is a graph of experimental results of classification according to the present embodiment.
- FIG. 9 is an explanatory diagram superimposing, for comparison, the graph of the experimental results of classification according to the present embodiment and the graph of those according to the conventional method.
- These figures show two types of ROC curves: the ROC curve based only on the clinical data of the conventional method, and the ROC curve according to the present embodiment.
- Compared to the conventional method, the ROC curve of this embodiment moves toward the upper left, and the area under the curve is larger. It can therefore be seen that the method according to the present embodiment is more effective than the conventional method.
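The comparison in FIGS. 7 to 9 can be reproduced in miniature with scikit-learn's ROC utilities. The label and score arrays below are synthetic, not the experimental data of the embodiment:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic scores for 8 targets (4 negative, 4 positive).  The "strong"
# classifier ranks every positive above every negative, so its ROC curve
# hugs the upper left and its area under the curve (AUC) is maximal;
# the "weak" one interleaves the classes and scores lower.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores_weak = np.array([0.3, 0.6, 0.4, 0.5, 0.4, 0.7, 0.5, 0.6])
scores_strong = np.array([0.1, 0.2, 0.3, 0.4, 0.7, 0.8, 0.6, 0.9])

auc_weak = roc_auc_score(y_true, scores_weak)
auc_strong = roc_auc_score(y_true, scores_strong)
fpr, tpr, _ = roc_curve(y_true, scores_strong)   # points of the ROC curve
```

Plotting `tpr` against `fpr` for both classifiers would reproduce the kind of superimposed comparison shown in FIG. 9.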
- In the above experiment, this embodiment was used to estimate the presence or absence of prostate cancer from ultrasound images, but it can also be applied to other diseases, organs, and types of images.
- The feature extraction device according to the present embodiment includes: an image processing unit that, when an image is input, calculates, by an image model, a likelihood that the input image belongs to a first image class and a feature parameter of the input image; and a feature processing unit that, when an image group is input, inputs the images included in the input image group to the image processing unit to calculate their likelihoods and feature parameters, selects a predetermined number of representative images from the input image group based on the calculated likelihoods, and outputs the feature parameters calculated for the selected predetermined number of representative images as the feature information of the image group.
- The feature extraction device may further comprise a classification processing unit that, when a target image group related to a target is input, inputs the input target image group to the feature processing unit and estimates, by a classification model, whether or not the target belongs to a first target class based on the feature information output from the feature processing unit; here, the belonging of the images included in the target image group related to the target to the first image class may be correlated with the belonging of the target to the first target class.
- the classification processing unit further receives additional data related to the target, the output feature information; the input additional data; Therefore, whether or not the object belongs to the first object class can be estimated by the classification model.
- The target image group can be composed of a plurality of images of the target's prostate obtained by ultrasound imaging, the additional data can include the target's age, PSA value, TPV value, and PSAD value, and the first target class can be configured to be a class representing that the target is suffering from prostate cancer.
- The first image class can be a class representing that the Gleason score given, in a biopsy specimen, to the specimen part corresponding to the image part depicted in the image is equal to or greater than a predetermined value.
- the first image class may be a class representing that the subject associated with the image is suffering from prostate cancer.
- the feature processing section may be configured to select the predetermined number of representative images in descending order of the absolute value of the difference between the likelihood and the predetermined reference value.
- Here, the likelihood can be a value of 0 or more and 1 or less, and the predetermined reference value can be configured to be 0.5.
- the feature processing section may be configured to select the predetermined number of representative images in descending order of likelihood.
- the feature processing section may be configured to select images having the minimum, median, and maximum likelihoods as the predetermined number of representative images.
- the feature parameter computed for the image may be configured to be the likelihood computed for the image.
- the feature parameter computed for the image can be configured to be the intermediate vector of the image in the image model.
- the image model can be configured to be a model for a deep convolutional neural network.
- the classification model can be configured to be linear regression, logistic regression, ridge regression, Lasso regression, or a model for support vector machines.
- The feature extraction method according to the present embodiment includes: a step in which a feature extraction device receives input of a target image group related to a target; a step in which the feature extraction device calculates, by an image model, the likelihood that each image included in the input target image group belongs to a first image class and the feature parameter of that image; a step in which the feature extraction device selects a predetermined number of representative images from the input target image group based on the calculated likelihoods; and a step in which the feature extraction device outputs the feature parameters calculated for the selected predetermined number of representative images as the feature information of the image group.
- The program according to the present embodiment causes a computer to function as: an image processing unit that, when an image is input, calculates, by an image model, a likelihood that the input image belongs to a first image class and a feature parameter of the input image; and a feature processing unit that, when an image group is input, inputs the images included in the input image group to the image processing unit to calculate their likelihoods and feature parameters, selects a predetermined number of representative images from the input image group based on the calculated likelihoods, and outputs the feature parameters calculated for the selected predetermined number of representative images as the features of the target.
- the above program is recorded on the computer-readable non-transitory information recording medium according to the present embodiment.
- a feature extraction device is thus provided for extracting features of a target from a plurality of images of the target.
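The pipeline summarized above can be sketched in code. This is an illustrative sketch only, not the patent's implementation: the callable `image_model` and the constant `NUM_REPRESENTATIVE` are hypothetical placeholders standing in for the trained image model and the "predetermined number" of representative images, and the descending-likelihood rule is just one of the selection rules described.

```python
# Hedged sketch of the feature extraction flow; all names are placeholders.
from typing import Callable, List, Tuple

NUM_REPRESENTATIVE = 3  # the "predetermined number" of representative images


def extract_features(
    images: List[object],
    image_model: Callable[[object], Tuple[float, List[float]]],
) -> List[List[float]]:
    """Run every image through the image model, pick the images with the
    largest likelihoods, and return their feature parameters as the
    feature information of the image group."""
    scored = [(image_model(img), img) for img in images]
    # Sort by likelihood, descending (one of the selection rules described).
    scored.sort(key=lambda pair: pair[0][0], reverse=True)
    representatives = scored[:NUM_REPRESENTATIVE]
    return [feature for (likelihood, feature), _ in representatives]
```

A usage example would pass each image of one subject and a model returning `(likelihood, feature_vector)` per image.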
Abstract
Description
The device comprises:
an image processing unit that, when an image is input, computes, by an image model, the likelihood that the input image belongs to a first image class and a feature parameter of the input image; and
a feature processing unit that, when an image group is input,
inputs the images included in the input image group into the image processing unit to have likelihoods and feature parameters computed,
selects a predetermined number of representative images from the input image group based on the computed likelihoods, and
outputs the feature parameters computed for the selected predetermined number of representative images as features of the target.
The feature extraction device according to this embodiment is typically realized by a computer executing a program. The computer is connected to various output devices and input devices, and exchanges information with these devices.
Whether the target belongs to the first target class is estimated by the classification model 153 from:
the feature information output by the feature processing unit 112 when the input target image group is fed into it; and
the input additional data.
The image model 151 is updated and trained using training data consisting of pairs of:
an image; and
a label indicating whether the image belongs to the first image class.
The classification model 153 is updated and trained using training data consisting of sets of:
the feature information of the target image group for a target;
the additional data for the target, if available; and
a label indicating whether the target belongs to the first target class.
Many pairs are prepared, each consisting of:
a single image in which a target was photographed; and
a label indicating whether the target suffered from prostate cancer, that is, whether the target belonged to the first target class.
Image training data is prepared that contains many pairs, each consisting of:
a single image in which a site of the target was photographed; and
a label indicating whether the Gleason score assigned to that site based on a biopsy specimen is at least a predetermined value.
Classification training data is prepared that contains many sets, each consisting of:
the feature information obtained by the image model 151 from the target image group for a target (photographic images of the subject taken by ultrasound or the like, or images obtained by dividing such photographs into a predetermined size);
additional data such as the target's age, PSA value, TPV value, and PSAD value, if available; and
a label representing the result of the final diagnosis of whether the target is positive for prostate cancer.
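Assembling one row of this classification training data can be sketched as follows. This is a hypothetical illustration: the field names `age`, `psa`, `tpv`, and `psad` mirror the additional data listed above, but the function and its layout are assumptions, not the patent's code.

```python
# Hedged sketch: concatenate feature information and additional data into one
# input vector, paired with the final-diagnosis label (1 = prostate cancer).
def build_training_row(feature_info, age, psa, tpv, psad, positive):
    """Return (x, y) for one subject: x is the classifier input vector,
    y is the binary label from the final diagnosis."""
    x = list(feature_info) + [age, psa, tpv, psad]
    y = 1 if positive else 0
    return x, y
```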
FIG. 2 is a flowchart showing the control flow of the learning process for training the image model, described below with reference to the figure.
FIG. 3 is a flowchart showing the control flow of the learning process for training the classification model, described below with reference to the figure.
FIG. 5 is a flowchart showing the control flow of the feature extraction process, described below with reference to the figure.
FIG. 6 is a flowchart showing the control flow of the classification process, described below with reference to the figure.
The following describes experimental results for an embodiment in which the presence or absence of prostate cancer is estimated from ultrasound images according to this embodiment.
As described above, the feature extraction device according to this embodiment comprises:
an image processing unit that, when an image is input, computes, by an image model, the likelihood that the input image belongs to a first image class and a feature parameter of the input image; and
a feature processing unit that, when an image group is input,
inputs the images included in the input image group into the image processing unit to have likelihoods and feature parameters computed,
selects a predetermined number of representative images from the input image group based on the computed likelihoods, and
outputs the feature parameters computed for the selected predetermined number of representative images as feature information of the image group.
The device can be configured to further comprise a classification processing unit that, when a target image group for a target is input, feeds the input target image group to the feature processing unit and estimates, by a classification model, whether the target belongs to a first target class from the feature information output by the feature processing unit,
wherein an image included in the target image group for the target belonging to the first image class correlates with the target belonging to the first target class.
The classification processing unit can be configured to further receive additional data for the target, and to estimate, by the classification model, whether the target belongs to the first target class from
the output feature information and
the input additional data.
The device can be configured so that
the target image group consists of a plurality of ultrasound images of the prostate of the target,
the additional data includes the age, PSA value, TPV value, and PSAD value of the target, and
the first target class represents that the target suffers from prostate cancer.
The device can be configured so that, in the training data for the image model, the first image class represents that the Gleason score assigned, in a biopsy specimen, to the specimen site corresponding to the image site depicted in the image is at least a predetermined value.
The device can be configured so that, in the training data for the image model, the first image class represents that the target associated with the image suffers from prostate cancer.
The feature processing unit can be configured to select the predetermined number of representative images in descending order of the absolute value of the difference between the likelihood and a predetermined reference value.
The device can be configured so that the likelihood is a value between 0 and 1 inclusive, and the predetermined reference value is 0.5.
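A minimal sketch of this selection rule, under the assumption that the likelihoods are plain floats in [0, 1] and that index order breaks ties:

```python
# Hedged sketch: pick the n images whose likelihood is farthest from the
# reference value (0.5 by default), i.e. the images the image model is most
# confident about in either direction. Returns indices into the input list.
def select_confident(likelihoods, n, reference=0.5):
    order = sorted(range(len(likelihoods)),
                   key=lambda i: abs(likelihoods[i] - reference),
                   reverse=True)
    return order[:n]
```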
The feature processing unit can be configured to select the predetermined number of representative images in descending order of likelihood.
The feature processing unit can be configured to select, as the predetermined number of representative images, the images having the minimum, median, and maximum likelihoods.
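The min/median/max rule can be sketched as follows; this illustration assumes a non-empty, odd-length list so the median is a single element, which is an assumption not stated in the text.

```python
# Hedged sketch of min/median/max selection; returns indices into the input.
def select_min_median_max(likelihoods):
    order = sorted(range(len(likelihoods)), key=lambda i: likelihoods[i])
    return [order[0], order[len(order) // 2], order[-1]]
```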
The feature parameter computed for an image can be configured to be the likelihood computed for that image.
The feature parameter computed for an image can be configured to be the intermediate vector of the image in the image model.
The image model can be configured to be a model based on a deep convolutional neural network.
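How a single model can expose both outputs used here, the class likelihood and the intermediate vector, can be shown with a toy network. This is not the patent's network: it is a minimal pure-Python sketch of one ReLU hidden layer followed by a sigmoid output, where the hidden activations play the role of the intermediate vector.

```python
import math


def tiny_image_model(x, W1, W2):
    """Toy stand-in for the image model. x is a flattened image (list of
    floats); returns (likelihood, intermediate_vector)."""
    # ReLU hidden layer: this vector is the "intermediate vector" feature.
    hidden = [max(sum(w * xi for w, xi in zip(row, x)), 0.0) for row in W1]
    # Single logit through the output weights, squashed to (0, 1).
    logit = sum(w * h for w, h in zip(W2, hidden))
    likelihood = 1.0 / (1.0 + math.exp(-logit))
    return likelihood, hidden
```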
The classification model can be configured to be a model based on linear regression, logistic regression, ridge regression, lasso regression, or a support vector machine.
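All of the listed options share a linear decision function at inference time, which can be sketched as follows. The weights and bias below are invented for illustration; in the embodiment they would come from training the chosen model (e.g. logistic regression or a linear SVM) on the classification training data.

```python
# Hedged sketch of linear-classifier inference over the feature information
# (plus any additional data). Weights and bias are placeholders, not trained
# values from the patent.
def linear_classifier(features, weights, bias):
    """Return True if the target is estimated to belong to the first
    target class (e.g. prostate-cancer positive)."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return score > 0.0
```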
The feature extraction method comprises:
a step in which a feature extraction device receives a target image group for a target;
a step in which the feature extraction device computes, by an image model, the likelihood that an image included in the input target image group belongs to a first image class and the feature parameter of that image;
a step in which the feature extraction device selects a predetermined number of representative images from the input target image group based on the computed likelihoods; and
a step in which the feature extraction device outputs the feature parameters computed for the selected predetermined number of representative images as feature information of the image group.
The program causes a computer to function as:
an image processing unit that, when an image is input, computes, by an image model, the likelihood that the input image belongs to a first image class and a feature parameter of the input image; and
a feature processing unit that, when an image group is input,
inputs the images included in the input image group into the image processing unit to have likelihoods and feature parameters computed,
selects a predetermined number of representative images from the input image group based on the computed likelihoods, and
outputs the feature parameters computed for the selected predetermined number of representative images as features of the target.
This application claims priority based on Japanese Patent Application No. 2021-089721, filed in Japan on Friday, May 28, 2021, and the contents of that basic application are incorporated herein to the extent permitted by the laws of the designated states.
111 image processing unit
112 feature processing unit
113 classification processing unit
131 image training unit
133 classification training unit
151 image model
153 classification model
Claims (17)
- A feature extraction device comprising: an image processing unit that, when an image is input, computes, by an image model, the likelihood that the input image belongs to a first image class and a feature parameter of the input image; and a feature processing unit that, when an image group is input, inputs the images included in the input image group into the image processing unit to have likelihoods and feature parameters computed, selects a predetermined number of representative images from the input image group based on the computed likelihoods, and outputs the feature parameters computed for the selected predetermined number of representative images as feature information of the image group.
- The feature extraction device according to claim 1, further comprising a classification processing unit that, when a target image group for a target is input, feeds the input target image group to the feature processing unit and estimates, by a classification model, whether the target belongs to a first target class from the feature information output by the feature processing unit, wherein an image included in the target image group for the target belonging to the first image class correlates with the target belonging to the first target class.
- The feature extraction device according to claim 2, wherein the classification processing unit further receives additional data for the target, and estimates, by the classification model, whether the target belongs to the first target class from the output feature information and the input additional data.
- The feature extraction device according to claim 3, wherein the target image group consists of a plurality of ultrasound images of the prostate of the target, the additional data includes the age, PSA value, TPV value, and PSAD value of the target, and the first target class represents that the target suffers from prostate cancer.
- The feature extraction device according to claim 4, wherein, in the training data for the image model, the first image class represents that the Gleason score assigned, in a biopsy specimen, to the specimen site corresponding to the image site depicted in the image is at least a predetermined value.
- The feature extraction device according to claim 4, wherein, in the training data for the image model, the first image class represents that the target associated with the image suffers from prostate cancer.
- The feature extraction device according to claim 1, wherein the feature processing unit selects the predetermined number of representative images in descending order of the absolute value of the difference between the likelihood and a predetermined reference value.
- The feature extraction device according to claim 7, wherein the likelihood is a value between 0 and 1 inclusive, and the predetermined reference value is 0.5.
- The feature extraction device according to claim 1, wherein the feature processing unit selects the predetermined number of representative images in descending order of likelihood.
- The feature extraction device according to claim 1, wherein the feature processing unit selects, as the predetermined number of representative images, the images having the minimum, median, and maximum likelihoods.
- The feature extraction device according to claim 1, wherein the feature parameter computed for an image is the likelihood computed for that image.
- The feature extraction device according to claim 1, wherein the feature parameter computed for an image is the intermediate vector of the image in the image model.
- The feature extraction device according to claim 1, wherein the image model is a model based on a deep convolutional neural network.
- The feature extraction device according to claim 1, wherein the classification model is a model based on linear regression, logistic regression, ridge regression, lasso regression, or a support vector machine.
- A feature extraction method comprising: a step in which a feature extraction device receives a target image group for a target; a step in which the feature extraction device computes, by an image model, the likelihood that an image included in the input target image group belongs to a first image class and the feature parameter of that image; a step in which the feature extraction device selects a predetermined number of representative images from the input target image group based on the computed likelihoods; and a step in which the feature extraction device outputs the feature parameters computed for the selected predetermined number of representative images as feature information of the image group.
- A program causing a computer to function as: an image processing unit that, when an image is input, computes, by an image model, the likelihood that the input image belongs to a first image class and a feature parameter of the input image; and a feature processing unit that, when an image group is input, inputs the images included in the input image group into the image processing unit to have likelihoods and feature parameters computed, selects a predetermined number of representative images from the input image group based on the computed likelihoods, and outputs the feature parameters computed for the selected predetermined number of representative images as features of the target.
- A computer-readable non-transitory information recording medium on which the program according to claim 16 is recorded.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023523407A JPWO2022249892A1 (ja) | 2021-05-28 | 2022-05-12 | |
EP22811165.4A EP4349266A1 (en) | 2021-05-28 | 2022-05-12 | Feature extraction device, feature extraction method, program, and information recording medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021089721 | 2021-05-28 | ||
JP2021-089721 | 2021-05-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022249892A1 true WO2022249892A1 (ja) | 2022-12-01 |
Family
ID=84229895
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/020038 WO2022249892A1 (ja) | 2021-05-28 | 2022-05-12 | 特徴抽出装置、特徴抽出方法、プログラム、ならびに、情報記録媒体 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4349266A1 (ja) |
JP (1) | JPWO2022249892A1 (ja) |
WO (1) | WO2022249892A1 (ja) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6238342B1 (en) * | 1998-05-26 | 2001-05-29 | Riverside Research Institute | Ultrasonic tissue-type classification and imaging methods and apparatus |
JP2015080720A (ja) * | 2013-10-24 | 2015-04-27 | 三星電子株式会社Samsung Electronics Co.,Ltd. | コンピュータ補助診断方法及び装置 |
WO2016194161A1 (ja) * | 2015-06-03 | 2016-12-08 | 株式会社日立製作所 | 超音波診断装置、及び画像処理方法 |
JP6345332B1 (ja) | 2017-11-21 | 2018-06-20 | 国立研究開発法人理化学研究所 | 分類装置、分類方法、プログラム、ならびに、情報記録媒体 |
JP2020089399A (ja) * | 2018-12-03 | 2020-06-11 | コニカミノルタ株式会社 | 制御装置及びプログラム |
US20200395123A1 (en) * | 2019-06-16 | 2020-12-17 | International Business Machines Corporation | Systems and methods for predicting likelihood of malignancy in a target tissue |
JP2021089721A (ja) | 2019-11-08 | 2021-06-10 | チャオス ソフトウェア エルティーディー.Chaos Software Ltd. | 修正多重重点的サンプリングを用いる画像のレンダリング |
2022
- 2022-05-12 WO PCT/JP2022/020038 patent/WO2022249892A1 active Application Filing
- 2022-05-12 JP JP2023523407 patent/JPWO2022249892A1 active Pending
- 2022-05-12 EP EP22811165.4 patent/EP4349266A1 active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6238342B1 (en) * | 1998-05-26 | 2001-05-29 | Riverside Research Institute | Ultrasonic tissue-type classification and imaging methods and apparatus |
JP2015080720A (ja) * | 2013-10-24 | 2015-04-27 | 三星電子株式会社Samsung Electronics Co.,Ltd. | コンピュータ補助診断方法及び装置 |
WO2016194161A1 (ja) * | 2015-06-03 | 2016-12-08 | 株式会社日立製作所 | 超音波診断装置、及び画像処理方法 |
JP6345332B1 (ja) | 2017-11-21 | 2018-06-20 | 国立研究開発法人理化学研究所 | 分類装置、分類方法、プログラム、ならびに、情報記録媒体 |
JP2020089399A (ja) * | 2018-12-03 | 2020-06-11 | コニカミノルタ株式会社 | 制御装置及びプログラム |
US20200395123A1 (en) * | 2019-06-16 | 2020-12-17 | International Business Machines Corporation | Systems and methods for predicting likelihood of malignancy in a target tissue |
JP2021089721A (ja) | 2019-11-08 | 2021-06-10 | チャオス ソフトウェア エルティーディー.Chaos Software Ltd. | 修正多重重点的サンプリングを用いる画像のレンダリング |
Non-Patent Citations (1)
Title |
---|
LUCAS MARIT; JANSEN ILARIA; SAVCI-HEIJINK C. DILARA; MEIJER SYBREN L.; BOER ONNO J. DE; LEEUWEN TON G. VAN; BRUIN DANIEL M. DE; MA: "Deep learning for automatic Gleason pattern classification for grade group determination of prostate biopsies", VIRCHOWS ARCHIV, SPRINGER BERLIN HEIDELBERG, BERLIN/HEIDELBERG, vol. 475, no. 1, 16 May 2019 (2019-05-16), Berlin/Heidelberg, pages 77 - 83, XP036827365, ISSN: 0945-6317, DOI: 10.1007/s00428-019-02577-x * |
Also Published As
Publication number | Publication date |
---|---|
JPWO2022249892A1 (ja) | 2022-12-01 |
EP4349266A1 (en) | 2024-04-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22811165; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2023523407; Country of ref document: JP |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWE | Wipo information: entry into national phase | Ref document number: 2022811165; Country of ref document: EP |
| | ENP | Entry into the national phase | Ref document number: 2022811165; Country of ref document: EP; Effective date: 20240102 |
Ref document number: 2022811165 Country of ref document: EP Effective date: 20240102 |