WO2022158490A1 - Prediction system, control method, and control program - Google Patents
Prediction system, control method, and control program
- Publication number
- WO2022158490A1 (PCT/JP2022/001798)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- prediction
- information
- image
- subject
- intervention
- Prior art date
Classifications
- G06T7/0012 — Biomedical image inspection (G—PHYSICS; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T7/00—Image analysis; G06T7/0002—Inspection of images, e.g. flaw detection)
- G16H30/40 — ICT specially adapted for processing medical images, e.g. editing (G16H—HEALTHCARE INFORMATICS; G16H30/00—ICT specially adapted for the handling or processing of medical images)
- G06T11/00 — 2D [Two Dimensional] image generation
- G06T2207/20081 — Training; Learning (G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/20—Special algorithmic details)
- G06T2207/20084 — Artificial neural networks [ANN]
Definitions
- the present disclosure relates to a prediction system, control method, and control program for predicting the state of a target part of the human body.
- As described in Patent Document 1, techniques have been devised to support the diagnosis of osteoporosis using a neural network.
- A prediction system according to an aspect of the present disclosure includes a prediction information acquisition unit that acquires (a) a subject image showing a target part of a subject at a first time point and (b) first prediction information about the target part at a second time point after a predetermined period has elapsed from the first time point; and a predicted image generation unit that generates and outputs, from the first prediction information and the subject image, a predicted image predicting the state of the target part at the second time point.
- A control method according to an aspect of the present disclosure is a control method for a prediction system, comprising: a prediction information acquisition step of acquiring (a) a subject image showing a target part of a subject at a first time point and (b) first prediction information about the target part at a second time point after a predetermined period has elapsed from the first time point; and a predicted image generation step of generating and outputting, from the first prediction information and the subject image, a predicted image predicting the state of the target part at the second time point. The prediction system has a predicted image generation model capable of generating the predicted image using the subject image and the first prediction information.
- The prediction system according to each aspect of the present disclosure may be realized by a computer. In that case, the prediction system is realized by the computer operating as each unit (software element) provided in the prediction system.
- A control program that causes a computer to realize the prediction system, and a computer-readable recording medium on which it is recorded, are also included in the scope of the present disclosure.
- When the prediction system is realized by a plurality of computers, the prediction system may be realized by each of the plurality of computers that constitute it operating as a corresponding unit (software element) of the prediction system.
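As a rough, non-authoritative sketch of the two steps named above (the prediction information acquisition step and the predicted image generation step), where every name, data shape, and placeholder value is an illustrative assumption and not something specified by the disclosure:

```python
def acquire_inputs():
    """Prediction information acquisition step: obtain (a) a subject image taken at
    the first time point and (b) first prediction information about the target part
    at the second time point, a predetermined period later (all values here are
    hypothetical stand-ins)."""
    subject_image = [[0] * 64 for _ in range(64)]            # stand-in grayscale image
    first_prediction_info = {"period_years": 5, "predicted_pain_grade": 2}
    return subject_image, first_prediction_info

def generate_predicted_image(subject_image, first_prediction_info, model=None):
    """Predicted image generation step: a trained generation model would map both
    inputs to an image of the target part at the second time point. With no model
    supplied, return a copy of the subject image as a placeholder."""
    if model is not None:
        return model(subject_image, first_prediction_info)
    return [row[:] for row in subject_image]

image, info = acquire_inputs()
predicted = generate_predicted_image(image, info)
```

A real system would replace the placeholder branch with a trained predicted image generation model; the point of the sketch is only the data flow between the two steps.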
- FIG. 1 is a block diagram illustrating a configuration example of a prediction system according to one aspect of the present disclosure.
- FIG. 2 is a block diagram showing a configuration example of a prediction system according to another aspect of the present disclosure.
- FIG. 3 is a block diagram illustrating an example configuration of a prediction system according to one aspect of the present disclosure.
- FIG. 4 is a diagram showing an example of the structure of the neural network included in the predicted image generation unit.
- FIG. 5 is a flow chart showing an example of the flow of processing performed by the prediction system according to the first embodiment.
- FIG. 6 is a block diagram showing an example of a configuration of a prediction system according to another aspect of the present disclosure.
- FIG. 7 is a flow chart showing an example of the flow of processing performed by the prediction system according to the second embodiment.
- FIG. 8 is a block diagram showing an example of a configuration of a prediction system according to another aspect of the present disclosure.
- FIG. 9 is a flow chart showing an example of the flow of learning processing of the neural network of the prediction information generation unit.
- FIG. 10 is a flow chart showing another example of the flow of processing performed by the prediction system according to the second embodiment.
- FIG. 11 is a block diagram showing an example of a configuration of a prediction system according to another aspect of the present disclosure.
- FIG. 12 is a flow chart showing an example of the flow of learning processing of the neural network of the intervention effect prediction unit.
- FIG. 13 is a flow chart showing an example of the flow of processing performed by the prediction system according to the third embodiment.
- a prediction system is a system that generates and outputs a prediction image that predicts a state change of a target part of a subject's body.
- The target part may be any part of the subject's body, for example, the whole body, head, eyes, oral cavity, neck, arms, hands, torso, waist, buttocks, legs, feet, or the like.
- the predicted image may be an image obtained by predicting a state change of any one of skin, hair, eyeballs, teeth, gums, muscles, fat, bones, cartilage, joints, intervertebral discs, etc. of the target site.
- the predicted image may be an image that predicts the changes that will occur in the target part of the subject affected by the disease.
- The predicted image may be an image showing at least one of the shape of the target part of the subject (e.g., waist circumference, chest circumference, height, swelling, atrophy, joint angle and curvature) and its appearance (e.g., posture, wrinkles, spots, redness, turbidity, darkening, yellowing).
- The subject may have a disease in the target part.
- the predicted image may be an image obtained by predicting a change in symptoms at the target site of the subject due to the influence of the disease.
- a prediction system that generates and outputs a prediction image that predicts a change in symptoms at a target site of a subject will be described as an example.
- the predicted image is an image showing the effect of the subject's disease on the target site.
- An image showing the effect on the target part may be any image showing changes in the shape of the target part affected by the disease, qualitative or quantitative changes occurring in the tissue of the target part due to the disease, and the like.
- The disease may include at least one of obesity, alopecia, cataracts, periodontal disease, rheumatoid arthritis, Heberden's nodes, hallux valgus, osteoarthritis, spinal osteoarthritis, compression fractures, and sarcopenia.
- Diseases may also include (i) syndromes such as metabolic syndrome and locomotive syndrome, which present a coherent pathological condition formed by a group of various symptoms, and (ii) physical changes such as aging and changes in tooth alignment.
- the prediction system generates a predicted image that predicts the symptoms at the target site at the second time point after a predetermined period from the first time point, based on the subject image showing the subject's target site at the first time point.
- In the present disclosure, "subject image" may also mean the image data representing the subject image.
- the first point in time may be, for example, the point in time at which the subject's image of the subject's target region is acquired.
- the first point in time may typically be the point in time at which a subject image of the subject's current target site condition is acquired. That is, the first point in time may substantially mean the present point in time.
- The predetermined period may be any period elapsed from the first time point, such as half a year, one year, five years, ten years, or fifty years. That is, the second time point may be substantially any point in time in the future.
- The predetermined period is not limited to one period, and may include multiple periods. That is, the predicted image may include images generated by predicting the symptoms of the target part of the subject at a plurality of time points, such as half a year, one year, five years, ten years, and fifty years after the first time point.
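When the predetermined period includes multiple periods, one predicted image per elapsed period could be collected with a trivial loop. A minimal sketch, with the predictor and all names being hypothetical placeholders rather than anything from the disclosure:

```python
# Half a year, one year, five, ten, and fifty years after the first time point.
PERIODS_YEARS = [0.5, 1, 5, 10, 50]

def predict_for_periods(subject_image, predict_fn, periods=PERIODS_YEARS):
    """Return one predicted image per elapsed period after the first time point."""
    return {period: predict_fn(subject_image, period) for period in periods}

# Placeholder predictor: tags the image identifier with its horizon instead of
# running a real generation model.
images = predict_for_periods("subject.png", lambda img, p: f"{img}@{p}y")
```

Swapping the lambda for a trained model call would yield the set of predicted images described above.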
- a subject image is an image showing the target part of the subject at the first time point.
- the subject image may be an external image of any one of the subject's whole body, head, upper body, lower body, upper limbs, and lower limbs.
- The subject image may be a medical image of the target part obtained in the process of examining the subject. The medical image may include at least one of an X-ray image, a CT (Computed Tomography) image, an MRI (Magnetic Resonance Imaging) image, a PET (Positron Emission Tomography) image, and an ultrasound image of the subject.
- The subject image may be an image showing at least one of the shape of the subject's target part (e.g., waist circumference, chest circumference, height, swelling, atrophy, joint angle and curvature) and its appearance (e.g., posture, wrinkles, blotches, redness, turbidity, darkening, yellowing).
- the prediction system uses the first prediction information about the target part at the second time point in addition to the subject image to generate the prediction image.
- the first prediction information may be information about the symptoms of the target part of the subject at the second time point.
- The first prediction information may include information indicating the symptoms of the target part of the subject at a plurality of time points, such as half a year, one year, five years, ten years, and fifty years after the first time point.
- The first prediction information is, for example, information that includes predictions about which symptoms are likely to occur in the target part of the subject, when those symptoms will occur, and how far they will progress.
- The first prediction information may be information about at least one of the shape and appearance of the target part of the subject at the second time point, for example, information indicating at least one of the shape of the target part (e.g., waist circumference, chest circumference, height) and its appearance (e.g., posture, wrinkles, blemishes).
- the first prediction information may be information related to at least one of the shape and appearance of the target site, which is related to the disease of the target site.
- The first prediction information may include the following kinds of information, each indicating a high possibility that the target part of the subject will change.
- The first prediction information may include, for example:
- (i) information related to obesity, such as body weight, body mass index (BMI), abdominal circumference, visceral fat amount, blood pressure, blood sugar level, lipids, uric acid level, and liver function values;
- (ii) information related to alopecia, such as the number of hairs, sex hormone values, the Norwood classification, and the Ludwig classification;
- (iii) information related to cataracts, such as visual acuity, visual field, degree of turbidity, and the Emery-Little classification;
- (iv) information related to periodontal disease, such as the degree of pain and swelling, the number of remaining teeth, and the degree of gingivitis;
- (v) information related to rheumatoid arthritis, such as the degree of pain and swelling, joint angle, joint range of motion, the Larsen classification, and the Steinbrocker classification;
- (vi) information related to Heberden's nodes, such as the degree of pain and swelling and the joint range of motion;
- (vii) information related to hallux valgus, such as the degree of pain and swelling, joint range of motion, the HV angle, and the M1-M2 angle;
- (viii) information related to osteoarthritis, such as the degree of pain and swelling, joint angle, joint range of motion, degree of stiffness, thickness of joint cartilage, the Kellgren-Lawrence (KL) classification, and the presence or absence of lameness;
- (ix) information related to spinal osteoarthritis, such as the degree of pain, the degree of curvature of the spine, the range of motion of the spine, and the KL classification;
- (x) information related to sarcopenia, such as muscle mass, walking speed, and grip strength.
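Purely for illustration, the disease-specific items above could be carried in a small record type; the field names and units below are assumptions for the sketch, not terms from the disclosure:

```python
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class FirstPredictionInfo:
    """Illustrative container for first prediction information about the
    target part at the second time point (field names are hypothetical)."""
    period_years: float                            # elapsed period after the first time point
    pain_grade: Optional[int] = None               # degree of pain
    swelling_grade: Optional[int] = None           # degree of swelling
    joint_range_of_motion_deg: Optional[float] = None
    kl_grade: Optional[int] = None                 # Kellgren-Lawrence classification (0-4)
    extra: dict = field(default_factory=dict)      # disease-specific items (BMI, HV angle, ...)

info = FirstPredictionInfo(period_years=5, pain_grade=3, kl_grade=2,
                           extra={"walking_speed_m_s": 1.1})
```

Keeping the open-ended items in a separate `extra` mapping reflects how the enumerated information differs per disease while the pain/swelling/range-of-motion items recur across several of them.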
- The prediction system generates and outputs a predicted image predicting the state of the target part at the second time point, based on the subject image showing the target part of the subject at the first time point and the first prediction information regarding the target part at the second time point after the predetermined period has elapsed from the first time point.
- the prediction system can output the state of the target part at the second time point as a visually easy-to-understand predicted image.
- Since the predicted image is generated from the subject image, which is an image of the subject themselves, the predicted image is a realistic image that is persuasive to the subject. Therefore, for example, if the doctor in charge presents it to the subject, the subject can recognize the state of the target part at the second time point and can easily understand the necessity of intervention.
- the predicted image may be an image simulating an exterior image of any one of the subject's whole body, head, upper body, lower body, upper limbs, and lower limbs.
- the predicted image may be an image simulating a medical image of the subject's target region obtained in the process of examining the subject.
- The predicted image may be an image showing at least one of the shape of the subject's target part (e.g., waist circumference, chest circumference, height, swelling, atrophy, joint angle and curvature) and its appearance (e.g., posture, wrinkles, spots, redness, turbidity, darkening, yellowing).
- For example, if the subject image is an appearance image showing the current appearance of the subject's skin (for example, wrinkles, spots, redness, turbidity, darkening, yellowing, etc.), and the first prediction information is information about the future degree of wrinkles, blemishes, redness, turbidity, darkening, or yellowing of the subject's skin, the prediction system can output, based on the subject image and the first prediction information, an image showing the future appearance of the subject's skin as the predicted image.
- For example, the prediction system may output the following two kinds of predicted images: (1) a medical image indicating the subject's current joint angles together with a medical image indicating the subject's future joint angles based on the first prediction information; and (2) an appearance image showing the subject's current joint appearance together with an image showing the subject's future joint appearance based on the first prediction information.
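One common way to feed both an image and numeric prediction values to a convolutional generator is to broadcast each value to a constant-valued plane and stack it with the image channels. This is an assumption about a plausible implementation, not a technique stated by the disclosure:

```python
def stack_condition_planes(image, info_values):
    """image: H x W grid of per-pixel channel lists (e.g. [r, g, b]);
    info_values: numeric first-prediction values (e.g. period, KL grade).
    Returns an H x W grid whose pixels carry the original channels plus one
    constant conditioning channel per prediction value."""
    return [
        [pixel + list(info_values) for pixel in row]
        for row in image
    ]

h, w = 4, 4
image = [[[0, 0, 0] for _ in range(w)] for _ in range(h)]
conditioned = stack_condition_planes(image, [0.5, 2.0])   # e.g., period in years, KL grade
```

The conditioned grid would then be the input tensor of the predicted image generation network, so the same convolutional weights see both the pixels and the prediction values at every spatial location.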
- FIG. 1 shows the configuration of a prediction system 100 including a prediction device 1 that acquires a target person image and first prediction information, and generates and outputs a prediction image from the target person image based on the first prediction information.
- the prediction device 1 of the prediction system 100 can function alone as the prediction system described above.
- FIG. 1 is a block diagram showing a configuration example of a prediction system 100 in a medical facility 5 into which a prediction device 1 has been introduced.
- the prediction system 100 includes a prediction device 1 and one or more terminal devices 2 communicatively connected to the prediction device 1.
- the prediction system 100 may include the prediction device 1 and a device (for example, the terminal device 2) capable of presenting the prediction image output from the prediction device 1.
- The prediction device 1 is a computer that acquires the subject image and the first prediction information, generates a predicted image from the subject image based on the first prediction information, and outputs the predicted image by transmitting it to the terminal device 2.
- The prediction device 1 may be connected to the LAN of the medical facility 5 as shown in FIG. 1. The configuration of the prediction device 1 will be described later.
- the terminal device 2 receives the predicted image from the prediction device 1 and presents the predicted image.
- the terminal device 2 may be a computer or the like used by medical personnel such as doctors belonging to the medical facility 5 .
- The terminal device 2 may be connected to the LAN of the medical facility 5 as shown in FIG. 1.
- the terminal device 2 may be, for example, a personal computer, a tablet terminal, a smart phone, or the like.
- the terminal device 2 has a communication section for transmitting and receiving data with other devices, an input section such as a keyboard and a microphone, a display section capable of displaying a predicted image, and the like.
- the prediction device 1 and the terminal device 2 are provided separately, but the prediction device 1 and the terminal device 2 may be integrated.
- the prediction device 1 may have the functions of the terminal device 2 by having a display unit capable of displaying a prediction image.
- the prediction system 100 may further include a first prediction information management device 3, a subject image management device 4, and an electronic medical record management device 9.
- the first prediction information management device 3 is a computer that functions as a server for managing first prediction information.
- The first prediction information management device 3 may be connected to the LAN of the medical facility 5, as shown in FIG. 1. In this case, the prediction device 1 may acquire the first prediction information of the subject from the first prediction information management device 3.
- the subject image management device 4 is a computer that functions as a server for managing subject images.
- The subject image management device 4 may store subject images captured when subjects undergo examination of the state of the target part at the medical facility 5.
- the subject image may be a medical image captured within the medical facility 5 .
- the subject image management device 4 may be communicably connected to an imaging device such as an X-ray imaging device in the medical facility 5, for example.
- the image captured by the image capturing device may be recorded in the subject image management device 4 via, for example, a LAN.
- The subject image management device 4 may be connected to the LAN of the medical facility 5 as shown in FIG. 1.
- the prediction device 1 may acquire the target person image from the target person image management device 4 .
- the electronic medical record management device 9 is a computer that functions as a server for managing electronic medical record information of subjects who have been examined at the medical facility 5 .
- The electronic medical record management device 9 may be connected to the LAN of the medical facility 5 as shown in FIG. 1.
- the prediction device 1 may acquire basic information related to the subject from the electronic medical record management device 9 .
- The basic information is information included in the electronic medical record information, and may include at least one of the subject's sex, age, height, weight, and information indicating the state of the subject's target part at the first time point.
- In the example shown in FIG. 1, the prediction device 1 is arranged in the medical facility 5, and the prediction device 1, the terminal device 2, the first prediction information management device 3, the subject image management device 4, and the electronic medical record management device 9 are communicably connected via a LAN (local area network) of the medical facility 5.
- The LAN within the medical facility 5 may be communicably connected to an external communication network, which may employ the Internet, a telephone communication network, an optical fiber communication network, a cable communication network, a satellite communication network, or the like.
- the terminal device 2 may be a computer or the like used by the patient.
- The prediction device 1 and at least one of the terminal device 2, the first prediction information management device 3, the subject image management device 4, and the electronic medical record management device 9 may be directly connected without going through a LAN.
- There may be a plurality of terminal devices 2, first prediction information management devices 3, subject image management devices 4, and electronic medical record management devices 9 that can communicate with the prediction device 1.
- multiple prediction devices 1 may be introduced.
- The prediction device 1 is not limited to a computer installed in a predetermined medical facility 5, and may be communicably connected, via a communication network 6, to LANs installed in each of a plurality of medical facilities 5.
- FIG. 2 is a block diagram showing a configuration example of a prediction system 100a according to another aspect of the present disclosure.
- the medical facility 5a includes a terminal device 2a, a subject image management device 4a, and an electronic medical record management device 9a, which are communicably connected.
- the medical facility 5b includes a terminal device 2b, a subject image management device 4b, and an electronic medical record management device 9b, which are communicably connected.
- FIG. 2 shows an example in which the LANs of the medical facility 5a and the medical facility 5b are connected to the communication network 6.
- The prediction system 100a is not limited to the configuration shown in FIG. 2.
- the prediction device 1 and the first prediction information management device 3 may be installed in the medical facility 5a or the medical facility 5b.
- the prediction system 100a adopting such a configuration may have a first prediction information management device 3a installed in the medical facility 5a and a first prediction information management device 3b installed in the medical facility 5b.
- the prediction device 1 can acquire the first prediction information and the subject image of the subject Pa from the first prediction information management device 3a and the subject image management device 4a of the medical facility 5a, respectively.
- the prediction device 1 can transmit a prediction image that predicts the state of the target part of the subject Pa to the terminal device 2a installed in the medical facility 5a.
- the prediction device 1 can acquire the first prediction information and the subject image of the subject Pb from the first prediction information management device 3b and the subject image management device 4b of the medical facility 5b, respectively.
- the prediction device 1 can transmit a prediction image that predicts the state of the target part of the subject Pb to the terminal device 2b installed in the medical facility 5b.
- it suffices if the first prediction information and the subject image of each subject include identification information unique to the medical facility 5 that examined the subject and identification information unique to the subject given at that medical facility 5.
- the identification information unique to each medical facility 5 may be, for example, a facility ID. The identification information unique to each subject may be, for example, a patient ID. Based on these pieces of identification information, the prediction device 1 can correctly transmit a predicted image that predicts the state of the target part of the subject to the terminal device 2 of each medical facility 5 where the subject was examined.
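- as an illustrative sketch only (not part of the disclosed system; all names such as `Record` and `route_predicted_image` are hypothetical), delivery of a predicted image keyed by facility ID and patient ID could look like this:

```python
from dataclasses import dataclass

@dataclass
class Record:
    facility_id: str   # identification information unique to the medical facility 5
    patient_id: str    # identification information unique to the subject
    payload: bytes     # e.g. the predicted image to deliver

# one terminal device 2 per medical facility 5 (hypothetical registry)
terminals: dict[str, list[bytes]] = {"5a": [], "5b": []}

def route_predicted_image(record: Record) -> str:
    """Deliver the predicted image to the terminal of the examining facility."""
    if record.facility_id not in terminals:
        raise KeyError(f"unknown facility {record.facility_id}")
    terminals[record.facility_id].append(record.payload)
    return record.facility_id

dest = route_predicted_image(Record("5a", "Pa-001", b"predicted-image"))
```

the patient ID would additionally select the correct subject record within the facility; it is omitted from the lookup here for brevity.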
- FIG. 3 is a block diagram illustrating an example configuration of a prediction system 100, 100a according to one aspect of the present disclosure.
- members having the same functions as the members already explained are denoted by the same reference numerals, and the explanation thereof will not be repeated.
- the prediction systems 100 and 100a shown in FIG. 3 each include a prediction device 1, one or more terminal devices 2 communicably connected to the prediction device 1, a first prediction information management device 3, and a subject image management device 4.
- the prediction device 1 includes a control unit 7 that controls each unit of the prediction device 1 and a storage unit 8 that stores various data used by the control unit 7.
- the control unit 7 includes a prediction information acquisition unit 71, a predicted image generation unit 72, and an output control unit 73.
- the storage unit 8 stores a control program 81, which is a program for performing various controls of the prediction device 1.
- the prediction information acquisition unit 71 acquires the subject image from the subject image management device 4 and acquires the first prediction information from the first prediction information management device 3.
- the subject image and the first prediction information are the input data input to the predicted image generation unit 72.
- the subject image and the first prediction information will be explained using several diseases as examples.
- if the disease is obesity, the subject image may be an image showing the subject's current whole body or abdomen, and the first prediction information may be information about the subject's weight, BMI, waist circumference, visceral fat amount, blood pressure, blood sugar level, lipid level, uric acid level, or liver function values.
- if the disease is alopecia, the subject image may be an image showing the subject's current whole body or head, and the first prediction information may be information about the subject's hair count, sex hormone levels, Norwood classification, or Ludwig classification.
- if the disease is a cataract, the subject image may be an image showing the subject's current head (face) or eyes, and the first prediction information may be information about the subject's visual acuity, visual field, degree of lens opacity, or Emery-Little classification.
- if the disease is periodontal disease, the subject image may be an image showing the subject's current head (face) or oral cavity, and the first prediction information may be information about the degree of pain in the subject's teeth or gums, the degree of swelling of the teeth or gums, the number of remaining teeth, the gingivitis index, or the periodontal pocket depth. The subject image may be an image showing an open mouth or an image showing a closed mouth.
- if the disease is rheumatoid arthritis, the subject image may be an image showing the subject's current whole body, upper limbs, or lower limbs, and the first prediction information may be information about the degree of pain in the subject's whole body, upper limbs, or lower limbs, the degree of swelling, the joint angle, the joint range of motion, the Larsen classification, or the Steinbrocker classification.
- the subject image may be an image showing the subject's current hand, and the first prediction information may be information about the degree of pain, the degree of swelling, or the joint range of motion of the subject's hand.
- if the disease is hallux valgus, the subject image may be an image showing the subject's current foot, and the first prediction information may be information about the degree of pain, the degree of swelling, the joint range of motion, the HV angle, or the M1-M2 angle of the subject's foot.
- if the disease is osteoarthritis, the subject image may be an image showing the subject's current whole body, upper limbs, or lower limbs, and the first prediction information may be information about the degree of pain in the subject's whole body, upper limbs, or lower limbs, the degree of swelling, the joint angle, the joint range of motion, or the KL classification.
- if the disease relates to the spine (for example, spondylosis deformans or a compression fracture), the subject image may be an image showing the subject's current whole body, neck, chest, or waist, and the first prediction information may be information about the degree of curvature of the subject's spine, the range of motion of the spine, or the K-L classification.
- the subject image may be an image showing the subject's current whole body, upper limbs, or lower limbs
- the first prediction information may be information about the subject's muscle mass.
- the subject image may be a medical image taken during diagnosis of each disease.
- if the disease is knee osteoarthritis, the subject image may be an X-ray image showing the subject's current knee joint, and the first prediction information may be information about the angle formed by the subject's tibia and femur two years later.
- based on the first prediction information, the predicted image generation unit 72 generates and outputs, from the subject image, a predicted image that predicts the state of the target part at the second point in time.
- the predicted image generation unit 72 may generate an image simulating at least part of the subject image used to generate the predicted image.
- the predicted image generated by the predicted image generation unit 72 may be an image showing the effect of a disease occurring in a target region on the target region.
- the generated predicted images may include images associated with parts of the subject that have not changed from the first time point at the second time point. That is, the predicted image may include an image associated with a part that changed from the first time point to the second time point and an image associated with a part that did not change from the first time point to the second time point.
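- a minimal sketch of such compositing, assuming images are reduced to flat pixel lists and a mask marks the part that changed between the two time points (all names are illustrative, not from the disclosure):

```python
def composite(subject_img, predicted_part, changed_mask):
    """Keep unchanged pixels from the subject image taken at the first time
    point; take changed pixels from the predicted part for the second time
    point."""
    return [p if changed else s
            for s, p, changed in zip(subject_img, predicted_part, changed_mask)]

subject_img    = [10, 10, 10, 10]            # image at the first time point
predicted_part = [99, 99, 99, 99]            # predicted appearance of the part
changed_mask   = [False, True, True, False]  # which pixels changed
pred = composite(subject_img, predicted_part, changed_mask)  # → [10, 99, 99, 10]
```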
- the predicted image generation unit 72 may have any known image editing function and video editing function.
- the predicted image generator 72 converts the subject image into an editable file format, and then modifies the subject image based on the first prediction information to generate the predicted image.
- for example, when the first prediction information is information about the angle formed by the subject's tibia and femur two years later, the predicted image generation unit 72 converts the subject image into a predetermined file format and then generates a predicted image by changing, based on the first prediction information, the angle formed by the tibia and the femur appearing in the subject image.
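- a hedged geometric sketch of this editing step, assuming the leg is reduced to three landmark points (hip, knee, ankle): the ankle landmark is rotated about the knee until the hip-knee-ankle angle equals the predicted value. The function names are illustrative, not part of the disclosure:

```python
import math

def rotate_about(point, center, angle_rad):
    """Rotate a 2-D point about a center by angle_rad (counterclockwise)."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (center[0] + c * dx - s * dy, center[1] + s * dx + c * dy)

def set_tibiofemoral_angle(hip, knee, ankle, target_deg):
    """Move the ankle landmark so the hip-knee-ankle angle equals target_deg."""
    current = math.degrees(
        math.atan2(ankle[1] - knee[1], ankle[0] - knee[0])
        - math.atan2(hip[1] - knee[1], hip[0] - knee[0]))
    return rotate_about(ankle, knee, math.radians(target_deg - current))

# a straight leg (180 degrees) edited to the predicted 175 degrees two years later
hip, knee, ankle = (0.0, 2.0), (0.0, 1.0), (0.0, 0.0)
new_ankle = set_tibiofemoral_angle(hip, knee, ankle, 175.0)
```

a real implementation would warp the surrounding image pixels along with the landmarks; only the angle edit itself is shown here.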
- the predicted image generation unit 72 may have a predicted image generation model that can generate a predicted image using the subject image and the first prediction information.
- the predictive image generation model may be a neural network trained using a plurality of image data showing target parts as training data.
- a convolutional neural network (CNN), a generative adversarial network (GAN), an autoencoder, or the like may be applied as a predictive image generation model.
- the predicted image generation unit 72 inputs the subject image and the first prediction information to the predicted image generation model and outputs the predicted image.
- the predicted image generation unit 72 outputs the predicted image output from the predicted image generation model (that is, generated by the predicted image generation unit 72).
- a predicted image generation model is a calculation model used when the predicted image generation unit 72 performs calculations based on input data.
- a predicted image generation model is generated by executing machine learning, which will be described later, on the neural network of the predicted image generation unit 72 .
- the output control unit 73 transmits the predicted image output from the predicted image generation unit 72 to the terminal device 2 .
- the output control unit 73 may transmit, to the terminal device 2 together with the predicted image, at least one of the subject image and the first prediction information used to generate the predicted image.
- the prediction device 1 may be configured to include a display unit (not shown).
- the output control section 73 may cause the display section to display the predicted image.
- the output control unit 73 may cause the display unit to display, together with the predicted image, at least one of the subject image and the first prediction information used to generate the predicted image.
- by providing the predicted image generation unit 72 with the predicted image generation model, the prediction systems 100 and 100a can generate and output a realistic predicted image in which the state of the target part at the second point in time is reflected in the image of the subject. As a result, the prediction systems 100 and 100a allow the subject to clearly recognize the state of the target part at the second time point.
- a trained prediction image generation model may be installed in the prediction device 1 in advance.
- the prediction device 1 may further include a first learning section 74 that performs learning processing for the predicted image generation section 72 .
- the first learning section 74 controls learning processing for the neural network of the predicted image generating section 72 .
- FIG. 4 is a diagram showing an example of the configuration of a neural network included in the predicted image generation unit 72.
- a predictive image generation model applying a generative adversarial network has two networks: a generator network (hereinafter referred to as generator 721) and a discriminator network (hereinafter referred to as discriminator 722).
- the generator 721 can generate an image that looks like a real image as a predicted image from the first predicted information and the subject image.
- the discriminator 722 can discriminate between the image data (fake image) from the generator 721 and the real image from the first training data set 82, which will be described later.
- the first learning unit 74 acquires the subject image and the first prediction information from the storage unit 8 and inputs them to the generator 721 .
- a generator 721 generates a predicted image candidate (fake image) from the subject image and the first prediction information.
- the generator 721 may refer to real images included in the first training data set 82 to generate predicted image candidates.
- the first learning data set 82 is data used for machine learning to generate a predictive image generation model.
- the first training data set 82 may contain any real image that the generator 721 aims to reproduce as faithfully as possible.
- the first training data set 82 may contain real medical images captured in the past.
- the medical image may include, for example, at least one of X-ray image data, CT image data, MRI image data, PET image data, and ultrasound image data obtained by imaging the target regions of each of a plurality of patients.
- the first learning data set 82 may include first learning data and first teacher data.
- the first learning data is, for example, data of the same type as the subject image and data of the same type as the first prediction information.
- data of the same type as the subject image means image data showing the same target part as that shown in the subject image, captured from the same angle, and being the same kind of image, such as a medical image or an external appearance image.
- data of the same kind as the first prediction information means, when the first prediction information is information related to the shape and appearance of the target part affected by the disease, information related to the shape and appearance of the same target part affected by the same disease.
- the first teacher data is data of the same type as the predicted image, and is data of the same person acquired after time has passed since the first learning data was acquired.
- the first teacher data is associated with the data of the same kind as the first prediction information included in the first learning data.
- data of the same type as the predicted image means image data showing the same target region as that shown in the predicted image, captured from the same angle, and being the same kind of image, such as a medical image or an external appearance image.
- the first learning unit 74 inputs the predicted image candidates generated by the generator 721 and the real images included in the first training data set 82 to the discriminator 722.
- the discriminator 722 takes as inputs the real images from the first training data set 82 and the predicted image candidates generated by the generator 721, and outputs, for each image, the probability that it is a real image.
- the first learning unit 74 calculates a classification error that indicates how accurate the probabilities output by the discriminator 722 are.
- the first learning unit 74 iteratively improves the discriminator 722 and the generator 721 using the error backpropagation method.
- the weights and biases of the discriminator 722 are updated to minimize the classification error (i.e., to maximize classification performance).
- the weights and biases of the generator 721 are updated to maximize the classification error (i.e., to maximize the probability that the discriminator 722 mistakes a predicted image candidate for a real image).
- the first learning unit 74 updates the weights and biases of the discriminator 722 and of the generator 721 until the probability output by the discriminator 722 satisfies a predetermined criterion.
- the predicted image generator 72 can generate a predicted image that is indistinguishable from the real thing.
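- the adversarial update described above can be illustrated with a deliberately tiny stand-in for the generator 721 and the discriminator 722: here an "image" is a single number near 5.0, the generator is one learnable value, and the discriminator is a logistic score. This sketches the training dynamic only, not the disclosed model:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

g = 0.0          # generator parameter: the "fake image" it produces
w, b = 1.0, 0.0  # discriminator parameters: D(x) = sigmoid(w * x + b)
lr = 0.05

for _ in range(800):
    real = 5.0 + random.gauss(0.0, 0.1)   # a "real image" from the data set
    fake = g                              # predicted image candidate

    # discriminator step: raise D(real), lower D(fake), minimizing the
    # classification error
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1.0 - d_real) * real - d_fake * fake)
    b += lr * ((1.0 - d_real) - d_fake)

    # generator step: raise D(fake), i.e. make the candidate look real to D
    d_fake = sigmoid(w * fake + b)
    g += lr * (1.0 - d_fake) * w
```

with these alternating updates the generator output drifts from 0.0 toward the real data around 5.0, oscillating near it rather than converging exactly, which is typical of simultaneous gradient play.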
- FIG. 5 is a flowchart showing an example of the flow of processing performed by the prediction systems 100 and 100a according to this embodiment.
- step S1 the prediction information acquisition unit 71 acquires a subject image and first prediction information (input data) (prediction information acquisition step).
- the predicted image generation unit 72 generates a predicted image and outputs the predicted image in step S2 (predicted image generation step).
- the prediction systems 100 and 100a include the prediction device 1 that acquires the first prediction information from the first prediction information management device 3, but are not limited to this.
- the configuration may be such that the prediction device 1A generates the first prediction information. Configurations of prediction systems 100 and 100a including such a prediction device 1A will be described with reference to FIG.
- FIG. 6 is a block diagram illustrating an example configuration of a prediction system 100, 100a according to another aspect of the present disclosure.
- the prediction device 1A includes a control unit 7A that controls each unit of the prediction device 1A in an integrated manner, and a storage unit 8 that stores various data used by the control unit 7A.
- the control unit 7A further includes a prediction information generation unit 75 in addition to the prediction information acquisition unit 71, the prediction image generation unit 72, and the output control unit 73.
- although FIG. 6 shows an example in which the prediction device 1A includes the first learning unit 74, the configuration is not limited to this.
- a learned predictive image generation model may be pre-installed in the prediction device 1A.
- the prediction information generation unit 75 generates, from the subject image showing the target part of the subject at the first time point, first prediction information about the target part at a second time point after a predetermined period has elapsed from the first time point, and outputs the first prediction information to the prediction information acquisition unit 71.
- the prediction information generation unit 75 may have a prediction information generation model that can estimate the first prediction information from the subject image.
- the predictive information generation model is a model capable of estimating the first predictive information from the subject's subject image and the subject's basic information.
- the predictive information generation model may be a neural network trained using patient information regarding a patient having a disease of the target site as training data.
- for example, a convolutional neural network (CNN), a recurrent neural network (RNN), or a long short-term memory (LSTM) network may be applied as the prediction information generation model.
- the patient information is, for example, information in which state information indicating the state of the target part of each patient, acquired at a plurality of times in the past, is associated, for each patient, with information indicating the time when the state information was acquired.
- the prediction information generation unit 75 inputs data related to the subject image to the prediction information generation model and outputs the first prediction information.
- the prediction information generation unit 75 outputs the first prediction information output from the prediction information generation model (that is, generated by the prediction information generation unit 75).
- a prediction information generation model is a calculation model used when the prediction information generation unit 75 performs calculations based on input data.
- a prediction information generation model is generated by executing machine learning, which will be described later, on the neural network of the prediction information generation unit 75 .
- FIG. 7 is a diagram showing an example of the configuration of a neural network included in the prediction information generation unit.
- the prediction information generation unit 75 includes an input layer 751 and an output layer 752.
- the prediction information generation unit 75 performs calculations based on the prediction information generation model on input data input to the input layer 751 and outputs prediction information from the output layer 752 .
- the prediction information generation unit 75 in FIG. 7 includes a neural network extending from the input layer 751 to the output layer 752.
- the neural network may be any neural network suitable for handling time-series information. For example, LSTM or the like may be used.
- the neural network may be a neural network suitable for combined handling of time-series information and location information. For example, a ConvLSTM network that combines CNN and LSTM may be used.
- the input layer 751 is capable of extracting time-varying feature amounts of input data.
- the output layer 752 can calculate a new feature amount based on the feature amount extracted by the input layer 751, the temporal change of the input data, and the initial value.
- Input layer 751 and output layer 752 have multiple LSTM layers. Each of input layer 751 and output layer 752 may have three or more LSTM layers.
- the input data input to the input layer 751 may be, for example, a parameter indicating a feature amount extracted from the subject's image showing the subject's target part at the first time point.
- the prediction information generating section 75 can output the first prediction information regarding the target part at the second point in time when the predetermined period has elapsed from the first point in time.
- the prediction information generation unit 75 outputs, as the first prediction information, for example, a prediction result of the degree of onset or progression of the disease in the target site of the subject at the second time point, after the predetermined period has elapsed from the first time point. Specifically, the prediction information generation unit 75 outputs, as the first prediction information, information such as the degree of the symptoms of each disease at the second time point, the classification of each disease, and the time at which the target site will require invasive treatment.
- the first prediction information shown here is an example, and is not limited to these.
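- as a stand-in illustrating only the input/output contract of the prediction information generation unit 75 (the disclosure uses a trained neural network, not this formula), a scalar state measurement can be linearly extrapolated from past visits to the second time point:

```python
def predict_state(history: list[tuple[float, float]], horizon: float) -> float:
    """Extrapolate a scalar state measurement (e.g. a joint angle) to a
    future time by least-squares line fit.  history holds (time_in_years,
    value) pairs from past visits; horizon is the number of years after the
    last visit (the 'predetermined period')."""
    n = len(history)
    mean_t = sum(t for t, _ in history) / n
    mean_v = sum(v for _, v in history) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in history)
    var = sum((t - mean_t) ** 2 for t, _ in history)
    slope = cov / var
    last_t = history[-1][0]
    return mean_v + slope * (last_t + horizon - mean_t)

# tibiofemoral angle observed at three annual visits; predict two years on
angle_2y = predict_state([(0.0, 178.0), (1.0, 177.0), (2.0, 176.0)], 2.0)  # → 174.0
```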
- the prediction information generation unit 75 may output, based on the above-described first prediction information, information indicating the subject's QOL as third prediction information. Specifically, the prediction information generation unit 75 outputs, as the third prediction information, at least one of information about pain occurring in the target site of the subject, information about the subject's catastrophic thinking, information about the subject's exercise ability, information indicating the subject's degree of life satisfaction, and information such as the degree of stiffness of the target site of the subject.
- the information indicating the subject's QOL is information including at least one of the following: information about the subject's catastrophic thinking, information about the subject's exercise ability, and information indicating the subject's life satisfaction level.
- the information indicating the subject's QOL may include information on the subject's (1) physical function, (2) daily role functioning (physical), (3) bodily pain, (4) general health, (5) vitality, (6) social functioning, (7) daily role functioning (mental), and (8) mental health.
- such information may be obtained using known assessment instruments, for example:
- SF-36 (36-Item Short-Form Health Survey)
- VAS (Visual Analogue Scale)
- NEI VFQ-25 (25-item National Eye Institute Visual Function Questionnaire)
- GOHAI (General Oral Health Assessment Index)
- WOMAC (Western Ontario and McMaster Universities Osteoarthritis Index)
- RDQ (Roland-Morris Disability Questionnaire)
- it suffices for the prediction information generation unit 75 to generate at least part of the first prediction information used by the prediction image generation unit 72 to generate the predicted image. In that case, the remaining first prediction information may be acquired from the first prediction information management device 3 by the prediction information acquisition unit 71.
- FIG. 8 is a flow chart showing an example of the flow of processing performed by the prediction systems 100 and 100a of the second embodiment.
- step S11 the prediction information generation unit 75 acquires the subject image (input data) (image acquisition step).
- the prediction information generation unit 75 generates first prediction information in step S12 and outputs the first prediction information to the prediction information acquisition unit 71 (first prediction step).
- the prediction information acquisition unit 71 inputs, to the prediction image generation unit 72, (a) the first prediction information acquired from the prediction information generation unit 75 and (b) the subject image (input data), which may be acquired from the subject image management device 4 before, simultaneously with, or after the acquisition of the first prediction information (step not shown).
- the predicted image generation unit 72 generates a predicted image and outputs the predicted image in step S13 (predicted image generation step).
- the prediction information generation unit 75 generates the first prediction information based on the subject image.
- the prediction information generation unit 75B generates the first prediction information based on the basic information in addition to the subject image.
- Configurations of prediction systems 100 and 100a including such a prediction device 1B will be described with reference to FIG.
- FIG. 9 is a block diagram illustrating a variation of the configuration of prediction systems 100, 100a according to another aspect of the present disclosure.
- the prediction device 1B further has a prediction information generation section 75B in the control section 7B.
- the prediction device 1B includes a control unit 7B that controls each unit of the prediction device 1B, and a storage unit 8B that stores various data used by the control unit 7B.
- the control unit 7B further includes a prediction information generation unit 75B and a basic information acquisition unit 76 in addition to the prediction information acquisition unit 71, the prediction image generation unit 72, and the output control unit 73.
- the prediction device 1B can generate, based on the image of the subject captured at the first time point, information that is closer to the symptom that can occur or has occurred in the target site at the second time point, that is, more accurate first prediction information. As a result, the prediction device 1B can generate an image showing a symptom closer to the symptom that can occur or has occurred in the target region at the second time point, that is, a predicted image showing more accurate prediction information.
- the basic information acquisition unit 76 acquires basic information, which is information related to the subject, from the electronic medical record management device 9 .
- the electronic medical record management device 9 is a computer that functions as a server for managing electronic medical record information of subjects who have been examined at the medical facility 5 or medical facilities other than the medical facility 5 .
- the electronic medical record information may include the subject's basic information and interview information.
- the basic information is input data input to the prediction information generation unit 75B in addition to the subject image.
- the basic information is information including at least one of the subject's sex, age, height, weight, and information indicating the state of the target part of the subject at the first time point.
- the basic information may further include at least one of the subject's BMI, race, occupational history, exercise history, disease history related to the target site, information regarding the shape and appearance of the target site, biomarker information, and genetic information.
- the basic information may also include, for example, information such as the degree of disease symptoms related to the subject's target site.
- the basic information may include, for example, information included in the subject's electronic medical record information.
- the basic information may be interview information obtained from the subject by interview conducted at the medical facility 5 or the like, and may include, for example, information related to the subject's QOL at the first time point.
- in addition to the subject image, the prediction device 1B acquires the subject's basic information from the electronic medical record management device 9, and can transmit a predicted image that predicts the state of the target part of the subject Pa to the terminal device 2a installed in the medical facility 5a. Similarly, the prediction device 1B can transmit a predicted image that predicts the state of the target part of the subject Pb to the terminal device 2b installed in the medical facility 5b.
- a learned prediction information generation model may be pre-installed in the prediction device 1B.
- the prediction device 1B may further include a second learning section 77 that performs learning processing for the prediction information generating section 75B.
- the second learning section 77 controls learning processing for the neural network of the prediction information generating section 75B.
- a second learning data set 83 which will be described later, is used for this learning.
- a specific example of learning performed by the second learning unit 77 will be described later.
- FIG. 10 is a flow chart showing an example of the flow of learning processing of the neural network of the prediction information generator 75B.
- the second learning unit 77 acquires the second learning data included in the second learning data set 83 from the storage unit 8B (step S21).
- the second learning data includes patient images of multiple patients.
- the second learning unit 77 determines a certain patient (step S22).
- the second learning unit 77 inputs the patient image of a certain patient at time point A, which is included in the second learning data, to the input layer 751 (step S23).
- the input layer 751 may extract parameters representing feature amounts from the input patient image.
- the second learning unit 77 acquires output data related to the symptom of the target part of the patient from the output layer 752 (step S24).
- this output data has the same kind of content as the second teacher data.
- the second learning unit 77 acquires the second teacher data included in the second learning data set 83. Then, the second learning unit 77 compares the acquired output data with the state information, included in the second teacher data, indicating the state of the target part of the patient at time point B, and calculates the error (step S25).
- the second learning unit 77 adjusts the prediction information generation model so that the error becomes small (step S26).
- any known method can be applied to adjust the predictive information generation model.
- an error backpropagation method may be employed as a method of adjusting the prediction information generation model.
- the adjusted prediction information generation model becomes a new prediction information generation model, and the prediction information generation unit 75B uses the new prediction information generation model in subsequent calculations.
- the parameters used in the prediction information generation section 75B can be adjusted.
- the parameters include, for example, parameters used in the input layer 751 and the output layer 752. Specifically, the parameters include weighting factors used in the LSTM layers of the input layer 751 and the output layer 752. The parameters may also include filter coefficients.
- if the error is not within a predetermined range, or if patient images of all patients included in the second learning data set 83 have not yet been input (NO in step S27), the second learning unit 77 changes the patient (step S28) and returns to step S23 to repeat the learning process. If the error is within the predetermined range and patient images of all patients included in the second learning data set 83 have been input (YES in step S27), the second learning unit 77 terminates the learning process.
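Steps S21 to S28 form a standard supervised training loop: produce output data for each patient, compare it with the teacher data, compute an error, and adjust the model parameters until the error falls within a predetermined range. A minimal sketch of that loop, using a linear model and plain gradient descent as illustrative stand-ins for the prediction information generation model and error backpropagation (all names and data below are hypothetical, not from the patent):

```python
# Illustrative stand-in for the second learning data set 83: each "patient"
# pairs an input feature vector (for the image at time point A) with teacher
# state information at time point B.
second_learning_data = [([1.0, 2.0], 8.0), ([2.0, 0.5], 7.0), ([0.5, 1.5], 5.5)]

weights = [0.0, 0.0]   # model parameters to be adjusted (step S26)
bias = 0.0
lr = 0.05              # learning rate for the gradient step
tolerance = 1e-3       # "error within a predetermined range" (step S27)

for epoch in range(5000):
    total_error = 0.0
    for features, teacher in second_learning_data:  # steps S22/S28: iterate over patients
        # steps S23/S24: forward pass produces output data
        output = sum(w * x for w, x in zip(weights, features)) + bias
        # step S25: compare with teacher data and compute squared error
        err = output - teacher
        total_error += err * err
        # step S26: adjust parameters so the error becomes small
        # (a gradient step standing in for error backpropagation)
        for i, x in enumerate(features):
            weights[i] -= lr * 2 * err * x
        bias -= lr * 2 * err
    if total_error < tolerance:  # step S27: stop once the error is small enough
        break
```

After the loop terminates, the adjusted parameters play the role of the "new prediction information generation model" used in subsequent calculations.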
- the second learning data set 83 is data used for machine learning to generate a prediction information generation model.
- the second training data set 83 may include patient information regarding patients with disease of the target site.
- the patient information may include state information indicating the state of the target part of each patient acquired at a plurality of past time points, and may be information in which the state information for each patient is associated with information indicating the time point at which that state information was acquired.
- the second learning data set 83 includes second learning data used as input data and second teacher data used to calculate the error of the first prediction information output by the prediction information generation unit 75B.
- the second learning data may include, for example, image data showing target regions of each of a plurality of patients.
- the image data used as the second learning data may be image data obtained by imaging any one of the whole body, upper body, lower body, upper limbs, and lower limbs of each of a plurality of patients.
- the image data used as the second learning data may be medical image data showing target regions of a plurality of patients.
- the medical image data may include, for example, at least one of X-ray image data, CT image data, MRI image data, PET image data, and ultrasound image data obtained by imaging the target regions of a plurality of subjects.
- the second learning data is data of the same kind as the subject image.
- the second teacher data is data of the same kind as the first prediction information.
- the second teacher data may include state information indicating the state of the target region of each patient at the time the patient's image was captured, and symptom information regarding the target region.
- the state information may include information regarding the progression of symptoms of the target site.
- the symptom information may include information regarding the onset time of the disease of the target site.
- the second learning data set 83 may be data in which the second learning data and the second teacher data are integrated. That is, the second learning data set 83 may be time-series data in which patient images obtained from each of a plurality of patients at a plurality of past time points are associated with state information indicating the state of the target region at the time each patient image was captured.
- the second learning data set 83 may include parameters indicating feature amounts extracted from the following information at a certain time point and one year after that time point.
- the second learning data set 83 may include body weight, BMI, waist circumference, visceral fat amount, blood pressure, blood sugar level, lipid, uric acid level, or liver function value.
- the second learning data set 83 may include the number of hairs, sex hormone values, Norwood classification, Ludwig classification, and the like.
- the second training data set 83 may include visual acuity, visual field, degree of opacity of the crystalline lens, Emery-Little classification, or the like.
- the second learning data set 83 may include the degree of tooth or gingival pain, the degree of tooth or gingival swelling, the number of remaining teeth, the gingivitis index, the periodontal pocket depth, and the like.
- the second learning data set 83 may include the degree of pain, degree of swelling, joint angle, joint range of motion, Larsen classification, Steinbrocker classification, etc. of the subject's whole body, upper limbs, or lower limbs.
- the second learning data set 83 may include the degree of pain, the degree of swelling, the range of motion of the joint, and the like of the subject's hand.
- the second learning data set 83 may include the degree of pain, the degree of swelling, the joint range of motion, the HV angle or the M1-M2 angle, etc. of the subject's foot.
- the second learning data set 83 may include the degree of pain, degree of swelling, joint angle, range of joint motion, degree of stiffness, joint cartilage thickness, KL classification, or presence or absence of lameness.
- the second learning data set 83 may include the degree of pain, the degree of curvature of the spine, the range of motion of the spine, or the KL classification.
- the second learning data set 83 may include the degree of pain or the range of motion of the spine.
- when the disease is sarcopenia, the second learning data set 83 may include muscle mass, walking speed, or grip strength.
- the second learning data set 83 may include parameters indicating attributes of the subject.
- the subject's attributes are, for example, each subject's sex, age, height, and weight.
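As described above, the second learning data set 83 associates, for each patient, images captured at multiple past time points with state information and subject attributes. One hypothetical way to organize such a record as time-series data is sketched below; all field names and values are illustrative only, not specified by the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Observation:
    """One past time point: a patient image plus the state of the target part."""
    time_point: str   # when the image was captured
    image_path: str   # patient image (used as learning data)
    state_info: dict  # state of the target part at that time (used as teacher data)

@dataclass
class PatientRecord:
    """Time-series record associating images with state information per patient."""
    patient_id: str
    sex: str          # subject attributes (sex, age, height, weight)
    age: int
    height_cm: float
    weight_kg: float
    observations: List[Observation] = field(default_factory=list)

# Hypothetical record for a knee-osteoarthritis patient (KL classification).
record = PatientRecord(
    patient_id="P001", sex="F", age=62, height_cm=158.0, weight_kg=61.5,
    observations=[
        Observation("2018-04", "p001_2018.png",
                    {"pain": 2, "joint_range_of_motion_deg": 120, "kl_grade": 1}),
        Observation("2021-04", "p001_2021.png",
                    {"pain": 4, "joint_range_of_motion_deg": 105, "kl_grade": 2}),
    ],
)

# A (time A image, time B state) pair of the kind used in steps S23-S25:
learning_input = record.observations[0].image_path
teacher_state = record.observations[1].state_info
```

The point of the structure is the association itself: each image is tied to the state information and the time point at which it was acquired, so image/state pairs separated by a predetermined period can be drawn from it directly.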
- the second learning unit 77 may use the subject image at a certain time point as the second learning data, and may use, as the second teacher data, the subject image captured after a predetermined period from that time point, together with information about the symptoms of the target part and information about the subject at the time that image was captured.
- the second learning data set 83 may include time-series data containing information about the QOL of each of a plurality of subjects. For example, information such as SF-36 and VAS may be included.
- the prediction information generation unit 75B, having a prediction information generation model generated by machine learning using such a second learning data set 83, can also output information related to the QOL of the subject at the second time point from the subject image of the subject.
- the input data used during learning by the prediction information generation unit 75B is (a) a subject image, included in the second learning data, showing the subject's target part at a certain time point A.
- based on the above-described input data, the prediction information generation unit 75B outputs, as output data, first prediction information regarding the target site at time point B after a predetermined period (for example, three years) has elapsed from time point A.
- the prediction information generation unit 75B outputs data such as the angle around the target site of the subject at time point B, the degree of enlargement or shrinkage of the target site, the degree of wrinkles and blemishes of the target site, the timing and degree of onset of pain in the target site, and the timing at which invasive treatment becomes necessary for the target site.
- the output data shown here is an example, and is not limited to these.
- when performing the learning of the prediction information generation unit 75B, the second learning unit 77 obtains a patient image at a certain time point A in which the target part of the patient is shown.
- the patient's symptom information and attribute information may also be input to the prediction information generation unit 75B as second learning data.
- FIG. 11 is a flow chart showing an example of the flow of processing performed by the prediction systems 100 and 100a according to this embodiment.
- in step S31, the prediction information generation unit 75B acquires a subject image (input data) and basic information (input data) (image and information acquisition step).
- in step S32, the prediction information generation unit 75B generates the first prediction information and outputs it to the prediction information acquisition unit 71 (first prediction step).
- the prediction information acquisition unit 71 inputs, to the prediction image generation unit 72, (a) the first prediction information acquired from the prediction information generation unit 75B and (b) a subject image (input data) acquired from the subject image management device 4 before, simultaneously with, or after the acquisition of the first prediction information (not shown).
- in step S33, the predicted image generation unit 72 generates a predicted image and outputs it (predicted image generation step).
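The flow of steps S31 to S33 chains three stages: acquire inputs, generate first prediction information, then generate the predicted image. A schematic sketch of that chaining, with trivial stub functions standing in for the trained models (every function body and value here is illustrative, not the patent's actual models):

```python
def acquire_inputs():
    """Step S31: obtain the subject image and basic information."""
    subject_image = [[0.1, 0.4], [0.3, 0.2]]  # placeholder 2x2 pixel data
    basic_info = {"sex": "M", "age": 58}
    return subject_image, basic_info

def generate_first_prediction(subject_image, basic_info):
    """Step S32: stub for the prediction information generation unit 75B.
    A real system would run the trained prediction information generation
    model; here an arbitrary score stands in for the time-B state."""
    mean_intensity = sum(sum(row) for row in subject_image) / 4
    return {"predicted_state_score": round(mean_intensity + basic_info["age"] / 100, 3)}

def generate_predicted_image(subject_image, first_prediction):
    """Step S33: stub for the predicted image generation unit 72."""
    score = first_prediction["predicted_state_score"]
    return [[px * (1 + score) for px in row] for row in subject_image]

image, info = acquire_inputs()
first_pred = generate_first_prediction(image, info)       # first prediction step
predicted_image = generate_predicted_image(image, first_pred)  # predicted image step
```

The structure mirrors the flow chart: the output of each step is the input of the next, with the subject image passed to both prediction stages.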
- the prediction systems 100 and 100a may have a function of outputting a prediction image that predicts the state of the target part at the second time point when an intervention is applied to the target part, together with the method of intervention for the subject and the effect of the intervention. A prediction device 1C having such a function will be described with reference to FIG. 12.
- FIG. 12 is a block diagram showing an example configuration of a prediction system 100, 100a according to another aspect of the present disclosure.
- intervention methods may include lifestyle guidance, diet therapy, drug therapy, exercise therapy, surgical therapy (liposuction, gastrectomy, gastric banding, etc.) and the like.
- intervention methods may include lifestyle guidance, dietary therapy, drug therapy, surgical therapy (hair transplantation), wearing a wig, and the like.
- intervention methods may include drug therapy, exercise therapy, surgical therapy (cataract extraction, intraocular lens implantation, etc.), and the like.
- methods of intervention may include oral care instruction, pharmacotherapy, orthodontic therapy, surgical therapy (periodontal plastic surgery, implant therapy, etc.), use of dentures, and the like.
- methods of intervention may include drug therapy, surgical therapy (osteotomy, joint replacement).
- methods of intervention may include drug therapy and the like.
- intervention methods may include shoe instruction, exercise therapy, brace therapy, drug therapy, surgical therapy (osteotomy, fusion, joint replacement, etc.), and the like.
- intervention methods may include exercise therapy, brace therapy, drug therapy, rehabilitation, surgical therapy (intra-articular injection, arthroscopic surgery, osteotomy, fusion surgery, joint replacement, etc.), and the like.
- methods of intervention may include exercise therapy, brace therapy, drug therapy, surgical therapy (such as spinal instrumentation surgery), and the like.
- methods of intervention may include brace therapy, drug therapy, surgical therapy (such as spinal instrumentation surgery), and the like.
- intervention methods may include lifestyle guidance, diet therapy, drug therapy, exercise therapy, and the like.
- the prediction device 1C includes a control unit 7C that controls each unit of the prediction device 1C and a storage unit 8C that stores various data used by the control unit 7C.
- the control unit 7C includes a prediction information acquisition unit 71, a predicted image generation unit 72C, an output control unit 73C, a first learning unit 74, a prediction information generation unit 75B, a basic information acquisition unit 76, a second learning unit 77, an intervention effect prediction unit 78, and a third learning unit 79.
- although FIG. 12 shows the prediction device 1C including the first learning unit 74, the second learning unit 77, and the third learning unit 79, the configuration is not limited to this.
- the prediction device 1C may include any (or all) of the first learning unit 74, the second learning unit 77, and the third learning unit 79, or may include none of them.
- the prediction device 1C does not have to include the first learning unit 74.
- a trained prediction image generation model may be installed in the prediction device 1C in advance.
- the prediction device 1C does not have to include the second learning unit 77.
- a trained prediction information generation model may be installed in the prediction device 1C in advance.
- the prediction device 1C does not have to include the third learning unit 79.
- a learned intervention effect prediction model (described later) may be installed in the prediction device 1C in advance.
- the storage unit 8C may store a control program 81, which is a program for performing various controls of the prediction device 1C, a first learning data set 82, a second learning data set 83, third teacher data 84, and intervention information 85.
- FIG. 12 shows a prediction device 1C in which the control program 81, the first learning data set 82, the second learning data set 83, the third teacher data 84, and the intervention information 85 are stored in the storage unit 8C, but the configuration is not limited to this. Any (or all) of the control program 81, the first learning data set 82, the second learning data set 83, the third teacher data 84, and the intervention information 85 may be stored in the storage unit 8C of the prediction device 1C, or may not be stored there.
- <Predicted image generation unit 72C> The predicted image generation unit 72C inputs the subject image, the first prediction information, and the second prediction information to the predicted image generation model, and outputs a predicted image.
- the predicted image generation unit 72C outputs the predicted image output from the predicted image generation model (that is, generated by the predicted image generation unit 72C).
- the output control unit 73C transmits the predicted image output from the predicted image generation unit 72C to the terminal device 2. As shown in FIG. 12, the output control unit 73C may transmit to the terminal device 2, together with the predicted image, at least one of the subject image, the first prediction information, and the second prediction information used to generate the predicted image.
- the intervention effect prediction unit 78 outputs, from the first prediction information regarding the target site at the second time point after the predetermined period has elapsed from the first time point, second prediction information indicating the method of intervention for the subject and the effect of the intervention.
- the intervention effect prediction unit 78 may have an intervention effect prediction model capable of estimating the second prediction information from the first prediction information.
- the intervention effect prediction model is a calculation model used when the intervention effect prediction unit 78 performs calculations based on input data.
- Other configurations of the intervention effect prediction model are not particularly limited as long as it is a computation model that can estimate the second prediction information from the first prediction information.
- the intervention effect prediction model may be a neural network, for example, a trained neural network with an input layer and an output layer. More specifically, the intervention effect prediction model may be a neural network trained using effect information as teacher data.
- the effect information is information in which state information indicating the state of the target part of each patient, acquired at a plurality of past time points, is associated, for each patient, with intervention information 85 indicating the intervention applied to that patient.
- the efficacy information may include time-series data regarding each patient's condition information obtained at multiple points in the past from each of the multiple patients to whom the intervention has been applied in the past.
- in response to input of the first prediction information as input data to the input layer, the intervention effect prediction unit 78 performs calculations based on the intervention effect prediction model and outputs the second prediction information from the output layer as output data.
- the second prediction information is, for example, information indicating the type of intervention and information indicating the effect of the intervention.
- the effect of the intervention is information representing the symptoms of the target area of the subject at the second time point when the intervention is applied.
- the effect of the intervention may be information indicating the degree to which applying the intervention improves symptoms, or suppresses the progression of the disease, in the subject's target site at the second time point, compared with not applying the intervention.
- the second prediction information may include information indicating when intervention should be applied (intervention time).
- the intervention effect prediction unit 78 may be configured to extract a feature amount from the first prediction information and use it as input data.
- a known algorithm such as the following can be applied to extract the feature quantity.
- CNN (convolutional neural network)
- RNN (recurrent neural network)
- autoencoder
- LSTM (long short-term memory)
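A feature amount of the kind these algorithms extract can be illustrated with the basic CNN building block: slide a small filter over the image and pool the responses. A minimal 2-D convolution (cross-correlation, as used in CNNs) plus max-pooling in plain Python; the image and filter values are arbitrary examples, and a real CNN would learn its filter coefficients during training:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (no padding): returns one feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # sum of elementwise products of the kernel and the image patch
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def max_pool(fmap):
    """Pool the feature map down to a single feature amount."""
    return max(v for row in fmap for v in row)

# Illustrative 4x4 "patient image" with a sharp vertical edge,
# and a 2x2 vertical-edge filter.
image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
edge_filter = [[-1, 1],
               [-1, 1]]

feature = max_pool(conv2d(image, edge_filter))  # strongest edge response
```

The pooled value is large only where the filter's pattern (here, a left-to-right intensity jump) occurs in the image, which is the sense in which the convolution "extracts" a feature.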
- the configuration of the intervention effect prediction unit 78 will be further described below using FIG. 7, taking as an example the case where the intervention effect prediction model of the intervention effect prediction unit 78 is a neural network.
- the configuration shown in FIG. 7 is an example, and the configuration of the intervention effect prediction unit 78 is not limited to this.
- the intervention effect prediction unit 78 includes an input layer 781 and an output layer 782.
- the intervention effect prediction unit 78 acquires the first prediction information from the prediction information generation unit 75B and uses it as input data to be input to the input layer 781.
- the intervention effect prediction unit 78 may further acquire a subject image and use it as input data.
- the intervention effect prediction unit 78 may acquire basic information from the basic information acquisition unit 76 and use the basic information as input data to be input to the input layer 781 .
- the intervention effect prediction unit 78 performs operations based on the intervention effect prediction model on the input data input to the input layer 781 and outputs the second prediction information from the output layer 782.
- the intervention effect prediction unit 78 includes a neural network having an input layer 781 and an output layer 782.
- the neural network may be any neural network suitable for handling time-series information.
- LSTM or the like may be used.
- the neural network may be a neural network suitable for combined handling of time-series information and location information.
- a ConvLSTM network that combines CNN and LSTM may be used.
- the input layer 781 is capable of extracting time-varying feature amounts of input data.
- the output layer 782 can calculate a new feature amount based on the feature amount extracted by the input layer 781, the temporal change of the input data, and the initial value.
- the input layer 781 and the output layer 782 have multiple LSTM layers. Each of input layer 781 and output layer 782 may have three or more LSTM layers.
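The LSTM layers referred to above carry a cell state across time steps, which is what makes them suitable for handling the time-series patient data. A minimal single-unit LSTM step in plain Python, with the standard gate formulation; the parameter values are arbitrary placeholders for the weighting factors that the learning process would actually adjust:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One step of a scalar LSTM cell: input, forget, and output gates
    modulate a cell state that carries information across time steps.
    `p` holds the weighting factors (the parameters adjusted during learning)."""
    i = sigmoid(p["wi"] * x + p["ui"] * h_prev + p["bi"])    # input gate
    f = sigmoid(p["wf"] * x + p["uf"] * h_prev + p["bf"])    # forget gate
    o = sigmoid(p["wo"] * x + p["uo"] * h_prev + p["bo"])    # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h_prev + p["bg"])  # candidate value
    c = f * c_prev + i * g                                   # new cell state
    h = o * math.tanh(c)                                     # new hidden state
    return h, c

# Run an illustrative time series (e.g. yearly feature amounts) through the cell.
params = {k: 0.5 for k in
          ("wi", "ui", "bi", "wf", "uf", "bf", "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [0.2, 0.4, 0.6]:
    h, c = lstm_step(x, h, c, params)
```

Stacking several such layers, as the input layer 781 and output layer 782 do, lets later layers compute new feature amounts from the time-varying features extracted by earlier ones.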
- An intervention effect prediction model is generated by executing machine learning, which will be described later, on the neural network of the intervention effect prediction unit 78.
- the input data to be input to the input layer 781 may be, for example, a parameter indicating the feature amount extracted from the first prediction information regarding the target part at the second point in time when a predetermined period has elapsed from the first point in time.
- the input data may be information indicating an intervention method included in intervention information 85, which will be described later.
- when the intervention effect prediction unit 78 uses the intervention information 85 as input data, it can select at least one of the intervention methods included in the intervention information 85 and output second prediction information predicting the effect of that intervention.
- in response to input of the above-described input data to the input layer 781, the output layer 782 outputs the second prediction information indicating the method of intervention for the subject and the effect of the intervention.
- the second prediction information may be, for example, information representing the extent to which the symptoms of the disease related to the subject's target site improve, or the extent to which their progression is suppressed, when the intervention at the second time point is applied. More specifically, the intervention effect prediction unit 78 may output information such as the following as the second prediction information:
- how close the angle around the target site comes to a normal angle
- how close to a normal size the target site grows or shrinks
- what percentage of wrinkles and blemishes in the target site are alleviated
- how long the state of the target site is maintained
- how much walking ability (including the ability to climb stairs) improves
- the second prediction information is the same type of data as the first prediction information.
- the second predictive information may be information regarding the subject's weight, BMI, waist circumference, visceral fat content, blood pressure, blood sugar level, lipids, uric acid level, or liver function value.
- the second predictive information may be information regarding the subject's hair count, sex hormone level, Norwood classification, or Ludwig classification.
- the second predictive information may be information about the subject's visual acuity, visual field, degree of opacity of the crystalline lens, or Emery-Little classification.
- the second predictive information may be information on the degree of tooth or gum pain, degree of tooth or gum swelling, number of remaining teeth, gingivitis index, or periodontal pocket depth of the subject.
- the second predictive information may be information on the degree of pain, degree of swelling, joint angle, joint range of motion, Larsen classification, or Steinbrocker classification of the subject's whole body, upper limbs, or lower limbs.
- the second predictive information may be information regarding the degree of pain, the degree of swelling, or the joint range of motion of the subject's hand.
- the second predictive information may be information regarding the degree of pain, degree of swelling, joint range of motion, HV angle or M1-M2 angle of the subject's foot.
- the second predictive information may be information about the degree of pain, degree of swelling, joint angle, joint range of motion, or KL classification of the subject's whole body, upper limbs, or lower limbs.
- the second predictive information may be information about the degree of curvature of the spine of the subject, the range of motion of the spine, or the KL classification.
- the second predictive information may be information about the subject's degree of curvature of the spine, range of motion of the spine, or KL classification.
- the second predictive information may be information about the subject's muscle mass.
- the intervention information 85 is information about interventions whose effects are estimated by the intervention effect prediction unit 78.
- the interventions whose effects are estimated include, for example, non-invasive treatments such as weight control, hyperthermia therapy, ultrasound therapy, wearing braces, or taking supplements. The effect of invasive treatment such as surgical treatment may also be estimated as the intervention information 85.
- FIG. 13 is a diagram showing an example of the configuration of a neural network included in the predicted image generation section 72C.
- the predictive image generation model applying a generative adversarial network has two networks, a generator 721C and a discriminator 722C, as shown in FIG. 13.
- the first learning unit 74 acquires the subject image and the first prediction information from the storage unit 8C and inputs them to the generator 721C.
- the first learning unit 74 also inputs the second prediction information generated by the intervention effect prediction unit 78 to the generator 721C.
- the generator 721C generates predicted image candidates (false images) from the subject image, the first prediction information, and the second prediction information.
- the generator 721C may refer to real images included in the first training data set 82 to generate predicted image candidates.
- the first learning unit 74 inputs the predicted image candidates generated by the generator 721C and the real images included in the first training data set 82 to the discriminator 722C.
- the discriminator 722C takes as inputs the real images from the first training data set 82 and the predicted image candidates generated by the generator 721C, and outputs, for each image, the probability that it is a real image.
- the first learning unit 74 calculates a classification error that indicates how accurate the probability output by the discriminator 722C is.
- the first learning unit 74 iteratively improves the discriminator 722C and the generator 721C using the error back propagation method.
- the first learning unit 74 updates the weights and biases of the discriminator 722C and of the generator 721C until the probability output by the discriminator 722C satisfies a predetermined criterion.
- the predicted image generation unit 72C can generate a predicted image that is indistinguishable from the real thing.
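The adversarial training just described alternates between improving the discriminator 722C (tell real images from generated candidates) and the generator 721C (fool the discriminator). A toy numerical sketch of that alternation on one-dimensional "images", with logistic stand-ins for both networks; everything here, including the data and learning rates, is purely illustrative:

```python
import math
import random

random.seed(0)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

# Real "images": scalar values clustered around 2.0.
real_samples = [2.0 + random.gauss(0, 0.1) for _ in range(200)]

w, c = 0.0, 0.0   # discriminator stand-in: D(x) = sigmoid(w*x + c)
a, b = 0.1, 0.0   # generator stand-in:     G(z) = a*z + b
lr = 0.05

for step in range(500):
    x_real = random.choice(real_samples)
    z = random.gauss(0, 1)
    x_fake = a * z + b  # predicted image candidate (a "false image")

    # Discriminator update: raise D(real), lower D(fake)
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator update: raise D(fake), i.e. fool the discriminator
    # (gradient ascent on log D(fake)).
    d_fake = sigmoid(w * x_fake + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w
```

As the alternation proceeds, the generator's output drifts toward the real data, at which point the discriminator can no longer separate the two; this is the sense in which the trained generator produces candidates "indistinguishable from the real thing".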
- the third learning unit 79 controls the learning processing for the neural network of the intervention effect prediction unit 78.
- Third teacher data 84 is used for this learning.
- the third teacher data 84 is data used for machine learning to generate an intervention effect prediction model.
- the third teacher data 84 includes third learning input data used as input data and third teacher data used to calculate the error of the output of the intervention effect prediction unit 78.
- the third learning input data may include, for example, for each of a plurality of patients to whom the intervention was applied, information indicating the time when the intervention was applied, a patient image showing the target site of each patient, and symptom information regarding the onset or progression of symptoms at the target site at the time each patient image was captured.
- the third teacher data may include a patient image showing the target region of the patient at a time after the patient image used for the third learning input data was captured (for example, one year later), and symptom information regarding the onset or progression of symptoms at the target site of the patient at that time.
- the third training data may include symptom information regarding the target region of each patient at the time when the patient's image was captured.
- the symptom information may include information regarding the onset time of the patient's disease or the progression of symptoms.
- the third teacher data 84 may include state information indicating the state of the target part of each patient, acquired at a plurality of past time points, associated for each patient with intervention information indicating the intervention applied to that patient, that is, effect information.
- the third teacher data 84 may be time-series data in which patient images, obtained at a plurality of time points from each of a plurality of patients to whom an intervention has been applied in the past, are associated with information about the symptoms at the time each patient image was captured.
- FIG. 14 is a flow chart showing an example of the flow of the learning processing of the neural network of the intervention effect prediction unit 78.
- the third learning unit 79 acquires the third learning input data included in the third teacher data 84 from the storage unit 8C (step S41).
- the third learning input data includes, for example, (a) information indicating the time when the intervention was applied to each of a plurality of patients to whom the intervention was applied, (b) pixel data of a patient image showing the target site of each patient, and (c) symptom information regarding the onset or progression of symptoms at the target site at the time each patient image was captured.
- the third learning unit 79 selects a certain patient (step S42).
- the third learning unit 79 inputs to the input layer 781, for a certain patient to whom the intervention was applied, (a) information indicating the time when the intervention was applied, (b) pixel data of a patient image showing the target region of the patient, and (c) symptom information regarding the onset or progression of symptoms at the target region at the time the patient image was captured (step S43).
- the third learning unit 79 acquires output data, which is information indicating at least one of the method of intervention for a certain patient and the effect of the intervention, from the output layer 782 (step S44).
- This output data is of the same kind as the third teacher data.
- the third learning unit 79 acquires the third teacher data included in the third teacher data 84. Then, the third learning unit 79 compares the acquired output data with the information, included in the third teacher data, indicating the intervention method for the patient and the effect of the intervention, and calculates the error (step S45).
- the third learning unit 79 adjusts the intervention effect prediction model so that the error becomes smaller (step S46).
- any known method can be applied to adjust the intervention effect prediction model.
- error backpropagation may be employed as a method of adjusting the intervention effect prediction model.
- the intervention effect prediction model after adjustment becomes a new intervention effect prediction model, and the intervention effect prediction unit 78 uses the new intervention effect prediction model in subsequent calculations.
- the parameters used in the intervention effect prediction unit 78 may be adjusted.
- the parameters include, for example, parameters used in the input layer 781 and the output layer 782.
- the parameters include weighting factors used in the LSTM layers of the input layer 781 and the output layer 782.
- the parameters may also include filter coefficients.
- if the error is not within a predetermined range, or if patient images of all patients included in the third teacher data 84 have not yet been input (NO in step S47), the third learning unit 79 changes the patient (step S48) and returns to step S43 to repeat the learning process. If the error is within the predetermined range and patient images of all patients included in the third teacher data 84 have been input (YES in step S47), the third learning unit 79 ends the learning process.
- the third teacher data 84 is data used for machine learning to generate an intervention effect prediction model.
- the third teacher data 84 may include effect information, that is, information in which state information indicating the state of the target part of each patient, acquired at a plurality of past time points, is associated for each patient with intervention information indicating the intervention applied to that patient.
- the third teacher data 84 includes third learning input data used as input data and third teacher data used to calculate the error of the output of the intervention effect prediction unit 78.
- the third learning input data is, for example, (a) information indicating the time when the intervention was applied to a patient, (b) pixel data of a patient image showing the target region of the patient, and (c) symptom information regarding the onset or progression of symptoms at the target region at the time the patient image was captured.
- the image data used as the third learning input data may be medical image data showing target regions of a plurality of patients.
- The medical image data may include, for example, at least one of X-ray image data, CT image data, MRI image data, PET image data, and ultrasound image data obtained by imaging the target parts of a plurality of subjects.
- The third teacher data may include a patient image showing the target part of the patient at a time after the patient image used as the third learning input data was captured (for example, one year later), state information indicating the state of the target part of the patient at that time, and symptom information regarding the target part.
- The third teacher data may be time-series data in which patient images obtained at a plurality of time points from each of a plurality of patients to whom an intervention was applied in the past are associated with information about joint symptoms at the times the patient images were captured.
- One type of input data used when the intervention effect prediction unit 78 learns is (a) a subject image, included in the third learning input data, showing the subject's target part at a certain time point A.
- Based on the above input data, the intervention effect prediction unit 78 outputs, as output data, first prediction information regarding the target part at a time point B after a predetermined period (for example, three years) has elapsed from time point A.
- For example, the intervention effect prediction unit 78 outputs information indicating the angle around the subject's target part at time point B, the degree of enlargement or shrinkage of the target part, the degree of wrinkles and blemishes of the target part, the timing and degree of onset of pain at the target part, and the timing at which invasive treatment will become necessary for the target part.
- The output data listed here are examples, and the output data are not limited to these.
- Another type of input data used when the intervention effect prediction unit 78 learns is information, included in the third teacher data 84, indicating the degree of onset or progression of the disease at the patient's target part at a certain time point B, together with information, included in the intervention information, indicating how interventions should be implemented.
- The intervention effect prediction unit 78 outputs, as output data, information indicating the method of intervention for the subject and the effect of that intervention.
- For example, the intervention effect prediction unit 78 outputs, as output data, information indicating the extent to which the symptoms of the disease at the patient's target part are improved, or the extent to which the progression of those symptoms is suppressed, when the intervention at time point B is applied. More specifically, the intervention effect prediction unit 78 may output the aforementioned second prediction information as the output data.
- FIG. 15 is a flow chart showing an example of the flow of processing performed by the prediction systems 100 and 100a according to this embodiment.
- the prediction information acquisition unit 71 acquires a subject image.
- the basic information acquisition unit 76 acquires basic information (step S51: acquisition step).
- The prediction information generation unit 75B generates first prediction information in response to the input of the subject image and the basic information, and outputs the first prediction information to the prediction information acquisition unit 71 and the intervention effect prediction unit 78 (step S52: first information prediction step).
- the intervention effect prediction unit 78 refers to the intervention information 85 and selects at least one of the intervention methods included in the intervention information 85 (step S53: intervention method selection step).
- The intervention effect prediction unit 78 generates second prediction information about the selected intervention method in response to the input of the first prediction information, and outputs the second prediction information to the predicted image generation unit 72C and the output control unit 73C (step S54: intervention effect prediction step).
- the predicted image generation unit 72C generates a predicted image in response to input of the subject image, the first predicted information, and the second predicted information, and outputs the predicted image to the terminal device 2 (step S55: prediction image generation step).
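The flow of steps S51–S55 above can be sketched as a simple composition of the units involved. The four callables below are hypothetical stand-ins for the prediction information generation unit (75B), intervention selection against the intervention information (85), the intervention effect prediction unit (78), and the predicted image generation unit (72C); the dict-based "image" is purely illustrative.

```python
def predict_pipeline(subject_image, basic_info, interventions,
                     predict_first, select_intervention,
                     predict_effect, generate_image):
    """Sketch of steps S51-S55: the concrete models are stand-ins."""
    first = predict_first(subject_image, basic_info)        # step S52
    method = select_intervention(interventions)             # step S53
    second = predict_effect(first, method)                  # step S54
    return generate_image(subject_image, first, second)     # step S55

# Toy stand-ins: the "image" is just a dict of values.
result = predict_pipeline(
    subject_image={"pixels": [0.2, 0.4]},
    basic_info={"age": 60},
    interventions=["exercise therapy", "drug therapy"],
    predict_first=lambda img, info: {"progression": 0.7},
    select_intervention=lambda methods: methods[0],
    predict_effect=lambda first, m: {"method": m,
                                     "progression": first["progression"] * 0.5},
    generate_image=lambda img, first, second: {**img, **second},
)
```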
- In this way, the prediction systems 100 and 100a can output, as a visually easy-to-understand predicted image, how the state of the target part at the second time point differs depending on whether the intervention takes effect.
- Because the predicted image is generated from the subject image, which is an image of the subject himself or herself, the predicted image is a realistic image that is persuasive to the subject. Therefore, for example, if the doctor in charge presents it to the subject, the subject can effectively understand the need for intervention, and the subject's motivation for the intervention can be enhanced.
- The control blocks of the prediction devices 1, 1A, 1B, and 1C may be realized by logic circuits (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software.
- In the latter case, the prediction devices 1, 1A, 1B, and 1C are equipped with a computer that executes the instructions of a program, which is software implementing each function.
- This computer includes, for example, one or more processors and a computer-readable recording medium storing the program.
- In the computer, the processor reads the program from the recording medium and executes it, thereby achieving the object of the present disclosure.
- A CPU (Central Processing Unit), for example, can be used as the processor.
- As the recording medium, a "non-transitory tangible medium" such as a ROM (Read Only Memory), a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
- A RAM (Random Access Memory) into which the program is loaded may further be provided.
- The program may be supplied to the computer via any transmission medium (such as a communication network or a broadcast wave) capable of transmitting the program.
- One aspect of the present disclosure may also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
Abstract
Description
(Overview of the Prediction System)
A prediction system according to one aspect of the present disclosure is a system that generates and outputs a predicted image forecasting a change in the state of a target part of a subject's body. Here, the target part may be any part of the subject's body, for example, the whole body, head, eyes, oral cavity, neck, arms, hands, torso, waist, buttocks, legs, or feet. The predicted image may be an image forecasting a change in the state of any of the skin, hair, eyeballs, teeth, gums, muscle, fat, bone, cartilage, joints, intervertebral discs, or the like of the target part.
Next, the configuration of a prediction system 100 including a prediction device 1 that acquires a subject image and first prediction information and, based on the first prediction information, generates and outputs a predicted image from the subject image will be described with reference to FIG. 1. In one embodiment of the present disclosure, the prediction device 1 of the prediction system 100 can function by itself as the prediction system described above. FIG. 1 is a block diagram showing a configuration example of the prediction system 100 in a medical facility 5 in which the prediction device 1 has been introduced.
The prediction device 1 need not be a computer installed in a given medical facility 5; it may instead be communicably connected, via a communication network 6, to a LAN provided in each of a plurality of medical facilities 5. FIG. 2 is a block diagram showing a configuration example of a prediction system 100a according to another aspect of the present disclosure.
Next, the configuration of the prediction systems 100 and 100a will be described with reference to FIG. 3. FIG. 3 is a block diagram showing an example of the configuration of the prediction systems 100 and 100a according to one aspect of the present disclosure. For convenience of explanation, members having the same functions as members already described are given the same reference signs, and their description is not repeated.
The prediction device 1 includes a control unit 7 that comprehensively controls each unit of the prediction device 1, and a storage unit 8 that stores various data used by the control unit 7. The control unit 7 includes a prediction information acquisition unit 71, a predicted image generation unit 72, and an output control unit 73. The storage unit 8 stores a control program 81, which is a program for performing various controls of the prediction device 1.
The prediction information acquisition unit 71 acquires a subject image from the subject image management device 4 and acquires first prediction information from the first prediction information management device 3. The subject image and the first prediction information are input data supplied to the predicted image generation unit 72.
Based on the first prediction information, the predicted image generation unit 72 generates and outputs, from the subject image, a predicted image forecasting the state of the target part at the second time point. The predicted image generation unit 72 may generate an image imitating at least a part of the subject image used to generate the predicted image. The predicted image generated by the predicted image generation unit 72 may be an image showing the influence that a disease occurring in the target part exerts on that target part. The generated predicted image may include an image related to a part of the subject that has not changed from the first time point to the second time point. That is, the predicted image may include both an image related to a part that has changed between the first and second time points and an image related to a part that has not changed.
The output control unit 73 transmits the predicted image output from the predicted image generation unit 72 to the terminal device 2. Together with the predicted image, the output control unit 73 may transmit to the terminal device 2 at least one of the subject image and the first prediction information used to generate the predicted image.
The first learning unit 74 controls learning processing for the neural network included in the predicted image generation unit 72.
The learning processing for generating a predicted image generation model to which a generative adversarial network (GAN) is applied will now be described with reference to FIG. 4. FIG. 4 is a diagram showing an example of the configuration of the neural network included in the predicted image generation unit 72.
The flow of processing performed by the prediction systems 100 and 100a will now be described with reference to FIG. 5. FIG. 5 is a flowchart showing an example of the flow of processing performed by the prediction systems 100 and 100a according to this embodiment.
Other embodiments of the present disclosure are described below. For convenience of explanation, members having the same functions as members described in the above embodiment are given the same reference signs, and their description is not repeated.
The prediction device 1A includes a control unit 7A that comprehensively controls each unit of the prediction device 1A, and a storage unit 8 that stores various data used by the control unit 7A. In addition to the prediction information acquisition unit 71, the predicted image generation unit 72, and the output control unit 73, the control unit 7A further includes a prediction information generation unit 75.
From a subject image showing the subject's target part at a first time point, the prediction information generation unit 75 generates first prediction information regarding the target part at a second time point after a predetermined period has elapsed from the first time point, and outputs the first prediction information to the prediction information acquisition unit 71.
- Information on pain occurring in the subject's target part
- Information on the subject's catastrophic thinking
- Information on the subject's motor ability
- Information indicating the subject's life satisfaction.
The flow of processing performed by the prediction systems 100 and 100a of the second embodiment will now be described with reference to FIG. 8. FIG. 8 is a flowchart showing an example of the flow of processing performed by the prediction systems 100 and 100a of the second embodiment.
In the prediction device 1A, the prediction information generation unit 75 generated the first prediction information based on the subject image. In contrast, in a prediction device 1B according to this modification of Embodiment 2, a prediction information generation unit 75B generates the first prediction information based on basic information in addition to the subject image. The configuration of the prediction systems 100 and 100a including such a prediction device 1B will be described with reference to FIG. 9. FIG. 9 is a block diagram showing a modification of the configuration of the prediction systems 100 and 100a according to another aspect of the present disclosure. The prediction device 1B further includes the prediction information generation unit 75B in a control unit 7B.
The basic information acquisition unit 76 acquires basic information, which is information related to the subject, from an electronic medical record management device 9. The electronic medical record management device 9 is a computer functioning as a server for managing the electronic medical record information of subjects examined at the medical facility 5 or at a medical facility other than the medical facility 5. The electronic medical record information may include the subject's basic information and interview information. The basic information is input data supplied to the prediction information generation unit 75B in addition to the subject image.
The second learning unit 77 controls learning processing for the neural network included in the prediction information generation unit 75B. A second learning dataset 83, described later, is used for this learning. Specific examples of the learning performed by the second learning unit 77 are described later.
The learning processing for generating a prediction information generation model to which a neural network is applied will now be described with reference to FIG. 10. FIG. 10 is a flowchart showing an example of the flow of the learning processing for the neural network included in the prediction information generation unit 75B.
The second learning dataset 83 is data used for machine learning to generate the prediction information generation model. The second learning dataset 83 may include patient information on patients having a disease of the target part. Here, the patient information may be information that includes state information indicating the state of each patient's target part acquired at a plurality of past time points, in which the state information for each patient is associated with information indicating the time point at which that state information was acquired. The second learning dataset 83 includes second learning data used as input data and second teacher data for calculating the error with respect to the first prediction information output by the prediction information generation unit 75B.
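One plausible way to assemble such state information, keyed by acquisition time point, into supervised (input, target) pairs is sketched below. The record layout and the scalar `state` field are assumptions made for illustration, not the disclosed dataset format.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    time: int          # e.g., months since the first visit
    state: float       # state of the target part (scalar stand-in)

def make_pairs(records, horizon):
    """From per-patient time series, build (current state, future state)
    training pairs separated by `horizon` time units — a sketch of how
    a learning dataset pairing states with acquisition time points
    might be assembled."""
    pairs = []
    for obs_list in records.values():
        by_time = {o.time: o.state for o in obs_list}
        for t, s in by_time.items():
            if t + horizon in by_time:
                pairs.append((s, by_time[t + horizon]))
    return pairs

records = {
    "patient_a": [Observation(0, 1.0), Observation(12, 0.8), Observation(24, 0.5)],
    "patient_b": [Observation(0, 0.9), Observation(12, 0.7)],
}
pairs = make_pairs(records, horizon=12)
```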
The flow of processing performed by the prediction systems 100 and 100a including the prediction device 1B will now be described with reference to FIG. 11. FIG. 11 is a flowchart showing an example of the flow of processing performed by the prediction systems 100 and 100a according to this embodiment.
Other embodiments of the present disclosure are described below. For convenience of explanation, members having the same functions as members described in the above embodiments are given the same reference signs, and their description is not repeated.
The prediction device 1C includes a control unit 7C that comprehensively controls each unit of the prediction device 1, and a storage unit 8C that stores various data used by the control unit 7C. In addition to the prediction information acquisition unit 71, a predicted image generation unit 72C, an output control unit 73C, the first learning unit 74, the prediction information generation unit 75B, the basic information acquisition unit 76, and the second learning unit 77, the control unit 7C includes an intervention effect prediction unit 78 and a third learning unit 79.
The predicted image generation unit 72C inputs the subject image, the first prediction information, and the second prediction information into the predicted image generation model and causes it to output a predicted image. The predicted image generation unit 72C outputs the predicted image output from the predicted image generation model (that is, generated by the predicted image generation unit 72C).
The output control unit 73C transmits the predicted image output from the predicted image generation unit 72C to the terminal device 2. As shown in FIG. 12, together with the predicted image, the output control unit 73C may transmit to the terminal device 2 at least one of the subject image, the first prediction information, and the second prediction information used to generate the predicted image.
From the first prediction information regarding the target part at the second time point, after a predetermined period has elapsed from the first time point, the intervention effect prediction unit 78 outputs second prediction information indicating a method of intervention for the subject and the effect of that intervention. The intervention effect prediction unit 78 may have an intervention effect prediction model capable of estimating the second prediction information from the first prediction information.
- Convolutional neural network (CNN)
- Autoencoder
- Recurrent neural network (RNN)
- Long short-term memory (LSTM).
- How close the angle around the target part comes to a normal angle.
- How close the enlarged or shrunken target part comes to a normal size.
- By approximately what percentage the wrinkles and blemishes of the target part are alleviated.
- How well the state of the target part is maintained.
- How much walking ability (including stair-climbing ability) improves.
The intervention information 85 is information about the interventions whose effects the intervention effect prediction unit 78 estimates. Interventions whose effects may be estimated include non-invasive treatments such as weight restriction, thermotherapy, ultrasound therapy, wearing an orthosis, or taking supplements. The effect of an invasive treatment such as surgical therapy may also be estimated as the intervention information 85.
The learning processing for generating a predicted image generation model to which a generative adversarial network (GAN) is applied will now be described with reference to FIG. 13. FIG. 13 is a diagram showing an example of the configuration of the neural network included in the predicted image generation unit 72C. As shown in FIG. 13, the predicted image generation model to which a generative adversarial network is applied has two networks: a generator 721C and a discriminator 722C.
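The two-network structure just described — a generator and a discriminator trained against each other — can be illustrated with a deliberately tiny numpy sketch. Real models would be deep networks over images; here both networks are single linear maps, the adversarial losses are only evaluated rather than optimized, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the two networks of a GAN-based image generation model:
# a generator (cf. 721C) and a discriminator (cf. 722C).
G = rng.normal(size=(8, 4)) * 0.1   # generator weights: noise -> "image"
D = rng.normal(size=(8,)) * 0.1     # discriminator weights: "image" -> logit

def generate(z):
    """Map a noise vector to a synthetic sample."""
    return np.tanh(G @ z)

def discriminate(x):
    """Return the probability that x is a real sample."""
    return 1.0 / (1.0 + np.exp(-(D @ x)))

z = rng.normal(size=4)
fake = generate(z)
p_real = discriminate(fake)

# Adversarial objectives (evaluated, not optimized, in this sketch):
d_loss = -np.log(1.0 - p_real + 1e-9)   # discriminator: penalize calling a fake "real"
g_loss = -np.log(p_real + 1e-9)         # generator: wants the fake judged real
```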
The third learning unit 79 controls learning processing for the neural network included in the intervention effect prediction unit 78. The third teacher data 84 is used for this learning.
The learning processing for generating an intervention effect prediction model to which a neural network is applied will now be described with reference to FIG. 14, referring also to FIG. 7. FIG. 14 is a flowchart showing an example of the flow of the learning processing for the neural network included in the intervention effect prediction unit 78.
The third teacher data 84 is data used for machine learning to generate the intervention effect prediction model. The third teacher data 84 may include effect information, that is, information that includes state information indicating the state of each patient's target part acquired at a plurality of past time points, in which the state information for each patient is associated with intervention information indicating the intervention applied to each patient. The third teacher data 84 includes third learning input data used as input data and third teacher data for calculating the error with respect to the first prediction information output by the intervention effect prediction unit 78.
The flow of processing performed by the prediction systems 100 and 100a will now be described with reference to FIG. 15. FIG. 15 is a flowchart showing an example of the flow of processing performed by the prediction systems 100 and 100a according to this embodiment.
The control blocks of the prediction devices 1, 1A, 1B, and 1C (in particular, the control units 7, 7A, 7B, and 7C) may be realized by logic circuits (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software.
2, 2a, 2b Terminal device
3, 3a, 3b First prediction information management device
4, 4a, 4b Subject image management device
5, 5a, 5b Medical facility
6 Communication network
7, 7A, 7B, 7C Control unit
8, 8B, 8C Storage unit
9, 9a, 9b Electronic medical record management device
71 Prediction information acquisition unit
72, 72C Predicted image generation unit
73, 73C Output control unit
74 First learning unit
75, 75B Prediction information generation unit
76 Basic information acquisition unit
77 Second learning unit
78 Intervention effect prediction unit
79 Third learning unit
81 Control program
82 First learning dataset
83 Second learning dataset
84 Third teacher data
85 Intervention information
100, 100a Prediction system
721, 721C Generator
722, 722C Discriminator
751, 781 Input layer
752, 782 Output layer
Claims (20)
- [Claim 1] A prediction system comprising: a prediction information acquisition unit that acquires (a) a subject image showing a target part of a subject at a first time point, and (b) first prediction information regarding the target part at a second time point after a predetermined period has elapsed from the first time point; and a predicted image generation unit that generates and outputs, from the first prediction information and the subject image, a predicted image forecasting the state of the target part at the second time point.
- [Claim 2] The prediction system according to claim 1, wherein the predicted image generation unit has a predicted image generation model capable of generating the predicted image using the subject image and the first prediction information.
- [Claim 3] The prediction system according to claim 1 or 2, wherein the predicted image is an image imitating at least a part of the subject image.
- [Claim 4] The prediction system according to any one of claims 1 to 3, wherein the subject image is an appearance image showing the target part.
- [Claim 5] The prediction system according to any one of claims 1 to 4, wherein the subject image is a medical image showing the target part.
- [Claim 6] The prediction system according to claim 5, wherein the medical image is at least one of an X-ray image, a CT image, an MRI image, a PET image, and an ultrasound image of the subject.
- [Claim 7] The prediction system according to any one of claims 1 to 6, wherein the subject image is an image of any of the subject's whole body, head, upper body, lower body, upper limbs, and lower limbs.
- [Claim 8] The prediction system according to any one of claims 1 to 7, wherein the predicted image is an image forecasting the influence that a disease occurring in the target part exerts on the target part.
- [Claim 9] The prediction system according to claim 8, wherein the disease includes at least one of obesity, alopecia, cataract, periodontal disease, rheumatoid arthritis, Heberden's nodes, hallux valgus, osteoarthritis, spondylosis deformans, compression fracture, and sarcopenia.
- [Claim 10] The prediction system according to claim 2, wherein the predicted image generation model is a neural network trained using, as teacher data, a plurality of pieces of image data showing target parts.
- [Claim 11] The prediction system according to claim 2 or 10, wherein the predicted image generation model is a generative adversarial network or an autoencoder.
- [Claim 12] The prediction system according to claim 8 or 9, wherein the first prediction information is information related to the shape and appearance of the target part in relation to the disease of the target part.
- [Claim 13] The prediction system according to any one of claims 1 to 12, further comprising a prediction information generation unit that generates the first prediction information from the subject image and outputs the first prediction information to the prediction information acquisition unit, wherein the prediction information generation unit has a prediction information generation model capable of estimating the first prediction information from the subject image.
- [Claim 14] The prediction system according to claim 13, further comprising a basic information acquisition unit that acquires basic information including at least one of the subject's sex, age, height, weight, and information indicating the state of the target part of the subject at the first time point, wherein the prediction information generation model is capable of estimating the first prediction information from the subject image of the subject and the basic information of the subject.
- [Claim 15] The prediction system according to claim 13 or 14, wherein the prediction information generation model is a neural network trained using, as teacher data, patient information on patients having a disease of the target part, and the patient information includes state information indicating the state of each patient's target part acquired at a plurality of past time points, the state information for each patient being associated with information indicating the time point at which that state information was acquired.
- [Claim 16] The prediction system according to any one of claims 1 to 15, further comprising an intervention effect prediction unit that receives the first prediction information as input and outputs second prediction information indicating a method of intervention for the subject and the effect of that intervention.
- [Claim 17] The prediction system according to claim 16, wherein the intervention effect prediction unit has, as an intervention effect prediction model, a neural network trained using effect information as teacher data, and the effect information includes state information indicating the state of each patient's target part acquired at a plurality of past time points, the state information for each patient being associated with intervention information indicating the intervention applied to each patient.
- [Claim 18] The prediction system according to claim 16 or 17, wherein the method of intervention includes at least one of diet therapy, exercise therapy, drug therapy, orthotic therapy, rehabilitation, and surgical therapy.
- [Claim 19] A control method for a prediction system, comprising: a prediction information acquisition step of acquiring (a) a subject image showing a target part of a subject at a first time point, and (b) first prediction information regarding the target part at a second time point after a predetermined period has elapsed from the first time point; and a predicted image generation step of generating and outputting, from the first prediction information and the subject image, a predicted image forecasting the state of the target part at the second time point, wherein the prediction system has a predicted image generation model capable of generating the predicted image using the subject image and the first prediction information.
- [Claim 20] A control program for causing a computer to function as the prediction system according to any one of claims 1 to 18, the control program causing the computer to function as the prediction information acquisition unit and the predicted image generation unit.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2022209591A AU2022209591A1 (en) | 2021-01-20 | 2022-01-19 | Prediction system, control method, and control program |
CN202280010250.6A CN116724360A (zh) | 2021-01-20 | 2022-01-19 | 预测系统、控制方法以及控制程序 |
JP2022576720A JPWO2022158490A1 (ja) | 2021-01-20 | 2022-01-19 | |
EP22742612.9A EP4283631A1 (en) | 2021-01-20 | 2022-01-19 | Prediction system, control method, and control program |
US18/273,192 US20240119587A1 (en) | 2021-01-20 | 2022-01-19 | Prediction system, control method, and control program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021007477 | 2021-01-20 | ||
JP2021-007477 | 2021-01-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022158490A1 true WO2022158490A1 (ja) | 2022-07-28 |
Family
ID=82549432
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/001798 WO2022158490A1 (ja) | 2021-01-20 | 2022-01-19 | 予測システム、制御方法、および制御プログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US20240119587A1 (ja) |
EP (1) | EP4283631A1 (ja) |
JP (1) | JPWO2022158490A1 (ja) |
CN (1) | CN116724360A (ja) |
AU (1) | AU2022209591A1 (ja) |
WO (1) | WO2022158490A1 (ja) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2008036068A | 2006-08-04 | 2008-02-21 | Hiroshima Univ | Osteoporosis diagnosis support device and method, osteoporosis diagnosis support program, computer-readable recording medium recording the osteoporosis diagnosis support program, and LSI for osteoporosis diagnosis support |
- WO2017191847A1 * | 2016-05-04 | 2017-11-09 | 理香 大熊 | Future image prediction device |
- WO2018008288A1 * | 2016-07-06 | 2018-01-11 | 臨床医学研究所株式会社 | Intervention effect estimation system, intervention effect estimation method, and program used in the intervention effect estimation system |
- WO2020044824A1 * | 2018-08-31 | 2020-03-05 | 日本電信電話株式会社 | Intervention content estimation device, method, and program |
- WO2020138085A1 * | 2018-12-25 | 2020-07-02 | 京セラ株式会社 | Disease prediction system |
-
2022
- 2022-01-19 CN CN202280010250.6A patent/CN116724360A/zh active Pending
- 2022-01-19 US US18/273,192 patent/US20240119587A1/en active Pending
- 2022-01-19 AU AU2022209591A patent/AU2022209591A1/en active Pending
- 2022-01-19 JP JP2022576720A patent/JPWO2022158490A1/ja active Pending
- 2022-01-19 WO PCT/JP2022/001798 patent/WO2022158490A1/ja active Application Filing
- 2022-01-19 EP EP22742612.9A patent/EP4283631A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN116724360A (zh) | 2023-09-08 |
EP4283631A1 (en) | 2023-11-29 |
US20240119587A1 (en) | 2024-04-11 |
AU2022209591A1 (en) | 2023-08-17 |
JPWO2022158490A1 (ja) | 2022-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- JP7283673B1 (ja) | Estimation device, program, and recording medium | |
US20170360578A1 (en) | System and method for producing clinical models and prostheses | |
US20200129237A1 (en) | Body engagers and methods of use | |
- JP2023169392A (ja) | Prediction device, prediction system, control method, and control program | |
US20220223255A1 (en) | Orthopedic intelligence system | |
Villamil et al. | Simulation of the human TMJ behavior based on interdependent joints topology | |
WO2022158490A1 (ja) | 予測システム、制御方法、および制御プログラム | |
JP2018535763A (ja) | 画像処理方法 | |
- JP7357872B2 (ja) | Computer system, method, and program for estimating a subject's state | |
- Orsbon | Swallowing Biomechanics of the Macaca mulatta Hyolingual Apparatus | |
US20240087716A1 (en) | Computer-assisted recommendation of inpatient or outpatient care for surgery | |
Kulkarni et al. | Artificial intelligence-The robo radiologists | |
Bellapukonda et al. | Digitalization of Pediatric Dentistry: A Review | |
Dołoszycka et al. | Selected aspects of anatomy and biomechanics of the stomatognathic system | |
- TW202405756A (zh) | Appearance reconstruction method for medical or cosmetic purposes | |
Zampier et al. | EFFECTS OF TRANSCRANIAL DIRECT CURRENT STIMULATION COMBINED WITH ARM SWING ON THE WALKING PERFORMANCE OF PEOPLE WITH PARKINSON’S DISEASE | |
Liffey | Imaging and device biomechanics: Modelling, diagnosis, rehabilitation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22742612; Country of ref document: EP; Kind code of ref document: A1 |
 | WWE | Wipo information: entry into national phase | Ref document number: 202280010250.6; Country of ref document: CN |
 | WWE | Wipo information: entry into national phase | Ref document number: 18273192; Country of ref document: US |
 | ENP | Entry into the national phase | Ref document number: 2022576720; Country of ref document: JP; Kind code of ref document: A |
 | ENP | Entry into the national phase | Ref document number: 2022209591; Country of ref document: AU; Date of ref document: 20220119; Kind code of ref document: A |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | ENP | Entry into the national phase | Ref document number: 2022742612; Country of ref document: EP; Effective date: 20230821 |