WO2022045202A1 - Image processing device, image processing system, annotation assignment method, trained model production method, trained model, and program - Google Patents


Info

Publication number
WO2022045202A1
Authority
WO
WIPO (PCT)
Prior art keywords
annotation
dimensional
image
slice
data
Application number
PCT/JP2021/031188
Other languages
English (en)
Japanese (ja)
Inventor
裕嗣 村津
和歳 鵜飼
昌司 小橋
明弘 圓尾
圭吾 林
Original Assignee
グローリー株式会社
兵庫県公立大学法人
裕嗣 村津
Application filed by グローリー株式会社, 兵庫県公立大学法人, 裕嗣 村津
Publication of WO2022045202A1

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/05 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B 5/055 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A61B 6/00 - Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/02 - Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03 - Computed tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 - General purpose image data processing
    • G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G06T 7/00 - Image analysis

Definitions

  • the present invention relates to a processing technique for a plurality of two-dimensional slice images and a technique related thereto.
  • Patent Document 1 describes a technique for determining the presence or absence of a lesion from a plurality of two-dimensional slice images by machine learning using a neural network.
  • In such a technique, a large number of two-dimensional slice images including known lesion sites are used as teacher data when training a machine learning learner.
  • The image processing apparatus according to the present invention is characterized by including a control unit that generates three-dimensional surface data based on a plurality of two-dimensional slice images, and a reception unit that accepts an operation of assigning a 3D annotation, that is, an annotation having three-dimensional position information, to the three-dimensional surface data.
  • the reception unit may accept a 3D annotation addition operation over a range spanning two or more two-dimensional slice images among the plurality of two-dimensional slice images.
  • The reception unit may accept, as the operation of designating the position of the 3D annotation, an operation of designating a two-dimensional position on a projection surface onto which the three-dimensional surface data is projected, and the control unit may convert the two-dimensional position on the projection surface into a three-dimensional position on the three-dimensional surface data to generate the three-dimensional position information of the 3D annotation.
  • the control unit may execute machine learning by using data reflecting annotations on the three-dimensional surface data as teacher data.
  • The control unit may execute a first process of adding the 3D annotation to the three-dimensional surface data according to the operation of the operating user and a second process of automatically adding, based on the 3D annotation, a 2D annotation, that is, an annotation having two-dimensional position information, to at least one of the plurality of two-dimensional slice images, and may execute the machine learning by using data based on the at least one two-dimensional slice image to which the 2D annotation is attached as the teacher data.
  • The 2D annotation may be added not only to the corresponding two-dimensional position, that is, the two-dimensional position corresponding to the three-dimensional position of the 3D annotation, in the at least one two-dimensional slice image corresponding to that three-dimensional position, but also, in each two-dimensional slice image, to a two-dimensional position determined to be the corresponding two-dimensional position in any of a predetermined number of two-dimensional slice images in the vicinity of that slice image.
  • The 2D annotation may include a 2D basic annotation, given to the two-dimensional position corresponding to the three-dimensional position of the 3D annotation in at least one two-dimensional slice image corresponding to that three-dimensional position, and a 2D complementary annotation that complements the 2D basic annotation, given, in each two-dimensional slice image, to a two-dimensional position to which the 2D basic annotation has been given in any of a predetermined number of slice images in the vicinity of that slice image.
  • The control unit may assign a 2D extended annotation, which is given not only to the corresponding two-dimensional position, that is, the two-dimensional position corresponding to the three-dimensional position of the 3D annotation, but also to positions other than the corresponding two-dimensional position.
  • The 2D extended annotation may include a 2D basic annotation, given to the two-dimensional position corresponding to the three-dimensional position of the 3D annotation in at least one two-dimensional slice image corresponding to that three-dimensional position, and a 2D complementary annotation, given in each two-dimensional slice image to a two-dimensional position to which the 2D basic annotation has been given in any of a predetermined number of two-dimensional slice images in the vicinity of that slice image.
  • In each two-dimensional slice image, the 2D annotation may be given to all two-dimensional positions determined to correspond to the three-dimensional position of the 3D annotation in either that two-dimensional slice image or any of a predetermined number of two-dimensional slice images in its vicinity.
  • The control unit may execute the machine learning by using, as the teacher data, a partial image within the bounding box corresponding to two or more 2D annotations in at least some of the two-dimensional slice images, among the plurality of two-dimensional slice images, to which two or more 2D annotations are attached.
  • The control unit may execute the machine learning by using, as the teacher data, a partial image within the bounding box corresponding to both the 2D basic annotation and the 2D complementary annotation in at least some of the plurality of two-dimensional slice images.
  • the control unit may execute inference processing by using the trained model generated by the machine learning.
  • the three-dimensional surface data may be data representing the three-dimensional surface shape of bone.
  • The plurality of two-dimensional slice images may be slice images of a subject having a fractured state, and the three-dimensional surface data may be data representing the three-dimensional surface shape of the bone. The reception unit may accept an operation of adding the 3D annotation indicating the fractured portion, and the control unit may execute the machine learning by using the data reflecting the annotation on the three-dimensional surface data as the teacher data and may use the trained model generated by the machine learning to perform inference processing related to fracture detection.
  • the image processing system is characterized by including one of the above image processing devices and an image generation device for generating a plurality of two-dimensional slice images.
  • The program according to the present invention is a program for causing a computer to execute a) a step of accepting an annotation addition operation for three-dimensional surface data generated based on a plurality of two-dimensional slice images, and b) a step of adding, according to the addition operation, an annotation having three-dimensional position information to the surface of a three-dimensional model relating to the three-dimensional surface data.
  • The annotation assignment method according to the present invention is characterized by comprising a) a step of accepting an annotation addition operation for three-dimensional surface data generated based on a plurality of two-dimensional slice images, and b) a step of adding, according to the addition operation, an annotation having three-dimensional position information to the surface of the three-dimensional model relating to the three-dimensional surface data.
  • The method for producing a trained model according to the present invention is characterized by comprising a) a step of accepting an annotation addition operation for three-dimensional surface data generated based on a plurality of two-dimensional slice images, and b) a step of executing machine learning to generate a trained model by using data reflecting the annotation on the three-dimensional surface data as teacher data.
  • The production method may further comprise c) a step of adding a 3D annotation, that is, an annotation having three-dimensional position information, to the three-dimensional surface data, and d) a step of automatically adding, based on the 3D annotation, a 2D annotation, that is, an annotation having two-dimensional position information, to at least one slice image of the plurality of two-dimensional slice images, and the step b) may include b-1) a step of executing the machine learning by using the at least one two-dimensional slice image to which the 2D annotation is attached as the teacher data.
  • The 2D annotation may be given not only to the corresponding two-dimensional position, that is, the two-dimensional position corresponding to the three-dimensional position of the 3D annotation, in the at least one slice image, but also, in each two-dimensional slice image, to a two-dimensional position determined to be the corresponding two-dimensional position in any one of a predetermined number of two-dimensional slice images in the vicinity of that slice image.
  • The step b) may include b-1) a step of executing the machine learning by using, as the teacher data, a partial image within the bounding box corresponding to two or more 2D annotations in at least some of the two-dimensional slice images to which two or more 2D annotations are attached.
  • the trained model according to the present invention is characterized in that it is a trained model manufactured by using any of the above manufacturing methods.
  • FIG. 1 is a block diagram showing an image processing system 10.
  • the image processing system 10 is a system that processes a plurality of two-dimensional sliced images 220 obtained by slicing a subject (subject or the like) with a cross section perpendicular to a reference axis.
  • the image processing system 10 includes a slice image generation device 20 and an image processing device 30.
  • the slice image generation device 20 is composed of an MRI (Magnetic Resonance Imaging) device, a CT (Computed Tomography) device, or the like.
  • the slice image generation device 20 generates and acquires a plurality of two-dimensional slice images (also simply referred to as slice images) 220 relating to a subject (subject or the like).
  • The plurality of slice images 220 are images obtained by slicing the subject with cross sections perpendicular to the reference axis at a plurality of different positions on the reference axis (for example, at positions spaced at a pitch of 0.6 mm to 1 mm).
  • the plurality of slice images 220 are acquired over a predetermined range (for example, 300 mm) in the reference axis direction, and are composed of hundreds to thousands (for example, 500) of images.
  • The plurality of slice images 220 may also be acquired with respect to a plurality of different reference axes (for example, nine reference axes in nine different directions) for one sample, so that a plurality of slice image groups are formed.
  • The plurality of slice image groups acquired in this way are composed of several thousand or more images in total (for example, 4500 images), which is a very large number.
  • the plurality of slice image groups may be acquired, for example, by imaging slice image groups in each direction (for example, each of the nine directions) with respect to different reference axes.
  • However, the present invention is not limited to this; the plurality of slice image groups may also be acquired by capturing one slice image group in one direction and performing image conversion processing on that slice image group to generate slice image groups in the other directions (for example, eight slice image groups having the other eight directions as reference axis directions).
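  • The image conversion mentioned above is not detailed in the description; the following is a minimal sketch of one plausible realization (multi-planar reformatting), under the assumption that the captured slice group can be stacked into a voxel volume with known in-plane resolution and slice pitch. Function names and parameters are illustrative, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import zoom

def stack_slices(slices, pitch_mm, pixel_mm):
    """Stack 2D slice images into a voxel volume and resample it so that the
    voxel size along the reference axis matches the in-plane pixel size."""
    volume = np.stack(slices, axis=0)                      # shape (Z, Y, X)
    return zoom(volume, (pitch_mm / pixel_mm, 1.0, 1.0), order=1)

def reformat_other_directions(volume):
    """Derive slice groups whose reference axes are the other two orthogonal
    directions; oblique reference axes would need an extra rotation step."""
    coronal = np.transpose(volume, (1, 0, 2))              # slices perpendicular to Y
    sagittal = np.transpose(volume, (2, 0, 1))             # slices perpendicular to X
    return coronal, sagittal
```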
  • the plurality of slice images (one slice image group) or the plurality of slice image groups are acquired for each of the plurality of samples, and a plurality of slice image groups for the plurality of samples are configured.
  • the number of multiple slice image groups for a plurality of samples is enormous.
  • a slice image group 210 relating to the lumbar region including the pelvis is imaged and acquired.
  • the image processing device 30 executes machine learning based on a plurality of two-dimensional slice images 220 (specifically, a plurality of slice image groups relating to a plurality of samples) acquired by the slice image generation device 20. Specifically, the learning parameters of the learning model 410 (learner) are adjusted using a predetermined machine learning method to generate a trained model 410 (also referred to as 420) (see FIG. 3). As the learning model 410, for example, a neural network model composed of a plurality of layers is used.
  • FIG. 3 is a conceptual diagram showing the processing at the learning stage.
  • The image processing device 30 generates a learning model (trained model 420) for executing an inference process that specifies the lesion site in a slice image, based on teacher data (a plurality of slice image groups with known lesion sites and the like) (see FIGS. 3 and 6).
  • a fracture site (more specifically, a fracture site in the pelvis) is exemplified as a lesion site.
  • FIG. 6 is a flowchart showing the processing of the controller 31 (see FIG. 1) in the learning stage.
  • the image processing device 30 also executes processing related to the preparation stage of teacher data (preparation processing for generating teacher data used in the learning stage). Specifically, an annotation processing or the like for the three-dimensional surface data (three-dimensional surface data of bone) is executed (see FIGS. 2 and 5). Note that FIG. 2 is a conceptual diagram showing processing and data in the preparation stage of teacher data, and FIG. 5 is a flowchart showing processing of the controller 31 (FIG. 1) in the preparation stage.
  • The image processing device 30 also uses the parameter-adjusted learning model 410 (trained model 420) to process each of a plurality of (unknown) slice images 220 (also referred to as 270) acquired for a subject (see FIGS. 4 and 7). Specifically, the image processing apparatus 30 uses the above-mentioned trained model 420 to execute an inference process for identifying a lesion site (fracture site) in each slice image 270. In the inference process, for example, the presence or absence of a fracture site and the position of the fracture site are specified (estimated). Note that FIG. 4 is a conceptual diagram showing the processing of the inference stage in machine learning, and FIG. 7 is a flowchart showing the processing of the controller 31 (FIG. 1) in the inference stage.
  • the image processing device 30 includes a controller 31 (also referred to as a control unit), a storage unit 32, and an operation unit 35.
  • the slice image generation device 20 and the image processing device 30 are connected by wire (or wirelessly) and can communicate with each other.
  • the image processing device 30 receives the information (slice image, etc.) acquired by the slice image generation device 20 from the slice image generation device 20 via a predetermined connection cable.
  • the controller 31 is a control device built in the image processing device 30 and controlling the operation of the image processing device 30.
  • the controller 31 is configured as a computer system including a single or a plurality of CPUs (Central processing Units) (also referred to as a microprocessor or a hardware processor).
  • The controller 31 realizes various types of processing by executing, in the CPU, software programs (hereinafter also simply referred to as programs) stored in the storage unit 32 (a nonvolatile storage unit such as a ROM and/or a hard disk).
  • the program (specifically, a program module group) may be recorded on a portable recording medium such as a USB memory, read from the recording medium, and installed in the image processing apparatus 30. Alternatively, the program may be downloaded via a communication network or the like and installed in the image processing apparatus 30.
  • the controller 31 executes processing related to the preparation stage of teacher data in machine learning. Specifically, a process of acquiring a plurality of slice images 220 from the slice image generation device 20, a process of generating three-dimensional surface data 330, a process of assigning an annotation to the three-dimensional surface data 330, and a teacher data based on the annotation. Generation processing and the like are executed.
  • the controller 31 executes a process related to the learning stage in machine learning. Specifically, based on the generated teacher data, a process of optimizing the learning parameters related to the learner (learning model 410) is executed, and the trained model 420 is generated. More specifically, machine learning is performed using the slice image 220 having a known fracture site as teacher data, and the learning parameters are optimized. As a result, the trained model 420 is generated.
  • the controller 31 executes a process related to the inference stage in machine learning. Specifically, an inference process is executed for each of a plurality of (unknown) sliced images 270 acquired for a certain subject using a parameter-adjusted learning model (trained model 420). Specifically, an inference process for identifying the lesion site (fracture site) of each slice image 270 is executed. In addition, the controller 31 also executes a process of outputting an inference result (a process of displaying an image including a fractured portion, etc.).
  • the storage unit 32 is composed of a storage device such as a hard disk drive (HDD) and / or a solid state drive (SSD).
  • the storage unit 32 stores a plurality of two-dimensional slice images 220, 260, 270, 280, three-dimensional surface data 330, a learning model 410 (including learning parameters related to the learning model) (and thus a trained model 420) and the like.
  • the operation unit 35 includes an operation input unit 35a for receiving an operation input to the image processing device 30, and a display unit 35b for displaying and outputting various information.
  • The operation input unit 35a is also referred to as a reception unit.
  • the display unit 35b displays a slice image 280 or the like regarding the result of the inference processing using the trained model 420.
  • A mouse, a keyboard, or the like is used as the operation input unit 35a, and a liquid crystal display or the like is used as the display unit 35b.
  • a touch panel that also functions as a part of the operation input unit 35a and also as a part of the display unit 35b may be provided.
  • the image processing device 30 is also referred to as a medical image processing device, and the image processing system 10 is also referred to as a medical image processing system.
  • a plurality of slice images 220 (slice image group 210) relating to a subject (individual) having a fracture state (fracture disease) are used as teacher data in learning with teacher data. More specifically, a plurality of two-dimensional slice images 220 (a plurality of slice image groups 210) relating to a plurality of subjects having a fractured state are utilized. In particular, a two-dimensional slice image 220 including a fracture site is used. Specifically, a region including a known fracture site (also referred to as a fracture site region) is specified in the two-dimensional slice image 220, and then the two-dimensional slice image 220 including the fracture site region is used.
  • an annotation is added to the two-dimensional slice image 220 (also simply referred to as a slice image) by using a new method. According to this, it is possible to reduce the complexity in the annotation operation.
  • The new method is a method of annotating using the three-dimensional surface data 330 (see FIGS. 2 and 9) based on the plurality of two-dimensional slice images 220 (see FIG. 8 and the like). More specifically, the image processing device 30 accepts an annotation assignment operation for the three-dimensional surface data 330 by the operating user. Then, based on the assignment operation, the image processing apparatus 30 assigns a 3D (three-dimensional) annotation 53 (an annotation having three-dimensional position information) to the surface of the three-dimensional surface data 330 (the three-dimensional model of the human body structure (bone)) (see FIGS. 2 and 10, etc.).
  • the image processing apparatus 30 assigns a 2D (two-dimensional) annotation 52 (an annotation having information on the two-dimensional position in each slice image) to each of the plurality of two-dimensional slice images 220 based on the 3D annotation 53. (See FIGS. 2 and 11 etc.).
  • FIG. 8 is a schematic diagram showing a plurality of two-dimensional slice images.
  • FIG. 9 is a schematic diagram showing three-dimensional surface data 330 and the like generated based on a plurality of two-dimensional slice images 220 (see FIG. 8 and the like).
  • the three-dimensional surface data 330 is shown in a simplified manner, but in reality, the three-dimensional surface data 330 is so precise that the state of the bone surface can be discriminated (the fracture portion can be visually recognized).
  • FIG. 10 is a conceptual diagram showing the three-dimensional surface data 330 (also referred to as 340) to which the 3D annotation 53 is attached, and FIG. 11 is a conceptual diagram showing the 2D annotation 52 and the like given based on the 3D annotation 53. In FIG. 11, a two-dimensional slice image 220 (also referred to as 260) to which the 2D annotation 52 is attached is shown, together with a bounding box 55 (described later) surrounding the 2D annotation 52 (see the upper right portion of FIG. 16 and the like).
  • FIGS. 10 and 11 show the fracture site present on the left half of the subject. The fracture site is shown on the right side of the figure in FIG. 11 and the like in which a two-dimensional slice image (an image in which the spine is placed on the lower side (an image seen from the foot side instead of the head side)) is drawn.
  • In FIG. 10 and the like, which depict the three-dimensional surface data 330 (a three-dimensional model in which the subject is viewed from the front side), the fracture site likewise appears on the right side of the figure.
  • FIG. 5 is a flowchart showing the processing of the controller 31 (specifically, the processing in the preparation stage). Note that FIG. 5 is also a diagram showing a method of assigning annotations, and is also a diagram showing a method (partial) of generating a trained model.
  • In step S11, the controller 31 acquires a series of a plurality of slice images 220 (see the top row of FIG. 2 and FIG. 8) generated (captured) by the slice image generation device 20. Specifically, a series of slice images 220 regarding a certain subject are acquired.
  • In step S12, the controller 31 generates three-dimensional surface data 330 (a three-dimensional model) based on the plurality of slice images 220 (see also the second stage from the top of FIG. 2).
  • the three-dimensional surface data 330 is data representing the surface of a three-dimensional model (here, a three-dimensional model representing the bones of the pelvis of the subject) relating to a part of the subject.
  • the three-dimensional surface data 330 is configured as polygon mesh data representing a surface by, for example, aggregating a large number of adjacent polygons (for example, triangles).
  • polygon mesh data representing the three-dimensional surface shape of the bone (pelvis) of the subject is generated.
  • The three-dimensional surface data 330 is formed as follows. For example, the surface region of the bone is extracted by image processing for each of the plurality of slice images 220, and the surface regions of the plurality of slice images 220 are stacked in the reference axis direction (normal direction) of the plurality of slice images. Then, the three-dimensional surface data 330 is formed by connecting adjacent surface regions to each other in the stacking direction.
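  • The construction of the polygon mesh is not tied to a specific algorithm in the description; the following is a minimal sketch of one plausible construction, assuming CT slices and a simple intensity threshold for the bone region (both assumptions) and using marching cubes to extract the triangle mesh.

```python
import numpy as np
from skimage import measure

def bone_surface_mesh(slices, bone_threshold=250):
    """Build polygon-mesh surface data from a stack of 2D slice images.

    Each slice is segmented into a bone mask by a threshold (an assumption;
    the patent only says "image processing"), the masks are stacked along the
    reference axis, and marching cubes extracts a triangle mesh that
    approximates the bone surface."""
    masks = [s >= bone_threshold for s in slices]            # per-slice bone region
    volume = np.stack(masks, axis=0).astype(np.float32)      # shape (Z, Y, X)
    verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5)
    return verts, faces    # vertex coordinates and triangle (polygon) indices
```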
  • The controller 31 then accepts an operation for adding an annotation 53 (hereinafter also referred to as a "3D annotation"), that is, an annotation to the three-dimensional surface data 330 having three-dimensional position information. Further, the controller 31 (automatically) attaches the 3D annotation 53 to the three-dimensional surface data 330 (specifically, to the surface of the three-dimensional model relating to the three-dimensional surface data 330) according to the operation.
  • FIG. 14 is a schematic diagram for explaining such a process of assigning the 3D annotation 53.
  • The three-dimensional surface data 330 is actually a three-dimensional surface model of a pelvis having a very complicated structure, but in FIG. 14 it is shown as a cylindrical three-dimensional surface model for simplification of explanation.
  • The controller 31 projects and displays the three-dimensional surface data 330 (in other words, the three-dimensional surface model of the bone) onto a projection surface 70 virtually arranged between the position of the three-dimensional surface data 330 (its position in the virtual space) and the viewpoint position of the operating user.
  • FIG. 15 is a diagram showing how such a projection surface 70 is displayed on the display unit 35b.
  • The three-dimensional surface data 330 (the three-dimensional surface model of the bone) is appropriately rotated, moved (translated), and scaled (enlarged and/or reduced) according to the user's operation.
  • the user can visually recognize the three-dimensional surface data 330 (three-dimensional surface model of bone) projected on the projection surface 70 from a desired viewpoint with a desired size.
  • the display unit 35b displays a projection image 332 (two-dimensional image) in which the three-dimensional surface data 330 is projected onto the projection surface 70.
  • a rectangular projection image 332 is displayed by viewing a cylindrical three-dimensional surface model (three-dimensional surface data 330) having its central axis in the vertical direction from the front.
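  • As an illustration only, the projection of the rotated, translated, and scaled model onto the projection surface 70 can be sketched as a simple orthographic projection; the rotation matrix, offset, and scale stand for the user's view operations and are assumptions, not values given in the patent.

```python
import numpy as np

def project_to_projection_surface(verts, view_rot, view_offset, scale):
    """Project mesh vertices onto the projection surface 70 (orthographic sketch).

    view_rot (3x3) encodes the user's rotate operation, view_offset (2,) the
    translation on the projection plane, and scale the zoom factor."""
    v = verts @ view_rot.T                        # model coordinates -> view coordinates
    xy = v[:, :2] * scale + view_offset           # 2D position in the projection image 332
    depth = v[:, 2]                               # distance along the line of sight
    return xy, depth
```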
  • the operation user draws an annotation line 51 on a portion determined to be a fracture site from a medical point of view.
  • the annotation line 51 is drawn, for example, in a line shape (straight line or curved line) by the operating user using a mouse or the like (operation input unit 35a).
  • Such an operation of drawing the annotation line 51 (an operation including an operation of designating a two-dimensional position on the projection surface 70 on which the three-dimensional surface data 330 is projected) is accepted as an operation of designating the position of the 3D annotation 53.
  • The controller 31 converts the two-dimensional position of the annotation line 51 on the projection surface 70 into a three-dimensional position on the three-dimensional surface data 330, and generates the three-dimensional position information of the 3D annotation 53.
  • each point on the annotation line 51 is moved toward the three-dimensional surface data 330 in the user's line-of-sight direction. Then, the position (reaching position) where each point moving in the line-of-sight direction collides with the surface (the surface of the three-dimensional model of the bone) of the three-dimensional surface data 330 is calculated as the three-dimensional position of the 3D annotation 53.
  • FIG. 14 shows a situation in which the linear annotation line 51 on the projection surface 70 is associated with each position on the surface (curved surface) of the three-dimensional surface data 330. The portion represented by the curve associated with the surface of the three-dimensional surface data 330 is given as the 3D annotation 53.
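  • A minimal sketch of this conversion follows, assuming that each annotation-line point is available in the model's 3D coordinate system on the projection plane and that the viewing direction is a single vector; the brute-force loop over all triangles (a real implementation would use a spatial index or GPU picking) and the function names are illustrative.

```python
import numpy as np

def ray_triangle_hit(origin, direction, tri, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection; returns the hit distance or None."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv_det
    return t if t > 0.0 else None

def annotation_line_to_3d(line_points_3d, view_dir, verts, faces):
    """Move each annotation-line point along the line of sight and keep the
    nearest point where it reaches the bone surface, i.e. a 3D position of
    the 3D annotation 53."""
    hits = []
    for origin in line_points_3d:
        best = None
        for f in faces:
            t = ray_triangle_hit(origin, view_dir, verts[f])
            if t is not None and (best is None or t < best):
                best = t
        if best is not None:
            hits.append(origin + best * view_dir)
    return np.asarray(hits)
```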
  • the annotation line 51 is actually drawn on the surface of a three-dimensional model of the pelvis (three-dimensional surface data 330), as shown in FIG. 10, for example.
  • The upper part of FIG. 10 shows the three-dimensional surface data 330 (the three-dimensional model of the pelvis) generated in step S12, and the lower part of FIG. 10 shows the three-dimensional surface data 330 on which the annotation line 51 is drawn.
  • the annotation line 51 as shown in FIG. 12 or 13 is drawn.
  • The annotation line 51 is drawn as a line having a predetermined width (a width of several polygons (for example, 3 to 4)) in the three-dimensional surface data 330 configured as polygon mesh data (see FIGS. 12 and 13).
  • the predetermined width may be appropriately changed according to a designated operation (change operation) by the operating user.
  • FIGS. 12 and 13 are partially enlarged views of the three-dimensional model (three-dimensional surface data 330).
  • the fracture state is schematically shown.
  • the three-dimensional surface data 330 is shown as polygon mesh data, and the fracture portion on the polygon mesh data is shown.
  • a “complete fracture” is a fracture in which a fragment is formed (a fracture in which the bone is completely torn, etc.).
  • an “incomplete fracture” is a fracture in which no fragment is formed (a fracture in which the bone is not completely torn, such as a state in which a so-called "crack” is generated).
  • FIG. 12 is a partially enlarged view of the three-dimensional model with a complete fracture
  • FIG. 13 is a partially enlarged view of the three-dimensional model with an incomplete fracture.
  • As shown in FIG. 12, in the case of a complete fracture, annotation lines 51 are drawn on both sides of the fragment (the space between the two interfaces of the fracture site; the black part in the center of the lower part of FIG. 12).
  • As shown in FIG. 13, in the case of an incomplete fracture in which no fragment exists, the annotation line 51 is drawn along the portion determined to be the fracture site (the line determined by the doctor or the like to be the fracture site; see the broken line in FIG. 13).
  • the three-dimensional position of the annotation line 51 is calculated at the same time as the drawing of the annotation line 51, and the 3D annotation 53 is generated along with the calculation operation of the three-dimensional position.
  • Specifically, the polygons existing at positions corresponding to the annotation line 51 having the predetermined width are specified in the three-dimensional surface data 330. The annotation line 51 composed of the set of these polygons is drawn on the three-dimensional surface data 330 (on the surface of the three-dimensional surface data 330), and at the same time, the three-dimensional position of each polygon is identified as the three-dimensional position of the 3D annotation 53.
  • the annotation line 51 on the three-dimensional surface data 330 is also expressed as a 3D annotation 53. In this way, the 3D annotation 53 is generated in response to the operation of adding the 3D annotation 53 indicating the fractured portion (the drawing operation of the annotation line 51 in the projected image 332).
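  • A minimal sketch of the polygon selection follows, under the assumption that the annotation hit points from the previous sketch are available and that "belonging to the line" is approximated by a centroid-to-point distance no larger than the predetermined width; the distance criterion and names are illustrative.

```python
import numpy as np

def polygons_on_annotation(verts, faces, annotation_points, width):
    """Identify the polygons making up the drawn annotation line 51 and use
    their centroids as the 3D positions of the 3D annotation 53."""
    centroids = verts[faces].mean(axis=1)                                 # (n_faces, 3)
    dists = np.linalg.norm(centroids[:, None, :] - annotation_points[None, :, :], axis=2)
    selected = np.where(dists.min(axis=1) <= width)[0]                    # annotated polygons
    return selected, centroids[selected]
```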
  • To indicate the fractured portion, the annotation line 51 is drawn over a relatively long range (for example, a range of half or one circumference of the bone) on the surface of the bone (see FIGS. 10, 16 and 17).
  • That is, the operation of assigning the 3D annotation 53 is accepted over a range straddling two or more of the plurality of two-dimensional slice images 220 (a predetermined range in the normal direction of the slice cross section, for example, a range corresponding to the range from the minimum value Zmin to the maximum value Zmax in the Z-axis direction (the normal direction), described later). The 3D annotation 53 is then added over the range straddling the two or more two-dimensional slice images.
  • FIG. 16 is a conceptual diagram illustrating the process of assigning the 3D annotation 53, the process of assigning the 2D annotation 52 based on the 3D annotation 53, and the like.
  • FIG. 16 is a diagram similar to FIG. 14; however, in FIG. 16, a curve different from the annotation line 51 of FIG. 14 is drawn as the annotation line 51. FIG. 17 is a diagram similar to FIGS. 14 and 16; however, in FIG. 17, a curve different from the annotation lines 51 of FIGS. 14 and 16 is drawn as the annotation line 51.
  • In step S13, the 3D annotation 53 is generated based on the annotation line 51 drawn on the projected image 332.
  • a 3D annotation 53 (see the middle part of FIG. 16) corresponding to the annotation line 51 (see the lower part of FIG. 16) drawn in the projected image 332 is added to the three-dimensional surface data 330.
  • the controller 31 assigns the 2D annotation 52 based on the 3D annotation 53 to the two-dimensional slice image 220.
  • a substantially semicircular 3D annotation 53 is attached to the surface of a columnar three-dimensional model (three-dimensional surface data 330).
  • That is, it is assumed that a fracture (crack or the like) has occurred near the intersection of a plane (an approximate plane) containing the substantially semi-arc-shaped 3D annotation 53 and the cylindrical space (the space corresponding to the bone) whose surface is represented by the three-dimensional surface data 330. In the example of FIG. 16, this plane intersects the side surface of the cylindrical space diagonally, and the fracture is assumed to have occurred near this intersection.
  • In step S14, the controller 31 automatically adds the 2D annotation 52 to the two-dimensional position corresponding to the three-dimensional position of the 3D annotation 53 (also referred to as the "corresponding two-dimensional position") in at least one two-dimensional slice image corresponding to that three-dimensional position.
  • the 2D annotation 52 is an annotation having two-dimensional position information in each two-dimensional slice image.
  • Specifically, the controller 31 obtains, from among the plurality of two-dimensional slice images 220, at least one two-dimensional slice image 220 (preferably two or more two-dimensional slice images 220) whose plane has an intersection line (or intersection point) with the 3D annotation 53, which is an elongated planar region.
  • the existence range (maximum value Zmax and minimum value Zmin) of the 3D annotation 53 in the Z axis direction is obtained.
  • at least one two-dimensional slice image 220 from the slice image 220 having the minimum value Zmin to the slice image 220 having the maximum value Zmax is specified.
  • the controller 31 automatically assigns a 2D annotation to a 2D position (corresponding 2D position) corresponding to the 3D position of the 3D annotation 53 in the at least one 2D slice image 220.
  • Specifically, the controller 31 obtains the two-dimensional position of the intersection line (the set of intersection points, that is, the XY position of each intersection point) between each two-dimensional slice image 220 and the 3D annotation 53, which is an elongated planar region.
  • the controller 31 assigns the 2D annotation 52 to the two-dimensional position corresponding to the line of intersection in each two-dimensional slice image 220. Accordingly, the visualized 2D annotation 52 (specifically, the mark indicating the 2D annotation 52) is drawn at the two-dimensional position.
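  • A minimal sketch of this step S14 processing follows, assuming that the 3D annotation is represented by the discrete (x, y, z) points obtained above (rather than a true planar region, which is a simplification) and that slice planes are evenly spaced along the Z axis starting at z0; names and the half-pitch tolerance are illustrative.

```python
import numpy as np

def assign_2d_basic_annotations(annotation_points, n_slices, pitch_mm, z0=0.0):
    """For every slice between Zmin and Zmax, take the annotation points lying
    within half a slice pitch of the slice plane as the intersection with that
    slice; their (X, Y) coordinates become the 2D annotation positions."""
    zs = annotation_points[:, 2]
    z_min, z_max = zs.min(), zs.max()                       # existence range of the 3D annotation
    ann_2d = {}
    for i in range(n_slices):
        z_plane = z0 + i * pitch_mm
        if not (z_min - pitch_mm / 2 <= z_plane <= z_max + pitch_mm / 2):
            continue                                        # slice outside Zmin..Zmax
        near = np.abs(zs - z_plane) <= pitch_mm / 2
        if near.any():
            ann_2d[i] = annotation_points[near][:, :2]      # (X, Y) positions in slice i
    return ann_2d
```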
  • In the example of FIG. 16, the line of intersection with the 3D annotation 53 appears at two places in a certain two-dimensional slice image 220, and the 2D annotation 52 is given at those two places (see the top row of FIG. 16). More specifically, a 2D annotation 52 is added near the lower end and near the right end of the uppermost circular region of FIG. 16, respectively.
  • The circular region (the region with sandy hatching) at the top of FIG. 16 represents the cut surface of the three-dimensional model of the bone (schematically represented as a cylinder) by the plane of the two-dimensional slice image 220, that is, a cross section of the bone.
  • the curve inside the circular region represents a fracture line (a portion where a cracked portion of a bone appears as a linear region in a two-dimensional slice image (cross-sectional image)). Further, the region surrounded by the elliptical broken line in the middle row of FIG. 16 corresponds to the circular region in the uppermost row of FIG.
  • the controller 31 draws the bounding box 55 so as to include the two 2D annotations 52, and determines the rectangular area surrounded by the bounding box 55 as the teacher data indicating the fracture area.
  • In this way, the 2D annotation 52 is added to the two-dimensional positions corresponding to the line of intersection with the 3D annotation 53 in at least one two-dimensional slice image 220. Further, a rectangular area (partial image) surrounding the 2D annotations 52 with a bounding box is determined as teacher data indicating a fracture area.
  • Note that the line of intersection with the 3D annotation 53 may appear in only one place in a certain two-dimensional slice image 220 (slice cross section).
  • the bounding box 55 may be formed so as to surround the 2D annotation 52 at only one place.
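  • As an illustration, setting the bounding box 55 around the 2D annotations of one slice and cropping the enclosed partial image as teacher data can be sketched as follows; the pixel margin is an assumption, not a value from the patent.

```python
import numpy as np

def fracture_region_crop(slice_image, ann_xy, margin=8):
    """Set a bounding box around the 2D annotation positions of one slice and
    return the box and the cropped partial image (the teacher data)."""
    x0, y0 = np.floor(ann_xy.min(axis=0)).astype(int) - margin
    x1, y1 = np.ceil(ann_xy.max(axis=0)).astype(int) + margin
    h, w = slice_image.shape[:2]
    x0, y0 = max(x0, 0), max(y0, 0)                         # clip the box to the image
    x1, y1 = min(x1, w - 1), min(y1, h - 1)
    return (x0, y0, x1, y1), slice_image[y0:y1 + 1, x0:x1 + 1]
```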
  • the teacher data generation process and the like related to one slice image group 210 are executed.
  • Such a process of generating teacher data for one slice image group 210 is executed for a plurality of slice image groups 210. For example, it is executed for a plurality of slice image groups 210 relating to a plurality of subjects. As a result, a very large number of teacher data (for example, tens of thousands to hundreds of thousands) are generated.
  • the controller 31 uses data reflecting annotations (3D annotation 53, etc.) for the three-dimensional surface data as teacher data, executes machine learning, and generates a trained model.
  • FIG. 6 is a flowchart showing the processing of the controller 31 in the learning stage (see FIG. 3). Note that FIG. 6 is also a diagram showing a method (part) of generating a trained model.
  • In step S21, the controller 31 executes machine learning using the teacher data (teacher data with known lesion sites (fracture sites)) generated by the process of FIG. 5. Specifically, data based on the two-dimensional slice images 220 (260) to which the 2D annotation 52 is added (in particular, the image data of the fracture regions surrounded by the bounding boxes 55) is used as teacher data, and machine learning is executed. As the machine learning, deep learning or the like is executed.
  • various parameters of the learning model 410 are adjusted, and the trained model 420 is generated (step S22).
  • More specifically, the learning parameters of the learning model 410 (the learner), such as the weighting coefficients between the plurality of layers (an input layer, one or more intermediate layers, and an output layer), are adjusted, and the trained model 420 is generated.
  • the trained model 420 is a learning model for executing an inference process for identifying a lesion site (fracture site) of an unknown two-dimensional slice image.
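  • The patent fixes neither the network architecture nor the training procedure beyond "deep learning"; the following is a minimal, purely illustrative sketch in which a small CNN is trained as a fracture/no-fracture classifier on the cropped bounding-box regions. All layer sizes and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class FracturePatchNet(nn.Module):
    """Minimal stand-in for the learning model 410 (multi-layer neural network)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)                  # fracture / no fracture

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train(model, loader, epochs=10, lr=1e-3):
    """Adjust the learning parameters (weighting coefficients) to fit the teacher data."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for patches, labels in loader:                # bounding-box crops and their labels
            optimizer.zero_grad()
            loss = loss_fn(model(patches), labels)
            loss.backward()
            optimizer.step()
    return model                                      # corresponds to the trained model 420
```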
  • Generating the trained model 420 means manufacturing (producing) the trained model 420, and the method of generating the trained model means the "method of manufacturing the trained model".
  • the controller 31 uses the trained model 420 generated by machine learning to execute inference processing in machine learning.
  • the controller 31 uses the trained model 420 for each of a plurality of (unknown) two-dimensional slice images 220 (also referred to as 270 (see FIG. 4)) acquired for a subject. Performs inference processing related to fracture detection. Specifically, using the trained model 420 described above, an inference process for identifying a lesion site (fracture site) in each slice image is executed. Hereinafter, the inference process will be described with reference to FIG. 7.
  • the trained model 420 acquires the inference result regarding the input data in step S32. Specifically, the trained model 420 (controller 31) detects the presence / absence and position of a fractured portion in the slice image 270. Such processing is repeatedly executed for the plurality of two-dimensional slice images 270 until it is determined in step S33 that the processing should be completed. Specifically, the image number i is incremented (step S34), and the process is executed for the new two-dimensional slice image Mi (270).
  • When the trained model 420 (controller 31) detects the existence of a fractured portion in a two-dimensional slice image 270, it outputs, in step S35, a two-dimensional slice image 270 (also referred to as 280) that specifies the position of the fractured portion (see FIG. 4).
  • a region showing a fracture site is displayed surrounded by a bounding box 58.
  • The reliability (for example, 80%) of the inference result regarding the fracture site is also output.
  • the inference results for the plurality of two-dimensional slice images 220 are collectively displayed (step S35), but the present invention is not limited to this.
  • the inference results for each two-dimensional slice image 220 may be displayed sequentially.
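  • A minimal sketch of this inference loop (steps S31 to S35 of FIG. 7) follows, under the assumption that the trained model is the patch classifier sketched above and that each slice is already available as a (1, H, W) tensor; how the position of the fractured portion (bounding box 58) is localized depends on the concrete model and is not shown.

```python
import torch

@torch.no_grad()
def infer_slices(trained_model, slice_tensors, threshold=0.5):
    """Run the trained model on each (unknown) slice image in turn and collect
    the slices in which a fracture is detected together with the confidence."""
    trained_model.eval()
    results = []
    for i, img in enumerate(slice_tensors):           # the image number i is incremented
        probs = torch.softmax(trained_model(img.unsqueeze(0)), dim=1)[0]
        confidence = float(probs[1])                   # probability of "fracture present"
        if confidence >= threshold:
            results.append((i, confidence))            # e.g. (slice index, 0.80)
    return results
```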
  • As described above, the three-dimensional surface data 330 is generated based on the plurality of two-dimensional slice images 220 (S12; see FIG. 5), and the operation of assigning the 3D annotation to the three-dimensional surface data 330 is accepted (S13). Therefore, it is possible to reduce the complexity of the annotation addition operation compared with the case where the operating user adds annotations to each of a large number (for example, several hundred) of two-dimensional slice images one by one.
  • machine learning is executed by using the data reflecting the annotation (3D annotation 53, etc.) for the three-dimensional surface data 330 as the teacher data (S21). Therefore, it is possible to reduce the complexity of the annotation operation, create teacher data, and execute machine learning based on the teacher data.
  • the first process (S13) of adding the 3D annotation 53 to the three-dimensional surface data 330 according to the operation of the operating user and the second process (S14) following the first process are executed.
  • the second process is a process of automatically adding the 2D annotation in at least one slice image of the plurality of two-dimensional slice images 220 based on the 3D annotation 53.
  • the second process automatically places the 2D annotation 52 at the 2D position (corresponding 2D position) corresponding to the 3D position in at least one 2D slice image corresponding to the 3D position of the 3D annotation 53. Includes processing to be given.
  • machine learning is executed by using the data based on at least one two-dimensional slice image to which the 2D annotation 52 is attached as the teacher data (S21).
  • Therefore, it is possible to generate teacher data while greatly reducing the complexity of the annotation operation compared with the case where teacher data is created by individually annotating the two-dimensional slice images 220 (particularly a large number of two-dimensional slice images 220).
  • When the above-mentioned processing is executed for a plurality of slice image groups 210, a very large amount of teacher data (for example, thousands to hundreds of thousands of items) can be generated while significantly reducing the complexity of the annotation operation.
  • When annotations are added individually to each two-dimensional slice image, the annotation may be given at only a rough position, and the annotation in each two-dimensional slice image 220 may be added without confirming its positional relationship with the annotations in the other two-dimensional slice images 220. As a result, variations in the annotation position (errors in the user operation) between the plurality of two-dimensional slice images 220 are likely to occur.
  • In contrast, in this embodiment, the 3D annotation 53 is added based on the annotation line 51 drawn by the operating user while confirming the three-dimensional position using the three-dimensional surface data 330, and the 2D annotation 52 is added based on the 3D annotation 53. More specifically, the 3D annotation 53 and the 2D annotation 52 are added based on the continuous annotation line 51 spanning the plurality of two-dimensional slice images 220. Therefore, the annotation position error (variation in the assigned position due to user operation) between the plurality of two-dimensional slice images 220 can be reduced, and the 2D annotation 52 can be added to each two-dimensional slice image 220 more accurately.
  • In the example of FIG. 16, in a certain two-dimensional slice image 220, the line of intersection between the two-dimensional slice image 220 and the 3D annotation 53 is obtained at two places, and a 2D annotation 52 is given at those two places. Then, a bounding box 55 including the two 2D annotations 52 is drawn, and the rectangular region surrounded by the bounding box 55 is determined as teacher data indicating a fracture region (see the top rows of FIGS. 16 and 17).
  • the line of intersection with the 3D annotation 53 may appear only in one place (see FIG. 18 and the like).
  • the bounding box 55 is formed so as to surround the 2D annotation 52 at only one place.
  • the partial region including the fracture site (fracture line or the like) protrudes from the bounding box 55. That is, the bounding box 55 cannot always accurately surround the partial region.
  • In view of this, the second embodiment provides a technique for complementarily adding the 2D annotation 52 in a cross section in which the line of intersection with the 3D annotation 53 appears in only one place (a cross section to which only a single 2D annotation 52 is assigned).
  • Specifically, in each two-dimensional slice image, the 2D annotation 52 is also complementarily added to a two-dimensional position determined to be the "corresponding two-dimensional position" in any of a predetermined number of two-dimensional slice images in the vicinity of that slice image. In other words, the 2D annotation 52 is complementarily added to any two-dimensional position to which the 2D basic annotation 52A (described below) has been given in any of a predetermined number of two-dimensional slice images in the vicinity of that slice image.
  • the 2D annotation 52 given to the position of the line of intersection in the two-dimensional slice image having the line of intersection with the 3D annotation 53 is also referred to as "2D basic annotation" (52A).
  • the 2D basic annotation 52A is given to the 2D position (corresponding 2D position) corresponding to the 3D position in at least one 2D slice image corresponding to the 3D position of the 3D annotation 53.
  • the 2D annotation 52 given in the first embodiment corresponds to the "2D basic annotation".
  • On the other hand, the 2D annotation 52 that is complementarily given, in each two-dimensional slice image, to the two-dimensional position determined to be the "corresponding two-dimensional position" in any of the predetermined number of two-dimensional slice images in the vicinity of that slice image is also referred to as a "2D complementary annotation" (52B).
  • the 2D complement annotation 52B (see FIG. 25) is a 2D annotation 52 that is complementarily added separately from the 2D basic annotation 52A in the second embodiment.
  • The 2D complementary annotation 52B is also expressed as a 2D annotation that complements the 2D basic annotation 52A.
  • The 2D annotation 52 including both the 2D basic annotation 52A and the 2D complementary annotation 52B is also referred to as a "2D extended annotation" (52C), because it is a conceptual extension of the 2D basic annotation 52A.
  • the 2D extended annotation 52C is also expressed as a 2D annotation given not only to the two-dimensional position (corresponding two-dimensional position) corresponding to the three-dimensional position of the 3D annotation 53 but also to other than the corresponding two-dimensional position.
  • the second embodiment is a modification of the first embodiment.
  • the differences from the first embodiment will be mainly described.
  • FIG. 22 is a flowchart showing the process of step S14 (also referred to as S14B) in the second embodiment.
  • In step S51 of FIG. 22, the same processing as the 2D annotation (2D basic annotation 52A) assignment process in step S14 of the first embodiment is performed. In step S52, the process of assigning the 2D complementary annotation 52B is performed. In step S53, a bounding box is set based on the result of step S52, and teacher data is generated.
  • FIG. 23 is a schematic diagram showing a series of two-dimensional slice images 220.
  • The two-dimensional slice images 220 (also referred to as cross-sectional images) are labeled L5 to L11.
  • The circular region (the region with sandy hatching) in FIG. 23 represents the cross section of the bone, and the curve inside the circular region represents the fracture line (the portion where the cracked portion of the bone appears as a linear region in the cross-sectional image).
  • In some of the cross-sectional images, the fracture line penetrates from one part (the lower end part) of the circular cross section to the other part (the right end part).
  • In other cross-sectional images, the fracture line does not penetrate from one part of the circular cross section to the other part, and stops partway in the cross-sectional image. In some of these, the fracture line extending from the lower end does not reach the right end and stops partway; in others, the fracture line extending from the right end does not reach the lower end and stops partway.
  • such a plurality of 2D slice images 220 may be obtained.
  • FIG. 18 is a schematic diagram showing the three-dimensional surface data 330 (three-dimensional model) generated based on such a two-dimensional slice image 220.
  • the three-dimensional surface data 330 is generated in the same manner as in the first embodiment (see steps S11 and S12 (FIG. 5)).
  • FIG. 18 is a diagram similar to FIG. In the middle perspective view of FIG. 18, a slice cross section (two-dimensional slice image 220) at a certain Z position is also shown. The elliptical region drawn by the broken line in the middle stage shows the intersection of the cylindrical three-dimensional model and the slice cross section. Further, in the lower part of FIG. 18, a projected image 332 obtained by projecting the three-dimensional surface data 330 (three-dimensional model) onto the projection surface 70 is shown. Further, in the upper part (upper right part) of FIG. 18, a two-dimensional slice image 220 to which the 2D annotation 52 (specifically, the 2D basic annotation 52A) is attached is schematically shown.
  • In this case, not one continuous annotation line 51 making a half or full circuit of the bone (schematically represented here by a cylinder), but two annotation lines 51 divided in the middle are drawn.
  • Where the fracture line does not reach a certain surface portion of the bone, the portion indicating the fracture does not appear on that surface portion (the surface of the three-dimensional surface data 330, that is, the surface of the three-dimensional model of the bone), and the annotation line 51 is not drawn there. As a result, a situation may occur in which the annotation line 51 is divided.
  • One of the two annotation lines 51 is drawn in the projected image 332 of FIG. 18 from the vicinity of the center to the lower end side toward the left.
  • This annotation line 51 is drawn in the projected image 332 along the fracture portion appearing on the surface near the front side (the lower end side in FIG. 23) of the three-dimensional surface data 330 in the middle of FIG. 18, and the 3D annotation 53 corresponding to this annotation line 51 is added along that fracture portion.
  • The other annotation line 51 is drawn substantially in the vertical direction near the right end of the center in the projected image 332 of FIG. 18.
  • This annotation line 51 is drawn in the projected image 332 along the fracture portion appearing on the surface near the right side (corresponding to the right end side of FIG. 23) of the three-dimensional surface data 330 in the middle of FIG. 18, and the 3D annotation 53 corresponding to this annotation line 51 is added along that fracture portion.
  • FIGS. 18 to 21 show slice cross sections (two-dimensional slice images 220) in which the positions in the reference axis direction (here, the Z axis direction) are different from each other. From FIG. 18 to FIG. 21, the position of the two-dimensional slice image 220 in the Z-axis direction gradually increases.
  • Each of the two-dimensional slice images 220 of FIGS. 18 to 21 corresponds to the cross-sectional images L7 to L10 of FIG. 23 and the like, respectively.
  • FIG. 24 is a diagram showing how the 2D basic annotation 52A is added to each of the series of two-dimensional slice images of FIG. 23.
  • In some of the two-dimensional slice images 220, two lines of intersection with the 3D annotation 53 appear, and two 2D annotations 52 are added at different positions in the two-dimensional slice image 220. In some of the other two-dimensional slice images 220, the line of intersection with the 3D annotation 53 appears in only one place, and a single 2D annotation 52 (2D basic annotation 52A) is added in the two-dimensional slice image 220.
  • the 2D annotation 52 given by the same processing as in the first embodiment is also referred to as the 2D basic annotation 52A, and is distinguished from the 2D annotation 52B complementarily given in the second embodiment.
  • The 2D basic annotation 52A assignment process is executed (see also FIG. 24). Specifically, in each two-dimensional slice image 220 (the i-th two-dimensional slice image Li), the 2D basic annotation 52A is added to the portion of the line of intersection with the 3D annotation 53.
  • FIG. 24 is a diagram showing a series of two-dimensional slice images 220 to which the 2D basic annotation 52A is attached.
  • If the bounding box 55 were formed so as to surround only the 2D annotation 52 (2D basic annotation 52A) given at that one place, the partial region including the fracture site could protrude from the bounding box 55, and the bounding box 55 could not always accurately surround the partial region.
  • In the second embodiment, after the process of imparting the 2D basic annotation 52A (step S51) is executed in the same manner as in the first embodiment, the process of assigning the 2D complementary annotation 52B (step S52) is further executed.
  • In step S51, the 2D basic annotation 52A is added to the line of intersection with the 3D annotation 53 in the i-th two-dimensional slice image (cross-sectional image) Li.
  • In step S52, the 2D complementary annotation 52B is added (if necessary) in the i-th two-dimensional slice image Li.
  • Specifically, when the 2D basic annotation 52A has been given at some two-dimensional position in any one of a predetermined number of two-dimensional slice images in the vicinity of the i-th two-dimensional slice image Li, the 2D complementary annotation 52B is added at that two-dimensional position in the image Li.
  • The predetermined number is, for example, several to several tens.
  • For example, in the two-dimensional slice image L5, when the 2D basic annotation 52A has been given at some two-dimensional position in any one of the six two-dimensional slice images L2 to L4 and L6 to L8 in its vicinity (for example, in the two-dimensional slice image L8), a 2D complementary annotation 52B is added at that two-dimensional position.
  • In this example, the predetermined number is three before and three after (six in total).
  • FIG. 25 is a diagram showing a plurality of two-dimensional slice images before and after the process of step S52. Similar to FIG. 24, the upper part of FIG. 25 shows each two-dimensional slice image 220 (cross-sectional images L5 to L11) to which only the 2D basic annotation 52A has been added (in step S51). The lower part of FIG. 25 shows the two-dimensional slice images 220 to which the 2D complementary annotation 52B has further been added; in other words, the 2D extended annotation 52C (the 2D annotation 52 including both the 2D basic annotation 52A and the 2D complementary annotation 52B) is attached to each two-dimensional slice image 220 in the lower part of FIG. 25.
  • In step S52, the controller 31 complementarily assigns the 2D complementary annotation 52B to each of the two-dimensional slice images 220 (such as the two-dimensional slice images to which the 2D basic annotation 52A has been attached) (see the bottom of FIG. 25).
  • Specifically, in each two-dimensional slice image 220, the 2D complementary annotation 52B is (automatically) added at each two-dimensional position to which the 2D basic annotation 52A has been attached in any one of a predetermined number of slice images in the vicinity of that two-dimensional slice image 220.
  • For example, in the cross-sectional image L5, the 2D complementary annotation 52B is added near the right end, in addition to the 2D basic annotation 52A near the lower end.
  • A total of six cross-sectional images L2 to L4 and L6 to L8 in the vicinity of the cross-sectional image L5 have already been given the 2D basic annotation 52A (in step S51) (see the upper part of FIG. 25).
  • In step S52, all of the 2D basic annotations 52A given in these cross-sectional images L2 to L4 and L6 to L8 are given, as 2D complementary annotations 52B, to the corresponding positions in the cross-sectional image L5 (that is, the 2D complementary annotation 52B is further added at each such position).
  • As a result, the 2D complementary annotation 52B is complementarily added at a two-dimensional position (near the right end) different from the position (near the lower end) of the 2D basic annotation 52A of the cross-sectional image L5 (see the bottom row of FIG. 25).
  • However, a 2D complementary annotation 52B that would be given at substantially the same position as a 2D basic annotation 52A is omitted as appropriate.
  • For example, the 2D complementary annotation 52B that would be given near the lower end of the cross-sectional image L5 (corresponding to the 2D basic annotation 52A given near the lower end of the nearby cross-sectional image L6) is omitted.
  • Similarly, the 2D complementary annotation 52B is complementarily attached at the position near the right end of each of the cross-sectional images L6 and L7 (the position where the 2D basic annotation 52A is given in the nearby cross-sectional image L8).
  • Likewise, in the cross-sectional image L10, the 2D complementary annotation 52B is added (near the lower end, etc.).
  • The 2D basic annotation 52A has already been added (in step S51) to each of the three cross-sectional images L7 to L9 and the three cross-sectional images L11 to L13 in the vicinity of the cross-sectional image L10 (see the upper part of FIG. 25).
  • In step S52, all of the 2D basic annotations 52A given in these cross-sectional images L7 to L9 and L11 to L13 are given, as 2D complementary annotations 52B, to the corresponding positions in the cross-sectional image L10. For example, the 2D complementary annotation 52B is added to the vicinity of the lower end of the cross-sectional image L10 (see the lower part of FIG. 25).
  • Similarly, the 2D complementary annotation 52B is added at the position near the lower end of the cross-sectional image L11 (the position where the 2D basic annotation 52A is attached in the cross-sectional images L8 and L9).
  • In this manner, in each cross-sectional image Li, the 2D complementary annotation 52B is added at the positions (corresponding positions) to which the 2D basic annotation 52A is attached in a predetermined number of cross-sectional images (two-dimensional slice images 220) in its vicinity, and the 2D annotation 52 is thereby complemented.
  • In step S51, the 2D basic annotation 52A is added to the line of intersection with the 3D annotation 53 in the i-th two-dimensional slice image Li.
  • In step S52, the 2D complementary annotation 52B is given, in the i-th two-dimensional slice image Li, to each two-dimensional position determined to be a "corresponding two-dimensional position" (an intersection with the 3D annotation 53) in any one of a predetermined number of two-dimensional slice images (L(i-N) to L(i-1) and L(i+1) to L(i+N)) in the vicinity of Li.
  • In other words, the 2D complementary annotation 52B is complementarily added to one slice image by using the 2D basic annotations 52A attached to a predetermined number of slice images in the vicinity of that slice image.
  • As a result, the 2D annotation 52 (2D extended annotation 52C) is added as follows: in each two-dimensional slice image Li, the 2D annotation 52 is given to all two-dimensional positions determined to be two-dimensional positions corresponding to the three-dimensional position of the 3D annotation (corresponding two-dimensional positions) in either the two-dimensional slice image Li itself or any one of a predetermined number of two-dimensional slice images in its vicinity.
  • In other words, in the i-th two-dimensional slice image Li, the 2D extended annotation 52C is added at all two-dimensional positions determined to be "corresponding two-dimensional positions" (intersections with the 3D annotation 53) in any of the two-dimensional slice images of a predetermined range including Li (for example, L(i-N) to L(i-1), Li, and L(i+1) to L(i+N)).
  • That is, the 2D extended annotation 52C is added to the i-th two-dimensional slice image Li as the logical union (OR set) of the 2D basic annotations 52A in the total of (2N + 1) two-dimensional slice images of the neighborhood range including Li.
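Expressed as code, the neighborhood union just described can be sketched as follows. The per-slice boolean masks, the window parameters and the function names are assumptions made for illustration, not part of the patent.

```python
import numpy as np

def extended_annotations(basic_masks, n_before=3, n_after=3):
    """2D extended annotations (52C) for every slice.

    basic_masks: list of boolean 52A masks, one per two-dimensional slice
        image 220, ordered along the reference (Z) axis.
    The extended annotation of slice i is the logical union of the basic
    annotations of the slices in the window [i - n_before, i + n_after];
    pixels of that union absent from the slice's own 52A correspond to the
    complementary annotation 52B.
    """
    extended = []
    for i, own in enumerate(basic_masks):
        lo, hi = max(0, i - n_before), min(len(basic_masks), i + n_after + 1)
        union = np.zeros_like(own)
        for neighbour in basic_masks[lo:hi]:  # the window includes slice i
            union |= neighbour
        extended.append(union)
    return extended
```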
  • Thereafter, a bounding box 55 corresponding to both the 2D basic annotation 52A and the 2D complementary annotation 52B is formed; specifically, the bounding box 55 is formed as a circumscribed rectangle that surrounds both of them.
  • The partial image within the bounding box 55 is specified as a partial image including the fracture site. In other words, in at least some of the two-dimensional slice images to which two or more 2D annotations are added among the plurality of two-dimensional slice images, the partial image within the bounding box corresponding to those two or more 2D annotations is specified as a partial image including the fracture site.
  • In this way, each bounding box 55 is formed so as to appropriately surround the fractured portion (fracture line, etc.); in other words, an appropriate partial image that includes the fractured portion without letting it protrude is extracted as teacher data.
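A sketch of the bounding-box step might look like the following. The margin parameter and the handling of slices without annotations are assumptions, since the patent only states that the box is a circumscribed rectangle of the annotated positions.

```python
import numpy as np

def fracture_patch(slice_image, annotation_mask, margin=0):
    """Partial image inside the bounding box 55 of one slice.

    annotation_mask combines the 2D basic and 2D complementary annotations
    of the slice; the box is their circumscribed rectangle, optionally
    padded by `margin` pixels (padding is not prescribed by the patent).
    """
    ys, xs = np.nonzero(annotation_mask)
    if ys.size == 0:                 # slice carries no annotation
        return None
    top = max(ys.min() - margin, 0)
    bottom = min(ys.max() + margin + 1, slice_image.shape[0])
    left = max(xs.min() - margin, 0)
    right = min(xs.max() + margin + 1, slice_image.shape[1])
    return slice_image[top:bottom, left:right]
```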
  • Machine learning is executed using these partial images as teacher data (see step S21 and the like in FIG. 6).
  • This machine learning produces a trained model 420 (see also FIG. 3).
  • Inference processing is then executed using the trained model 420 generated by the machine learning (see FIGS. 4 and 7).
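As a rough illustration only, the extracted patches could feed a small classifier such as the one below; the patch size, network architecture and training hyperparameters are arbitrary choices, since the patent does not specify the learning model.

```python
import torch
import torch.nn as nn

# Assumed: teacher patches resized to 64x64, labelled 1 (fracture) or 0 (background).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(patches, labels):
    """One optimisation step on a batch of (B, 1, 64, 64) teacher patches."""
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```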
  • As described above, the 2D annotation 52 is given, in at least those slice images having an intersection with the 3D annotation 53 among the plurality of two-dimensional slice images 220, to the two-dimensional position corresponding to the three-dimensional position of the 3D annotation (the corresponding two-dimensional position). Further, in each two-dimensional slice image, the 2D annotation 52 is also given at each two-dimensional position determined to be a corresponding two-dimensional position in any one of a predetermined number of two-dimensional slice images in the vicinity of that two-dimensional slice image. That is, both the 2D basic annotation 52A and the 2D complementary annotation 52B are added as the 2D annotation 52; in other words, the 2D extended annotation 52C is added as the 2D annotation 52.
  • Since the 2D complementary annotation 52B is attached in addition to the 2D basic annotation 52A, a more appropriate 2D annotation 52 can be given than when only the 2D basic annotation 52A is given, and more appropriate teacher data can therefore be generated. Furthermore, by performing machine learning using such appropriate teacher data, it is possible to optimize the parameters in the learning model, in other words, to improve the accuracy of inference processing using the trained model.
  • In the above, the 3D annotation 53 has a predetermined width (it is an elongated planar region) and the 2D annotation 52 is given at the line of intersection between the two-dimensional slice image 220 and the 3D annotation 53, but the present invention is not limited to this. Specifically, the 3D annotation 53 may be linear, and the 2D annotation 52 may be added at the point of intersection between each two-dimensional slice image 220 and the 3D annotation 53. In other words, the two-dimensional position corresponding to the three-dimensional position of the 3D annotation 53 may be the line of intersection between the two-dimensional slice image 220 and the 3D annotation, the point of intersection between the two-dimensional slice image 220 and the 3D annotation, or the like.
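For the linear variant, the intersection points could be computed as follows; representing the linear 3D annotation as a polyline of 3D vertices is an assumption made for this sketch.

```python
import numpy as np

def line_annotation_slice_points(polyline, z):
    """Intersection points of a linear 3D annotation with the plane of slice z.

    polyline: (K, 3) array of (x, y, z) vertices of the linear 3D annotation.
    Returns the (x, y) coordinates where consecutive segments cross the plane.
    """
    points = []
    for a, b in zip(polyline[:-1], polyline[1:]):
        if (a[2] - z) * (b[2] - z) <= 0 and a[2] != b[2]:
            t = (z - a[2]) / (b[2] - a[2])
            points.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return np.array(points)
```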
  • In the above, the 3D annotation 53 is added based on the drawing operation of the annotation line 51 (an elongated planar region having a predetermined width), but the present invention is not limited to this.
  • For example, the 3D annotation 53 may be added based on a drawing operation of a closed curve (circle, ellipse, polygon, etc.) in the projected image 332. Specifically, the polygons within the closed region surrounded by the closed curve may be designated as indicating the fracture site, and the three-dimensional position of each of these polygons may be specified as the three-dimensional position of the 3D annotation 53.
  • Then, in accordance with the drawing operation of such a closed region, the 2D annotation 52 may be added at the line of intersection between the 3D annotation 53 (a region having a planar spread) and each two-dimensional slice image 220.
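For the closed-curve variant, the selection of surface polygons could be sketched as a point-in-polygon test on the projected image, for example as below; representing each polygon by the projected coordinates of its centroid is an assumption made for brevity.

```python
import numpy as np
from matplotlib.path import Path

def polygons_inside_closed_curve(projected_centroids, curve_vertices):
    """Flag each surface polygon as inside or outside the drawn closed curve.

    projected_centroids: (P, 2) coordinates of each polygon centre projected
        onto the projection plane (assumed to be available from rendering).
    curve_vertices:      (K, 2) vertices of the circle/ellipse/polygon drawn
        on the projected image 332.
    The 3D positions of the flagged polygons would then give the 3D position
    of the 3D annotation 53.
    """
    return Path(np.asarray(curve_vertices)).contains_points(
        np.asarray(projected_centroids))
```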
  • In the second embodiment, the 2D basic annotation and the 2D complementary annotation are added sequentially (steps S51 and S52): the process of assigning the 2D basic annotation 52A to the plurality of two-dimensional slice images 220 (step S51) is performed first (in advance), and the process of complementarily assigning the 2D complementary annotation 52B to each two-dimensional slice image (step S52) is performed thereafter.
  • However, the present invention is not limited to this, and both may be assigned simultaneously and in parallel.
  • For example, a plurality of 2D extended annotations 52C may be added at once (collectively) in each two-dimensional slice image Li without performing the process of step S51 beforehand.
  • Specifically, the controller 31 determines whether a "corresponding two-dimensional position" (an intersection with the 3D annotation 53) exists in any of the two-dimensional slice images of a predetermined range including the i-th two-dimensional slice image Li (for example, L(i-N) to L(i-1), Li, and L(i+1) to L(i+N)). The controller 31 then adds the 2D extended annotation 52C, in the i-th two-dimensional slice image Li, at all positions determined to be "corresponding two-dimensional positions" in any of the two-dimensional slice images of that predetermined range (neighborhood range). The controller 31 repeatedly executes this processing for all i (all two-dimensional slice images 220).
  • In this manner, the 2D annotation 52 (2D extended annotation 52C) may be collectively attached; that is, the 2D extended annotation 52C may be added by a method different from the one in which the 2D basic annotation 52A is first given to the plurality of two-dimensional slice images 220 and the 2D complementary annotation 52B is then given.
  • In either case, the 2D extended annotation 52C including the 2D basic annotation 52A and the 2D complementary annotation 52B is attached.
  • In the above, the logical union (the logical union of the corresponding two-dimensional positions) is obtained over a range of two-dimensional slice images having the same number (N each) of images before and after each two-dimensional slice image Li. However, the present invention is not limited to this; the logical union may be obtained over a range (a range of a plurality of two-dimensional slice images) in which the numbers of preceding and following images differ from each other.
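A single-pass variant with an asymmetric window could, under the same assumed voxel representation of the 3D annotation 53, be sketched as follows; the window sizes are purely illustrative.

```python
import numpy as np

def extended_annotation_single_pass(annotation_voxels, slice_index,
                                    slice_shape, n_before=2, n_after=5):
    """Build the 2D extended annotation (52C) of one slice in one pass.

    Every voxel of the 3D annotation 53 whose Z coordinate falls inside the
    (possibly asymmetric) window around `slice_index` is projected straight
    onto the slice, without first materialising per-slice basic annotations.
    """
    mask = np.zeros(slice_shape, dtype=bool)
    z = annotation_voxels[:, 0]
    in_window = (z >= slice_index - n_before) & (z <= slice_index + n_after)
    picked = annotation_voxels[in_window]
    mask[picked[:, 1], picked[:, 2]] = True
    return mask
```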
  • Image processing system
  • 20 Slice image generator
  • 30 Image processing device
  • 51 Annotation line
  • 52 2D annotation (52A 2D basic annotation; 52B 2D complementary annotation; 52C 2D extended annotation)
  • 53 3D annotation
  • 55 Bounding box
  • 70 Projection surface
  • 220 Two-dimensional slice image
  • 330 Three-dimensional surface data
  • 332 Projected image
  • Learning model
  • 420 Trained model


Abstract

The invention concerns a technology capable of reducing the burden of an annotation assignment operation. This image processing device generates three-dimensional surface data 330 on the basis of a plurality of two-dimensional slice images 220, and receives an operation of assigning a 3D annotation (an annotation having three-dimensional position information) to the three-dimensional surface data 330. The image processing device executes machine learning using, as teacher data, data in which the annotation for the three-dimensional surface data 330 has been reflected. For example, the image processing device executes a first process of assigning a 3D annotation 53 to the three-dimensional surface data in accordance with an operation by an operating user, and executes a second process of automatically assigning a 2D annotation in the two-dimensional slice image 220 on the basis of the 3D annotation 53. Data based on the two-dimensional slice image 220 to which the 2D annotation has been assigned is then used as teacher data to execute machine learning.
PCT/JP2021/031188 2020-08-31 2021-08-25 Dispositif de traitement d'image, système de traitement d'image, procédé de fourniture d'annotation, procédé de production de modèle appris, modèle appris et programme WO2022045202A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020145345A JP2022040570A (ja) 2020-08-31 2020-08-31 画像処理装置、画像処理システム、アノテーション付与方法、学習済みモデルの製造方法、学習済みモデル、およびプログラム
JP2020-145345 2020-08-31

Publications (1)

Publication Number Publication Date
WO2022045202A1 true WO2022045202A1 (fr) 2022-03-03

Family

ID=80355360

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/031188 WO2022045202A1 (fr) 2020-08-31 2021-08-25 Dispositif de traitement d'image, système de traitement d'image, procédé de fourniture d'annotation, procédé de production de modèle appris, modèle appris et programme

Country Status (2)

Country Link
JP (1) JP2022040570A (fr)
WO (1) WO2022045202A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008515595A (ja) * 2004-10-12 2008-05-15 シーメンス コーポレイト リサーチ インコーポレイテツド 3次元画像ボリューム内のポリープの検出方法
WO2019106061A1 (fr) * 2017-12-01 2019-06-06 Ucb Biopharma Sprl Procédé et système d'analyse d'image médicale tridimensionnelle pour l'identification de fractures vertébrales
JP2019162339A (ja) * 2018-03-20 2019-09-26 ソニー株式会社 手術支援システムおよび表示方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAMAMOTO NAOTO; RAHMAN RASHEDUR; YAGI NAOMI; HAYASHI KEIGO; MARUO AKIHIRO; MURATSU HIROTSUGU; KOBASHI SYOJI: "An automated fracture detection from pelvic CT images with 3-D convolutional neural networks", 2020 INTERNATIONAL SYMPOSIUM ON COMMUNITY-CENTRIC SYSTEMS (CCS), IEEE, 23 September 2020 (2020-09-23), pages 1 - 6, XP033844449, DOI: 10.1109/CcS49175.2020.9231453 *

Also Published As

Publication number Publication date
JP2022040570A (ja) 2022-03-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21861625

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21861625

Country of ref document: EP

Kind code of ref document: A1