WO2021149530A1 - Information processing system, information processing method, and program


Info

Publication number
WO2021149530A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
shape
tooth
restoration
acquisition unit
Application number
PCT/JP2021/000625
Other languages
French (fr)
Japanese (ja)
Inventor
Kyohei Nitta
Daisuke Sato
Enrico Rinaldi
Original Assignee
Arithmer Inc.
Application filed by Arithmer Inc.
Priority to JP2021573072A (patent JP7390669B2)
Publication of WO2021149530A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00Dental auxiliary appliances
    • A61C19/04Measuring instruments specially adapted for dentistry

Definitions

  • the present invention relates to an information processing system, an information processing method, and a program that output information for producing a dental restoration used for one or more teeth to be restored.
  • Dental restorations such as false teeth and crowns used for patients in the dental field have conventionally been created manually by dental technicians for each patient.
  • CAD/CAM technology has also come to be used in the dental field. That is, dental restorations are produced by a 3D printer, a milling machine, or the like using data modeled by a dental technician with CAD (see, for example, Patent Document 1).
  • A technique for segmenting the point cloud obtained as the above-mentioned tooth scan data into individual teeth has also been published (see, for example, Non-Patent Document 1).
  • However, the conventional work of producing dental restorations is difficult and time-consuming. The shape of each dental restoration must be adjusted to the individual patient, whether it is produced manually by a worker such as a dental technician or modeled using CAD. The worker is therefore required to have advanced skills and knowledge, and the work itself is laborious.
  • The information processing system of the first invention is an information processing system that outputs information for producing a dental restoration used for one or more teeth of interest to be restored. It comprises: a learning device storage unit that stores a learning device for acquiring restoration information representing the shape of a dental restoration corresponding to some of two or more teeth, the learning device being obtained by using dentition shape information including the shapes of two or more teeth close to each other; a first acquisition unit that acquires target shape information including the shape of one or more adjacent teeth close to the tooth of interest, and target identification information for identifying the teeth whose shapes are included in the target shape information; and a second acquisition unit that uses the learning device to acquire, from the target identification information and the target shape information acquired by the first acquisition unit, restoration information representing the shape of the dental restoration corresponding to the tooth of interest.
  • The information processing system of the second invention is, with respect to the first invention, an information processing system that further includes a reference shape information storage unit for storing reference shape information indicating a reference shape prepared in advance corresponding to the tooth of interest. The restoration information is information corresponding to the difference in shape between the reference shape and the shape of the dental restoration corresponding to the tooth of interest, and the second acquisition unit acquires 3D data representing the shape of the dental restoration by using the restoration information and the reference shape information.
  • In the information processing system of the third invention, with respect to the first or second invention, the learning device is information obtained by machine learning performed using two or more pieces of learning target information. Each piece of learning target information includes dentition shape information, dentition identification information that identifies each of the two or more teeth included in the dentition shape information, and restoration information representing the shape of the dental restoration applied to some of the two or more teeth.
  • In the information processing system of the fourth invention, the learning device storage unit stores, for each tooth of interest, a learning device obtained by machine learning using two or more pieces of learning target information, each including restoration information for expressing the shape of the dental restoration corresponding to one or more teeth of interest, dentition identification information, and dentition shape information including the shapes of two or more teeth close to the tooth of interest. The second acquisition unit uses the learning device corresponding to the target identification information acquired by the first acquisition unit, and acquires the restoration information from the target identification information and the target shape information acquired by the first acquisition unit by using a machine learning method.
  • In the information processing system of the fifth invention, the second acquisition unit generates first vector information representing a multidimensional vector based on the target identification information and the target shape information acquired by the first acquisition unit, generates second vector information representing a feature vector of lower dimension than the vector represented by the first vector information, and acquires the restoration information using the generated second vector information. With this configuration, the restoration information can be acquired at higher speed.
  • In a further aspect, the learning unit adjusts the parameters of a cost function including a penalty term based on an evaluation of the shape that includes an evaluation of the interference between the dental restoration and the surrounding teeth. Furthermore, the learning unit may perform this evaluation based on the target shape information acquired by the first acquisition unit and the restoration information or 3D data acquired by the second acquisition unit, and change the parameter of the penalty term based on the evaluation result.
  • In a further aspect, the dentition shape information is information representing at least a part of the shape in the oral cavity including the shape of the tooth corresponding to the dental restoration and the shapes of all the teeth close to that tooth, and the target shape information is information representing the shape of at least a part of the oral cavity including the shape of the tooth of interest and the shapes of the adjacent teeth close to the tooth of interest.
  • In a further aspect, the target shape information is information indicating a point cloud representing the shape of at least a part of the oral cavity, and the first acquisition unit identifies, within the point cloud indicated by the target shape information, the region containing the points corresponding to each tooth, and acquires target identification information in which a tooth identifier identifying the tooth is associated with the identified region for each tooth.
  • In a further aspect, the first acquisition unit estimates the region containing the points corresponding to each tooth in the point cloud indicated by the target shape information based on the relationship between each point and its surrounding points, displays the points included in the estimated region and the points not included in it on a display in different display modes, accepts information input by the user in response to that display, and identifies the region containing the points corresponding to each tooth based on the information input by the user.
  • In a further aspect, the first acquisition unit displays the points included in each identified region on a display, acquires labeling information input by the user for the points in each region, and acquires the target identification information based on the information input by the user. This allows the user to easily associate a tooth identifier identifying each tooth with the region identified for that tooth.
  • The information processing system of the twelfth invention is an information processing system that outputs information for producing a dental restoration used for one or more teeth of interest to be restored. It comprises: a learner storage unit that stores a learner adjusted using a plurality of pieces of vector information obtained from information indicating the shape of each of a plurality of teeth close to an arbitrarily selected tooth, and output information obtained from information indicating the shape of the selected tooth; and a restoration information acquisition unit that acquires output information corresponding to the tooth of interest based on a plurality of pieces of vector information obtained from information indicating the shapes of the plurality of teeth close to the tooth of interest, and acquires, from the acquired output information, restoration information for expressing the shape of the dental restoration corresponding to the tooth of interest. With this configuration, restoration information can be easily acquired.
  • A dental restoration is a repair or prosthesis placed in the oral cavity in dental treatment, for example a crown (including a post crown and an artificial tooth attached to a dental implant), an inlay, a bridge, and the like. It may also be interpreted as including other prostheses such as dentures.
  • a tooth of interest is a tooth that is the target of treatment using a dental restoration.
  • The tooth of interest need not be an existing tooth of the patient being treated; it may be a tooth that is completely missing from the dentition. Further, the tooth of interest may be an artificially shaped tooth or a part of a tooth.
  • The tooth of interest can be selected arbitrarily according to the purpose for which the information processing system is used, but the selection is not limited to this.
  • the 3D data is information representing a three-dimensional shape, and is composed of information representing, for example, a point cloud (which may be meshed), a line, a surface, a voxel, and the like.
  • the 3D data may be information in a format used in a specific CAD, or information in a general-purpose intermediate file format that can be used in various CADs.
  • the identifier is a character or code that uniquely indicates the item.
  • the code is, for example, alphanumeric characters or other symbols, but is not limited to this.
  • the identifier is, for example, a code string that does not have a specific meaning by itself, but any kind of information can be used as long as it can identify the corresponding item. That is, the identifier may be the name of what it indicates, or it may be a combination of codes so as to uniquely correspond to each other.
  • the tooth identifier uniquely identifies the patient's tooth, for example.
  • For example, a tooth number according to the so-called universal numbering system of dental notation can be used as the tooth identifier, but the present invention is not limited to this.
  • Acquisition may include acquiring information input by the user or the like, or acquiring information stored in the own device or another device (the information may be stored in advance, or may be information generated by information processing performed in that device). Acquiring information stored in another device may include acquiring it via an API or the like, or acquiring the content of a document file (including the content of a web page) provided by the other device.
  • Outputting information is a concept that includes displaying on a display, projecting with a projector, printing with a printer, outputting sound, transmitting to an external device, storing in a recording medium, and delivering processing results to another processing device or another program. Specifically, it includes, for example, enabling information to be displayed on a web page, transmitting it as an e-mail or the like, and outputting information for printing.
  • Receiving information is a concept that includes receiving information input from input devices such as a keyboard, mouse, or touch panel, receiving information transmitted from another device via a wired or wireless communication line, and accepting information read from a recording medium such as an optical disk, magnetic disk, or semiconductor memory.
  • Updating is a concept that includes changing stored information, adding new information to stored information, and erasing part or all of stored information.
  • FIG. 1 is a diagram showing an outline of the dental restoration production system 900 according to the first embodiment.
  • the dental restoration production system 900 includes an information processing system 100, a dental scanning system 910, and a modeling device 920.
  • The dental restoration production system 900 is used to produce a dental restoration for one or more teeth of interest to be restored.
  • the dental scan system 910 includes, for example, a terminal device, a dental scanner connected to the terminal device, and the like.
  • the dental scan system 910 uses a dental scanner to generate 3D data representing the shape of the patient's oral cavity.
  • the dental scanner may be an intraoral scanner or a scanner that reads a mold taken from the oral cavity.
  • the dental scan system 910 transmits the generated 3D data to the information processing system 100.
  • the 3D data transmitted to the information processing system 100 is data indicating a point cloud, but is not limited to this.
  • the information processing system 100 acquires 3D data transmitted from the dental scan system 910 and performs a process as described later.
  • the information processing system 100 generates 3D data showing the shape of the dental restoration.
  • Based on an editing operation by the user (for example, a worker such as a dental technician who produces the dental restoration), the information processing system 100 may generate 3D data showing the shape of the dental restoration reflecting the content of that editing operation.
  • the information processing system 100 outputs 3D data indicating the shape of the dental restoration to the modeling apparatus 920.
  • The modeling device 920 is a device that models a dental restoration having a three-dimensional shape using 3D data.
  • the modeling apparatus 920 is, for example, a known dental 3D printer or milling machine, but is not limited thereto.
  • The dental restoration is modeled based on the 3D data indicating its shape output by the information processing system 100. As a result, by using the dental restoration production system 900, the user can obtain the dental restoration modeled by the modeling device 920.
  • the devices included in the dental restoration production system 900 can communicate with each other via a network such as the Internet or a LAN, but the present invention is not limited to this.
  • one device may be directly connected to another device by a wired or wireless communication path.
  • the dental restoration production system 900 may include each of the above-mentioned devices and other devices.
  • The editing operation on the 3D data generated by the information processing system 100 may be performed by the user on a terminal device other than the information processing system 100. Further, the generation of 3D data indicating the shape of the dental restoration reflecting the content of the editing operation, and the output of that 3D data to the modeling device 920, may also be performed by a terminal device other than the information processing system 100.
  • As the electronic computers used in the dental scan system 910 and the information processing system 100, personal computers and server devices, as well as portable information terminals such as so-called smartphones and tablet-type information terminals, can be used. In the following description it is assumed that a so-called personal computer having a keyboard, a display, and the like (not shown) is used as the electronic computer of the information processing system 100, but the configuration is not limited to this.
  • The dental scan system 910 and the information processing system 100 may each be configured as a single device, as a plurality of devices operating in cooperation with each other, or as an electronic computer or the like built into another device.
  • the server may be a so-called cloud server, an ASP server, or the like, and the type thereof does not matter.
  • FIG. 2 is a diagram showing the configuration of the information processing system 100.
  • The information processing system 100 includes a storage unit 110, a receiving unit 120, a reception unit 130, a processing unit 140, an output unit 160, and a transmission unit 170.
  • the storage unit 110 includes a learning device storage unit 111, a reference shape information storage unit 112, and a learning target information storage unit 113.
  • the storage unit 110 is preferably a non-volatile recording medium, but can also be realized by a volatile recording medium.
  • information acquired by the receiving unit 120 and the processing unit 140 is stored in each unit of the storage unit 110, but the process of storing the information or the like in each unit of the storage unit 110 is not limited to this.
  • information or the like may be stored in the storage unit 110 via a recording medium, or information or the like transmitted via a communication line or the like may be stored in the storage unit 110.
  • the information or the like input via the input device may be stored in the storage unit 110.
  • the learning device is stored in the learning device storage unit 111.
  • the learning device is obtained by machine learning of the learning unit 141 as described later, for example.
  • the learner may be referred to as a classifier or a trained model.
  • the learner is used to obtain restoration information to represent the shape of the dental restoration. Details of the learner and its use will be described later.
  • the learning device is stored in association with a tooth identifier that identifies one or more teeth of interest.
  • the learning device is stored in association with the attention tooth for each specific one or more attention teeth.
  • a learning device corresponding to the tooth of interest is used based on the tooth identifier. Since a learning device is prepared for each tooth of interest, it is possible to acquire restoration information for expressing the shape of the dental restoration with high accuracy by using the learning device corresponding to the tooth of interest.
  • the learner does not have to be associated with the tooth identifier.
  • the learner may be one that can be used to obtain restoration information for any of two or more teeth of interest, regardless of which tooth of interest.
  • the learning device may be information prepared for each tooth region (maxillary side, mandibular side, left and right, etc.).
  • the reference shape information storage unit 112 stores reference shape information indicating a reference shape prepared in advance for each tooth.
  • the reference shape information of each tooth is stored, for example, in association with a tooth identifier that identifies the corresponding tooth.
  • the reference shape information is, for example, 3D data of the reference shape, but may be a parameter used to generate 3D data of the reference shape according to a predetermined processing method.
  • the reference shape is a shape that can be used as a template when repairing the tooth of interest.
  • a plurality of reference shape information may be prepared according to an index (for example, height, weight, etc.) representing the gender, age, and physique of the patient.
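  • As an illustration only (not part of the embodiment), the reference shape information storage unit 112 could be organized as a simple in-memory store keyed by a tooth identifier, optionally distinguished by a patient-attribute variant; the field names, the use of universal tooth numbers, and Python itself are assumptions made for this sketch.

```python
# Minimal sketch of a reference shape (template) store keyed by tooth identifier.
# Field names and the use of universal tooth numbers (1-32) are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, Tuple

import numpy as np


@dataclass
class ReferenceShape:
    vertices: np.ndarray       # (N, 3) template vertex coordinates
    marker_coords: np.ndarray  # (M, 3) marker coordinates used in registration
    meta: dict = field(default_factory=dict)  # e.g. {"sex": "F", "age_band": "40s"}


class ReferenceShapeStore:
    """Hypothetical stand-in for the reference shape information storage unit 112."""

    def __init__(self) -> None:
        self._shapes: Dict[Tuple[int, str], ReferenceShape] = {}

    def put(self, tooth_id: int, shape: ReferenceShape, variant: str = "default") -> None:
        self._shapes[(tooth_id, variant)] = shape

    def get(self, tooth_id: int, variant: str = "default") -> ReferenceShape:
        return self._shapes[(tooth_id, variant)]


# Usage example: register a (dummy) template for universal tooth number 30.
store = ReferenceShapeStore()
store.put(30, ReferenceShape(vertices=np.zeros((100, 3)), marker_coords=np.zeros((5, 3))))
template = store.get(30)
```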
  • the learning target information storage unit 113 stores two or more learning target information used for machine learning performed by the learning unit 141, which will be described later.
  • One piece of learning target information includes, for example, restoration information for representing the shape of a dental restoration applied to one or more teeth of interest, dentition shape information including the shapes of two or more teeth close to each other, and dentition identification information that identifies each of the two or more teeth included in the dentition shape information.
  • the dentition shape information is information representing at least a part of the shape in the oral cavity including the shape of the tooth corresponding to the dental restoration and the shape of all the teeth adjacent to the tooth in the present embodiment.
  • the dentition shape information is, for example, data having the same contents as the 3D data transmitted from the dental scan system 910. It does not matter whether the dentition shape information is the 3D data itself transmitted from the dental scanning system 910.
  • The dentition shape information does not have to include the shape of the tooth corresponding to the dental restoration itself; it may include only the shapes of the teeth close to the tooth corresponding to the dental restoration.
  • the "proximity tooth” means an arbitrary tooth in contact with one tooth, and includes an occlusal tooth and an adjacent tooth.
  • the dentition identification information is information for identifying a tooth whose shape is included in the dentition shape information with respect to the dentition shape information.
  • The dentition identification information includes, for example, for each tooth whose shape is included in the dentition shape information, information that specifies the portion of the dentition shape information corresponding to that tooth (for example, information that specifies the range indicating the shape of the tooth) associated with the tooth identifier that identifies the tooth. That is, in one piece of learning target information, the data indicating the shape of each tooth corresponding to the dentition shape information can be identified in association with a tooth identifier, based on the dentition shape information and the dentition identification information.
  • The data structure of the dentition shape information and the dentition identification information does not matter. They may be separate pieces of information, or the range of points indicating each tooth may be made identifiable by associating a tooth identifier with each point of the point cloud included in the dentition shape information.
  • For example, the dentition shape information may be information including the coordinate information of each point in a state where individual points can be identified, and the dentition identification information may be information in which individual points are associated with tooth identifiers. In that case, information recorded as pairs of point coordinates and tooth identifiers (for example, point cloud data labeled with which tooth each point belongs to) can be regarded as combining the dentition shape information and the dentition identification information. The state in which points can be identified may be, for example, a state in which each point carries an identifier as part of its information, or a state in which the points can be identified from the order in which the information of the individual points appears in the dentition shape information.
  • Alternatively, the dentition shape information may be information including the coordinate information of a plurality of points, and the dentition identification information may be information in which information specifying the space in which the points indicating each tooth exist is associated with the tooth identifier identifying that tooth.
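  • As a concrete illustration of the case just described, dentition shape information and dentition identification information can be encoded together as a labeled point cloud; the sketch below assumes numpy arrays and universal tooth numbers as labels, which are illustrative choices rather than requirements of the embodiment.

```python
# Sketch of a labeled point cloud: the coordinates play the role of the dentition
# shape information, and the per-point tooth labels play the role of the dentition
# identification information. Layout and label scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

points = rng.normal(size=(1000, 3))            # (x, y, z) coordinates of scanned points
tooth_labels = rng.integers(1, 33, size=1000)  # universal tooth number (1-32) per point


def points_of_tooth(points: np.ndarray, labels: np.ndarray, tooth_id: int) -> np.ndarray:
    """Return the sub-point-cloud belonging to one tooth identifier."""
    return points[labels == tooth_id]


# 3D data indicating the shape of tooth 19, identified via the labels.
tooth_19 = points_of_tooth(points, tooth_labels, 19)
print(tooth_19.shape)
```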
  • Restoration information is information that represents the shape of the dental restoration.
  • the restoration information is, for example, information corresponding to the difference in shape between the reference shape and the shape of the dental restoration corresponding to one tooth.
  • the restoration information is a parameter group indicating the amount of deformation from the reference shape to the shape of the dental restoration for one tooth. That is, based on the restoration information and the reference shape information of one tooth, it is possible to generate 3D data showing the shape of the dental restoration when the tooth is the tooth of interest.
  • the restoration information may include only information related to the shape of a part of the dental restoration.
  • For example, the restoration information may include only information related to the shape of the portion of the tooth of interest to be restored that faces another tooth (for example, an occluding tooth with which it meshes vertically, or a neighboring adjacent tooth).
  • the shape of the other part of the dental restoration can be, for example, a shape based on the shape of the corresponding part of the reference shape.
  • the restoration information may be 3D data showing the shape of the dental restoration as it is.
  • the learning target information is prepared in advance based on, for example, a case where treatment has been performed in the past, and is stored in the learning target information storage unit 113.
  • the restoration information included in the learning target information is used, for example, for modeling the reference shape information stored in the reference shape information storage unit 112 and the dental restoration used for treatment. It can be obtained based on 3D data.
  • The restoration information used when a dental restoration is produced in the dental restoration production system 900 (which may be information reflecting the result of an editing operation by the user, or information acquired by the second acquisition unit 152 described later) may be associated with the dentition shape information and the dentition identification information used as inputs at that time and accumulated in the learning target information storage unit 113. Such accumulation processing may be performed by, for example, the processing unit 140.
  • the receiving unit 120 receives the information transmitted from another device.
  • the receiving unit 120 stores the received information in, for example, the storage unit 110.
  • the receiving unit 120 is usually realized by a wireless or wired communication means, but may be realized by a means for receiving a broadcast.
  • the reception unit 130 receives various input operations to the information processing system 100 performed by the user.
  • The reception unit 130 accepts, for example, information input by input operations performed using an input means (not shown) connected to the information processing system 100, or a reading device such as a code reader (not shown) connected to the information processing system 100 (including, for example, information read by that device).
  • the reception unit 130 may receive information related to an input operation or the like transmitted via another device connected via a network or the like.
  • the received information is stored in, for example, the storage unit 110.
  • the input means that can be used for inputting information that can be accepted by the reception unit 130 may be any input means such as a numeric keypad, a keyboard, a mouse, or a menu screen.
  • the reception unit 130 can be realized by a device driver for input means such as a numeric keypad or a keyboard, control software for a menu screen, or the like.
  • the processing unit 140 includes a learning unit 141, a first acquisition unit 151, and a second acquisition unit 152.
  • The processing unit 140 performs various processes, for example the processes performed by each of its units as described below.
  • The processing unit 140 can usually be realized by an MPU (including a CPU and/or a GPU), a memory, and the like.
  • the processing procedure of the processing unit 140 is usually realized by software, and the software is recorded in a recording medium such as a ROM. However, it may be realized by hardware (dedicated circuit).
  • the learning unit 141 acquires two or more learning target information, and generates and acquires a learning device by performing machine learning using the acquired learning target information.
  • the learning device is obtained by machine learning performed by using two or more learning target information.
  • The learning device takes as input information based on information indicating the shapes of two or more teeth close to each other, such as the dentition shape information, and information identifying each of the two or more teeth, such as the dentition identification information, and outputs output information corresponding to the restoration information. That is, the learning device can be said to be information for acquiring, from information including the shapes of two or more teeth close to each other, restoration information for expressing the shape of the dental restoration applied to some of those two or more teeth.
  • the learning unit 141 generates (learns) a learning device by using a machine learning method, for example, as follows. That is, the learning unit 141 generates input information based on the dentition shape information and the dentition identification information for each of the two or more learning target information. Then, the information of the combination of the input information and the output information obtained from each of the two or more learning target information is given to the module for configuring the learning device of machine learning to generate and acquire the learning device. The learning unit 141 stores the acquired learning device in the learning device storage unit 111.
  • As the machine learning method, a method applicable to regression problems that output numerical data from other numerical data can be used; for example, deep learning (for example, deep feed-forward neural networks), random forest, polynomial regression, SGD regression, LASSO regression, Ridge regression, and the like can be applied.
  • functions in various known machine learning frameworks and various existing libraries can be used.
  • In the present embodiment, machine learning is performed using dimension-reduced input information and output information based on the restoration information. For the dimensionality reduction, for example, a known principal component analysis (PCA) method can be used, but the method is not limited to this.
  • the learning unit 141 multidimensionally characterizes the features of two or more teeth included in the dentition shape information, for example, based on the dentition identification information and the dentition shape information included in each learning target information. Generates first vector information represented as a vector of. Then, by reducing the dimension of the generated first vector information, the second vector information representing the feature vector having a lower dimension than the vector represented by the first vector information is generated.
  • the learning unit 141 generates the third vector information from the restoration information by performing the dimension reduction in the same manner as the second vector is generated.
  • the learning unit 141 generates a learning device by using two or more pieces of information on the combination of the second vector information and the third vector information as the information of the combination of the input information and the output information. Since the learning device is generated using the information obtained by reducing the dimensions in this way, the amount of calculation required for performing machine learning can be reduced. In addition, the amount of calculation required for processing performed using the learner can be reduced. Before performing the dimension reduction, mesh registration described later is performed.
  • the combination of the first vector information and the information representing the feature vector indicating the restoration information may be used as the information of the combination of the input information and the output information.
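  • The following sketch shows one possible concrete form of this training step: the first vector information is reduced with PCA to obtain the second vector information, the restoration vectors are reduced to obtain the third vector information, and a regressor is fitted on the resulting pairs. The use of scikit-learn, Ridge regression, random placeholder data, and the chosen dimensions are assumptions for illustration only.

```python
# Illustrative training sketch for one tooth of interest (assumptions: scikit-learn,
# Ridge regression, flattened mesh-registered vertex vectors as first vector info).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_cases = 50          # number of learning target information entries
in_dim = 4 * 300      # e.g. 4 neighboring teeth x 100 registered vertices x 3 coords
out_dim = 300         # restoration vector (e.g. deformation of the template vertices)

first_vectors = rng.normal(size=(n_cases, in_dim))          # from dentition shape/id info
restoration_vectors = rng.normal(size=(n_cases, out_dim))   # from restoration info

# Dimension reduction: first vector info -> second vector info (input side).
pca_in = PCA(n_components=20).fit(first_vectors)
second_vectors = pca_in.transform(first_vectors)

# Dimension reduction: restoration info -> third vector info (output side).
pca_out = PCA(n_components=10).fit(restoration_vectors)
third_vectors = pca_out.transform(restoration_vectors)

# The "learning device": a regressor mapping second vector info to third vector info.
learner = Ridge(alpha=1.0).fit(second_vectors, third_vectors)

# The fitted (pca_in, pca_out, learner) triple would be stored per tooth identifier
# in the learning device storage unit 111.
```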
  • In the machine learning, the learning unit 141 performs optimization by adjusting the parameters of a cost function including a penalty term so that the evaluation regarding the shape, including the evaluation of interference, becomes highest.
  • the learning unit 141 evaluates the interference between the dental restoration and the surrounding teeth based on the learning target information used for learning, and changes the parameters of the penalty term (function and / or value) based on the evaluation result. By doing so, optimization is performed.
  • the parameter change of the penalty term based on the evaluation result can be set to correspond to the rule set regarding the interference between the dental restoration and the surrounding teeth.
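  • A minimal sketch of a cost function with an adjustable interference penalty term is given below; the clearance-based interference measure and the idea of raising the penalty weight when interference is detected are illustrative assumptions, not the embodiment's concrete formulation.

```python
# Sketch of a cost function with an adjustable penalty term for interference.
# The interference measure and the weight-adjustment heuristic are assumptions.
import numpy as np


def interference_penalty(pred_points: np.ndarray, neighbor_points: np.ndarray,
                         clearance: float = 0.05) -> float:
    """Penalize predicted restoration points that come closer than `clearance`
    to any point of the surrounding teeth (crude stand-in for a collision check)."""
    d = np.linalg.norm(pred_points[:, None, :] - neighbor_points[None, :, :], axis=-1)
    violation = np.clip(clearance - d.min(axis=1), 0.0, None)
    return float(np.sum(violation ** 2))


def cost(pred: np.ndarray, target: np.ndarray,
         pred_points: np.ndarray, neighbor_points: np.ndarray,
         penalty_weight: float) -> float:
    """Fit error plus weighted interference penalty."""
    fit_error = float(np.mean((pred - target) ** 2))
    return fit_error + penalty_weight * interference_penalty(pred_points, neighbor_points)


# Usage example with placeholder data; the learning unit could raise `penalty_weight`
# whenever an evaluated restoration is found to interfere with surrounding teeth.
rng = np.random.default_rng(0)
print(cost(rng.normal(size=10), rng.normal(size=10),
           rng.normal(size=(100, 3)), rng.normal(size=(400, 3)), penalty_weight=1.0))
```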
  • the first acquisition unit 151 acquires the target shape information and the target identification information for identifying each of the attention tooth and the adjacent tooth (occlusal tooth and adjacent tooth) corresponding to the target shape information.
  • the target shape information is 3D data transmitted from the dental scan system 910, and is information including the shape of the tooth of interest and the shape of one or more adjacent teeth close to the tooth of interest.
  • In the present embodiment, the target shape information is information indicating a point cloud representing the shape of at least a part of the patient's oral cavity, and more specifically, information including the shape of the tooth of interest and the shapes of all the adjacent teeth close to the tooth of interest.
  • the target shape information does not have to include the shape of the tooth of interest.
  • the target identification information is information for identifying the tooth whose shape is included in the target shape information with respect to the target shape information.
  • The target identification information includes, for example, for each tooth whose shape is included in the target shape information, information that specifies the portion of the target shape information corresponding to that tooth (for example, information that specifies the range indicating the shape of the tooth) associated with the tooth identifier that identifies the tooth. That is, based on the target shape information and the target identification information, the 3D data indicating the shape of each tooth corresponding to the target shape information can be specified in association with a tooth identifier.
  • the target identification information is information that labels which part of the target shape information corresponds to which tooth.
  • The data structure of the target identification information and the target shape information does not matter. They may be separate pieces of information, or the range of points indicating each tooth may be made identifiable by associating a tooth identifier with each point of the point cloud included in the target shape information. That is, the distinction between the target identification information and the target shape information does not have to be strict; information in which a tooth identifier is associated with each point in the point cloud, or information including 3D data of the tooth shapes divided for each tooth (for example, point cloud data labeled with which tooth each point corresponds to), may be interpreted as information in which the target identification information and the target shape information are combined.
  • For example, the target shape information may be information including the coordinate information of each point in a state where individual points can be identified, and the target identification information may be information in which individual points are associated with tooth identifiers. The state in which points can be identified may be, for example, a state in which each point carries an identifier as part of its information, or a state in which the points can be identified from the order in which the information of the individual points appears in the target shape information. Alternatively, the target shape information may be information including the coordinate information of a plurality of points, and the target identification information may be information in which information specifying the space in which the points indicating each tooth exist is associated with the tooth identifier identifying that tooth.
  • the target shape information and the dentition shape information are similar 3D data in that they both include the shape of one tooth and the shape of a tooth adjacent to the tooth.
  • the target identification information and the dentition identification information both include a portion (range) indicating the shape of each tooth in the 3D data and information for identifying which tooth the portion indicates. In terms of inclusion, it can be said to be similar information.
  • That is, the combination of the dentition shape information and the dentition identification information and the combination of the target shape information and the target identification information are both information that includes, for each of a plurality of teeth, the shape of the tooth and information identifying which tooth it is. In the present embodiment, the information used for processing in the learning unit 141 is called dentition shape information or dentition identification information, and the information acquired by the first acquisition unit 151 is called target shape information or target identification information.
  • the first acquisition unit 151 generates and acquires the target identification information based on the target shape information acquired from the dental scan system 910.
  • the first acquisition unit 151 specifies a region including the point cloud corresponding to each tooth in the point cloud indicated by the target shape information.
  • the first acquisition unit 151 generates the target identification information by associating the specified region with the tooth identifier that identifies the tooth for each tooth.
  • The target identification information may be generated by the first acquisition unit 151 using, for example, a machine learning method as follows. That is, a learning device that takes target shape information as input and outputs target identification information (which may be information indicating a labeled point cloud) for that target shape information is configured in advance using a machine learning method. Specifically, for example, two or more pairs of target shape information and target identification information (for example, pairs of information indicating an unlabeled point cloud and information indicating the corresponding labeled point cloud) are prepared and given to a module for configuring a machine learning learner, and the configured learner is stored in the storage unit 110.
  • the first acquisition unit 151 may be realized by combining the Dynamic Graph Convolutional Neural Network (DGCNN) and the Mask R-CNN.
  • the Mask R-CNN identifies each tooth constituting the dentition by using an image obtained by projecting the dentition from a plurality of viewpoints.
  • the first acquisition unit 151 may be configured to acquire the target identification information generated in advance corresponding to the target shape information by receiving it from an external device or the like.
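  • As a greatly simplified stand-in for the segmentation learner described above (the embodiment mentions a DGCNN combined with Mask R-CNN), the sketch below trains a per-point classifier on labeled point clouds and applies it to an unlabeled one; the use of raw coordinates as features and a random forest classifier are assumptions for illustration only.

```python
# Highly simplified sketch of the identification learner: per-point classification
# of an oral point cloud into tooth labels. The embodiment describes DGCNN +
# Mask R-CNN; the per-point RandomForest classifier here is only an illustrative stand-in.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Training pairs: point clouds (coordinates as features) with per-point tooth labels.
train_points = rng.normal(size=(5000, 3))
train_labels = rng.integers(1, 33, size=5000)

segmenter = RandomForestClassifier(n_estimators=50).fit(train_points, train_labels)

# Applying the learner to new target shape information (an unlabeled point cloud)
# yields target identification information (a tooth label per point).
target_points = rng.normal(size=(800, 3))
target_labels = segmenter.predict(target_points)
```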
  • The second acquisition unit 152 uses the learning device stored in the learning device storage unit 111 to generate and acquire, from the target identification information and the target shape information acquired by the first acquisition unit 151, restoration information representing the shape of the dental restoration corresponding to the tooth of interest. In addition, the second acquisition unit 152 generates and acquires 3D data representing the shape of the dental restoration using the restoration information.
  • the restoration information is generated as follows, for example. That is, the second acquisition unit 152 generates input information to be input to the learner based on the information acquired by the first acquisition unit 151. When the input information is generated, the second acquisition unit 152 inputs the input information to the learning device stored in the learning device storage unit 111, generates and acquires the output information. In this case, the second acquisition unit 152 acquires the learning device corresponding to the target identification information acquired by the first acquisition unit 151 from the learning device storage unit 111, and generates output information using the learning device. In other words, the second acquisition unit 152 generates output information using the learner acquired from the learner storage unit 111 using the tooth identifier of the tooth that is the tooth of interest.
  • the second acquisition unit 152 generates and acquires the restoration information based on the output information. That is, the second acquisition unit 152 acquires the restoration information by using the machine learning method.
  • As the machine learning method, a method applicable to regression problems that output numerical data from other numerical data can be used; for example, deep learning (for example, deep feed-forward neural networks), random forest, polynomial regression, SGD regression, LASSO regression, Ridge regression, and the like can be applied.
  • functions in various known machine learning frameworks and various existing libraries can be used.
  • the generation of input information and the generation of restoration information based on output information are performed using the same method as that performed in the learning unit 141.
  • For example, the second acquisition unit 152 generates first vector information representing a multidimensional vector based on the target identification information and the target shape information acquired by the first acquisition unit 151. Then, by reducing the dimension of the generated first vector information, it generates second vector information representing a feature vector of lower dimension than the vector represented by the first vector information, and uses this as the input information. Further, the second acquisition unit 152 applies the inverse transformation of the dimension reduction performed when generating the input information to the output information obtained using the learning device, and uses the inverse-transformed information as the restoration information.
  • one first vector information or second vector information may be generated based on the shape of each of the two or more adjacent teeth. Further, two or more first vector information and two or more second vector information corresponding to two or more adjacent teeth may be generated.
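  • The corresponding inference step might look roughly like the following sketch: the first vector information built from the target information is projected with the same PCA used during training, fed to the stored learning device, and the output is inverse-transformed back into restoration information. This continues the illustrative training sketch given earlier and is not the embodiment's actual code.

```python
# Illustrative inference sketch; pca_in, pca_out and learner are the fitted objects
# from the training sketch above (all assumptions, not the embodiment's actual code).
import numpy as np


def acquire_restoration_info(first_vector: np.ndarray, pca_in, pca_out, learner) -> np.ndarray:
    """Second acquisition unit, roughly: first vector info -> restoration info."""
    second_vector = pca_in.transform(first_vector.reshape(1, -1))  # dimension reduction
    output_info = learner.predict(second_vector)                   # learning device output
    restoration_vector = pca_out.inverse_transform(output_info)    # inverse transformation
    return restoration_vector.ravel()
```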
  • When generating the first vector information at the time of generating the input information, the second acquisition unit 152 performs mesh registration (sometimes referred to as point cloud registration). Mesh registration is performed, for example, as follows.
  • For the 3D data indicating each tooth in the point cloud, the second acquisition unit 152 uses the reference shape information of the corresponding tooth stored in the reference shape information storage unit 112 (hereinafter referred to as a template) to adjust the number of points to a predetermined number corresponding to the template. Then, the second acquisition unit 152 adjusts the position of each point in the point cloud so that the overall shape follows the template. Specifically, the points are adjusted so that, while the overall shape of each tooth is maintained, each point does not move too far from its neighboring points, the deviation between the point cloud data and the template is reduced, and the deviation between the point cloud data and the marker coordinates of the template is reduced. This makes it possible to acquire highly accurate restoration information.
  • each vertex of the template is associated with the nearest vertex of the point cloud data (3D data) corresponding to each tooth before being moved.
  • As a result, the point cloud data corresponding to each tooth can be expressed with the same number of points as the template.
  • In that case, fine unevenness expressed by the 3D data of each tooth may be filled in. Therefore, for example, only vertices whose curvature is equal to or less than a threshold may be selected from the vertices of the point cloud data and associated with the vertices of the template, and mesh registration may be performed using this association. As a result, it is possible to generate restoration information with good meshing.
  • Note that the learning unit 141 should also perform mesh registration in the same manner on the dentition identification information and the dentition shape information when generating the first vector information as part of generating the input information during machine learning.
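  • The nearest-vertex correspondence step of the mesh registration described above might be sketched as follows, using a k-d tree to pair each template vertex with its closest scan point; the deformation optimization and the curvature-threshold filtering are omitted, and scipy is an assumed dependency of this sketch.

```python
# Sketch of the nearest-vertex correspondence used in mesh registration: each
# template vertex is associated with the nearest point of the tooth's scan data,
# so the tooth can be re-expressed with the template's fixed number of vertices.
# The deformation optimization and curvature filtering are omitted (assumptions).
import numpy as np
from scipy.spatial import cKDTree


def register_to_template(template_vertices: np.ndarray, scan_points: np.ndarray) -> np.ndarray:
    """Return, for each template vertex, the coordinates of its nearest scan point."""
    tree = cKDTree(scan_points)
    _, idx = tree.query(template_vertices)  # nearest-neighbour indices
    return scan_points[idx]


rng = np.random.default_rng(0)
template = rng.normal(size=(100, 3))  # template with a predetermined vertex count
scan = rng.normal(size=(2500, 3))     # point cloud of one tooth from the scan
registered = register_to_template(template, scan)
print(registered.shape)               # (100, 3): same vertex count as the template
```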
  • acquisition of 3D data representing the shape of the dental restoration using the restoration information is performed, for example, as follows. That is, when the second acquisition unit 152 acquires the restoration information corresponding to the attention tooth, the second acquisition unit 152 acquires the reference shape information of the attention tooth from the reference shape information storage unit 112. Then, the second acquisition unit 152 generates 3D data indicating the shape of the dental restoration by using the acquired reference shape information and the restoration information.
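  • If, as described above, the restoration information indicates the amount of deformation from the reference shape, generating the restoration's 3D data can be sketched as below; representing the restoration information as a flattened per-vertex displacement of the template is an assumption made for illustration.

```python
# Sketch: generating 3D data of the dental restoration from the reference shape
# (template vertices) and the restoration information, assuming the restoration
# information is a flattened per-vertex displacement from the reference shape.
import numpy as np


def restoration_mesh(template_vertices: np.ndarray, restoration_vector: np.ndarray) -> np.ndarray:
    """Reference shape + deformation parameters -> restoration vertex coordinates."""
    displacement = restoration_vector.reshape(template_vertices.shape)
    return template_vertices + displacement


rng = np.random.default_rng(0)
template = rng.normal(size=(100, 3))
restoration_vector = 0.1 * rng.normal(size=300)  # e.g. output of the inverse PCA step
vertices_3d = restoration_mesh(template, restoration_vector)
```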
  • When the second acquisition unit 152 acquires the 3D data, the learning unit 141 generates a learning device using that 3D data. In this case, the learning unit 141 evaluates the interference between the dental restoration and the surrounding teeth based on the target shape information acquired by the first acquisition unit 151 and the restoration information or 3D data acquired by the second acquisition unit 152. Then, the learning unit 141 changes the parameter of the penalty term of the cost function based on the evaluation result. As a result, restoration information subsequently generated using the learning device reflects the predetermined rule.
  • The output unit 160 outputs information, for example, by transmitting it to another device using the transmission unit 170 or the like, or by displaying it on a display device provided in the information processing system 100.
  • the output unit 160 may or may not include an output device such as a display or a speaker.
  • The output unit 160 can be realized by driver software for an output device, or by driver software for an output device together with the output device, or the like.
  • the output unit 160 outputs the 3D data acquired by the second acquisition unit 152.
  • the transmission unit 170 transmits the information to other devices constituting the dental restoration production system 900 via the network.
  • the transmission unit 170 transmits, for example, the information output by the output unit 160.
  • the transmission unit 170 is usually realized by wireless or wired communication means, but may be realized by broadcasting means.
  • FIG. 3 is a flowchart showing an example of the operation of the information processing system 100.
  • the information processing system 100 performs an operation related to the output of 3D data of the dental restoration, for example, as follows.
  • Step S101 The learning unit 141 acquires a learning device by the "learning device acquisition / storage process" described later, and stores the learning device in the learning device storage unit 111.
  • Note that the process of step S101 can be omitted when a learning device has been stored in the learning device storage unit 111 in advance.
  • Step S102 The first acquisition unit 151 acquires target shape information including the shape of the tooth of interest.
  • Step S103 The first acquisition unit 151 performs a process of extracting information on each tooth by the "identification information acquisition process" described later, and acquires the target identification information based on the target shape information.
  • Step S104 The second acquisition unit 152 acquires output information (output information acquisition process).
  • Step S105 The second acquisition unit 152 acquires the restoration information by performing inverse transformation on the dimension-reduced information which is the output information.
  • Step S106 The second acquisition unit 152 acquires the reference shape information of the tooth of interest from the reference shape information storage unit 112.
  • Step S107 The second acquisition unit 152 acquires 3D data indicating the shape of the dental restoration for the tooth of interest based on the acquired restoration information and the reference shape information.
  • Step S108 The output unit 160 outputs the acquired 3D data so that it can be handled by CAD.
  • the user can edit the data on the CAD by using the CAD / CAM system, and can model the dental restoration by the modeling device 920.
  • FIG. 4 is a flowchart showing an example of the learning device acquisition / accumulation process performed by the information processing system 100.
  • Step S121 The learning unit 141 acquires two or more learning target information including restoration information related to the same attention tooth from the learning target information storage unit 113.
  • Step S122 The learning unit 141 executes mesh registration based on the dentition identification information and the dentition shape information of each learning target information. Further, the learning unit 141 generates the first vector information based on the information generated by the mesh registration.
  • Step S123 The learning unit 141 reduces the dimension of the first vector information and generates the second vector information.
  • the second vector information is input information to the learner.
  • Step S124 The learning unit 141 reduces the dimension of the vector indicating the restoration information of each learning target information, and generates the third vector information.
  • the third vector information becomes the output information of the learner.
  • Step S125 The learning unit 141 performs machine learning using a combination of the second vector information and the third vector information for each learning target information.
  • the learning unit 141 acquires a learning device by performing machine learning.
  • Step S126 The learning unit 141 stores the acquired learning device in the learning device storage unit 111 in association with the tooth identifier of the tooth of interest. The process returns to the process shown in FIG.
  • FIG. 5 is a flowchart showing an example of the identification information acquisition process performed by the information processing system 100.
  • Step S141 The first acquisition unit 151 acquires, from the storage unit 110, the learner configured by machine learning using learning data that takes point cloud data as input and outputs labeled point cloud data.
  • Step S142 The first acquisition unit 151 inputs the acquired target shape information into the acquired learner.
  • Step S143 The first acquisition unit 151 acquires the target identification information which is the output of the learner. As a result, the first acquisition unit 151 can acquire, for example, the target shape information (target identification information) in which the point cloud is labeled.
  • the process returns to the process shown in FIG.
  • FIG. 6 is a flowchart showing an example of the output information acquisition process performed by the information processing system 100.
  • Step S161 The second acquisition unit 152 acquires the reference shape information from the reference shape information storage unit 112 for each tooth corresponding to the target identification information.
  • Step S162 The second acquisition unit 152 performs mesh registration for each tooth corresponding to the target identification information using the acquired reference shape information.
  • Step S163 The second acquisition unit 152 generates the first vector information using the information generated by the mesh registration.
  • Step S164 The second acquisition unit 152 reduces the dimension of the first vector information and generates the second vector information.
  • the second vector information is input information to the learner.
  • Step S165 The second acquisition unit 152 acquires the learning device corresponding to the tooth of interest from the learning device storage unit 111.
  • Step S166 The second acquisition unit 152 inputs the input information to the acquired learner and acquires the output information. The process returns to the process shown in FIG.
  • FIG. 7 is a diagram illustrating a specific example of 3D data showing the shape of the dental restoration acquired by the second acquisition unit 152.
  • In FIG. 7, one tooth of interest E and its neighboring teeth A, B, C, and D are schematically shown.
  • The tooth of interest E is, for example, one of the lower teeth, and the neighboring teeth A and D are the teeth located on either side of, and adjacent to, the tooth of interest E. The neighboring teeth B and C are teeth located on the upper side; they can be said to be close to the tooth of interest E in the vertical direction and are the teeth that occlude with the tooth of interest E.
  • By using the learning device corresponding to the tooth of interest E, 3D data of a dental restoration having an appropriate shape for the tooth of interest E can be obtained.
  • This learning device is generated using learning target information including the dentition shape information corresponding to the neighboring teeth A, B, C, and D and the restoration information corresponding to the tooth of interest E.
  • Using as input information the second vector information obtained by the first acquisition unit 151 through processing such as dimension reduction of the target shape information corresponding to the proximity teeth A, B, C, and D (shown in the figure, for each of the proximity teeth, as a set of n subscripted feature values), the learner acquires output information describing the characteristics of the attention tooth E (shown in the figure as the corresponding n values marked with an overline).
  • Note that the combination of teeth whose dentition shape information was used to generate the learner must match the combination of teeth whose target shape information is used at acquisition time. That is, when a plurality of learners generated from different combinations of dentition shape information exist for one tooth of interest, the learner generated using the combination of dentition shape information corresponding to the combination of target shape information to be used may be selected.
  • In the example of FIG. 7, the second vector information for each of the adjacent teeth A, B, C, and D is shown; in other words, two or more pieces of second vector information are used as the input information.
  • Alternatively, a single piece of second vector information may be constructed from the shapes of the adjacent teeth A, B, C, and D and used as the input information, as in the sketch below.
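  • The following small helper illustrates that alternative: the per-tooth vectors are concatenated in a fixed tooth order so that the same combination is used at learning time and at acquisition time. The function name and the tooth labels are illustrative only.

```python
import numpy as np

def combine_second_vectors(second_vectors_by_tooth, tooth_order=("A", "B", "C", "D")):
    """Build one piece of second vector information from the per-tooth vectors,
    keeping a fixed tooth order so that training and inference inputs line up."""
    return np.concatenate([np.asarray(second_vectors_by_tooth[t]).ravel()
                           for t in tooth_order])
```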
  • In this way, a plurality of pieces of second vector information obtained from the dentition shape information of a plurality of teeth adjacent to an arbitrarily selected tooth are input to the learner, which has been adjusted using the output information obtained from the dentition shape information of that selected tooth, and output information corresponding to the attention tooth is generated. Then, from the generated output information, restoration information representing the shape of the dental restoration corresponding to the attention tooth is generated.
  • The generation of the output information from the dentition shape information of the selected tooth is performed by, for example, PCA, and the generation of the restoration information from the output information is performed by, for example, the inverse PCA transformation, as illustrated below.
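  • As an illustration of that inverse transformation, the sketch below assumes, consistent with the treatment of restoration information as a difference from a reference shape, that the output of the learner decodes to per-vertex offsets from the reference shape of the tooth of interest; the function names are hypothetical.

```python
import numpy as np

def restoration_from_output(output_info, learner, reference_shape):
    """Inverse-transform the learner's output information back into 3D data.
    Here the restoration information is taken to be per-vertex offsets from the
    reference shape (one possible reading of the embodiment)."""
    offsets = learner["pca_out"].inverse_transform(output_info)   # inverse PCA
    offsets = offsets.reshape(reference_shape.shape)
    return reference_shape + offsets   # 3D data representing the dental restoration
```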
  • In this way, 3D data representing the shape of the dental restoration can be easily output. That is, the modeling work of the dental restoration by the worker, which conventionally required a considerable amount of man-hours, can be made unnecessary or its man-hours significantly reduced, and highly accurate 3D data representing the shape of the dental restoration can be obtained by machine learning.
  • Further, since the information processing system 100 can output, based on the learner, 3D data representing the shape of the dental restoration according to the scan data of each patient, it is possible to reduce the person-dependent portion of the dental restoration production process.
  • In addition, by including a penalty term in the cost function of the machine learning, it is possible to obtain 3D data that complies with rules regarding the interference between the dental restoration and the surrounding teeth.
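  • One way such a penalty term could look is sketched below: vertices of the predicted restoration that come closer to a neighboring tooth than a required clearance add to the cost, weighted by a parameter that can be adjusted based on the interference evaluation. The distance-based formulation is an assumption; the patent only states that a penalty term is included in the cost function.

```python
import numpy as np

def interference_penalty(restoration_vertices, neighbor_vertices, min_clearance=0.0):
    """Grows when the predicted restoration penetrates (or comes too close to)
    a neighboring tooth: closest-point distances below min_clearance are summed."""
    d = np.linalg.norm(restoration_vertices[:, None, :] - neighbor_vertices[None, :, :], axis=-1)
    closest = d.min(axis=1)
    return np.clip(min_clearance - closest, 0.0, None).sum()

def cost(pred, target, restoration_vertices, neighbor_vertices, lam=1.0):
    # Ordinary fitting error plus the weighted penalty; lam is the parameter that
    # would be changed based on the evaluation of the acquired shapes.
    return np.mean((pred - target) ** 2) + lam * interference_penalty(
        restoration_vertices, neighbor_vertices)
```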
  • a learning device can be prepared for each tooth, and restoration information can be generated using the learning device corresponding to the tooth of interest. Therefore, it is possible to output 3D data representing the shape of the dental restoration with higher accuracy.
  • Input information generated using attribute information, such as attributes of the patient, may also be used.
  • For example, classification information on each patient's gender, age, anthropological classification, lifestyle-related information, and the like may be used.
  • the reference shape information corresponding to the attribute information may be used for generating the restoration information.
  • processing in this embodiment may be realized by software. Then, this software may be distributed by software download or the like. Further, this software may be recorded on a recording medium such as a CD-ROM and disseminated.
  • The software that realizes the information processing system 100 in this embodiment is the following program. That is, this program is a program for outputting information for producing a dental restoration used for one or more attention teeth to be restored, and runs on a computer of an information processing system provided with a learning device storage unit in which a learner, obtained using dentition shape information including the shapes of two or more mutually adjacent teeth, for acquiring restoration information representing the shape of a dental restoration corresponding to a part of the two or more teeth is stored.
  • The program causes the computer to function as a first acquisition unit that acquires target shape information including the shape of the attention tooth and the shapes of one or more proximity teeth adjacent to the attention tooth, together with target identification information identifying the teeth whose shapes are included in the target shape information, and as a second acquisition unit that uses the learner to acquire, from the target identification information and the target shape information acquired by the first acquisition unit, restoration information representing the shape of the dental restoration corresponding to the attention tooth.
  • the user can use the annotation tool which is the software realized by the operation of the first acquisition unit 151.
  • the first acquisition unit 151 can acquire the target identification information based on the user's input operation using the annotation tool and the target shape information.
  • The first acquisition unit 151 checks, for the acquired target shape information, the relationship between each point included in the point cloud and its surrounding points. Then, by identifying the parts of the point cloud having a predetermined relationship and segmenting the point cloud, the areas each including the point cloud corresponding to one tooth are estimated from the point cloud indicated by the target shape information.
  • For example, the first acquisition unit 151 calculates the curvature (for example, the minimum curvature) at each vertex of the mesh formed using the point cloud, and classifies the point cloud based on the calculated curvature. Specifically, the first acquisition unit 151 identifies the portions whose curvature is larger than a predetermined first threshold value (portions whose radius of curvature is smaller than a predetermined second threshold value). Then, a process of converting the identified portions into a single line in the three-dimensional space is performed, and the boundaries along which the point cloud is to be divided are determined.
  • the annotation tool may accept the adjustment operation for changing the first threshold value or the second threshold value by the user, and classify the point cloud according to the received adjustment operation.
  • For example, the threshold value may be set according to the position of the slider on a slider bar.
  • For example, the output unit 160 accepts the adjustment operation while outputting, to the display device, the shape indicated by the target shape information and information indicating the areas to be divided, and each time an adjustment operation is performed, the display mode on the display device may be immediately changed according to the adjustment operation.
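  • A compact sketch of this curvature-thresholding step is given below: vertices whose (minimum) curvature exceeds the first threshold are treated as boundary candidates, and the remaining mesh vertices are grouped into connected regions. Obtaining per-vertex curvature from the mesh is assumed to be done beforehand, for example with a mesh-processing library; the function name is illustrative.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def segment_by_curvature(curvatures, edges, first_threshold):
    """curvatures: (V,) per-vertex minimum curvature; edges: (E, 2) mesh edges.
    Returns a region id per vertex (-1 for boundary vertices), i.e. the raw
    partition that the user then merges and labels in the annotation tool."""
    curvatures = np.asarray(curvatures)
    edges = np.asarray(edges)
    boundary = curvatures > first_threshold   # radius of curvature below the second threshold
    keep = ~(boundary[edges[:, 0]] | boundary[edges[:, 1]])
    e = edges[keep]
    adj = coo_matrix((np.ones(len(e)), (e[:, 0], e[:, 1])), shape=(len(curvatures),) * 2)
    _, labels = connected_components(adj, directed=False)
    labels[boundary] = -1
    return labels

# Moving the slider in the annotation tool would simply re-run this
# with the newly chosen threshold value.
```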
  • The annotation tool is also used by the user to perform a labeling operation that associates each area with the tooth it represents. Based on the result of the labeling operation, the point cloud of each region is associated with a tooth identifier.
  • For example, the first acquisition unit 151 causes the output unit 160 to display, on the display, the point cloud included in an estimated area and the point cloud not included in the area in mutually different display modes, and acquires the annotation information input by the user.
  • Displaying in different display modes includes, for example, using different display colors, different dot sizes, different background colors, or different background patterns, but is not limited to these.
  • the annotation information is information that identifies the tooth indicated by each region.
  • the first acquisition unit 151 can specify the area including the point group corresponding to each tooth based on the annotation information input by the user.
  • the first acquisition unit 151 associates the point cloud included in each region with the tooth identifier corresponding to the region based on the annotation information.
  • the target identification information corresponding to the target shape information can be acquired by using the annotation tool.
  • the first acquisition unit 151 performs mesh registration with respect to the target shape information, but the present invention is not limited to this.
  • FIG. 8 is a diagram showing an example of the operation performed by the first acquisition unit 151 according to the second embodiment.
  • FIG. 8 among the operations performed by the first acquisition unit 151, the operations related to the acquisition of the target identification information using the annotation tool are shown.
  • the process shown in FIG. 8 is performed as a process of acquiring target identification information (step S103) in the process shown in FIG. 3 executed in the second embodiment.
  • Step S241 The first acquisition unit 151 reads the acquired target shape information.
  • Step S242 The first acquisition unit 151 calculates the curvature at each vertex of the read target shape information.
  • Step S243 The first acquisition unit 151 acquires the set threshold value of the curvature.
  • Step S244 The first acquisition unit 151 calculates a dividing line that divides the area of the point cloud based on the acquired threshold value. Further, the first acquisition unit 151 accepts an adjustment operation from the user regarding the dividing line, and recalculates the dividing line based on the adjustment operation.
  • Step S245 The first acquisition unit 151 divides the area based on the calculated dividing line. As a result, the area belonging to each tooth and the area belonging to the gums and the like are partitioned.
  • Step S246 The first acquisition unit 151 accepts the area integration operation by the user and reflects it in the point cloud section.
  • Step S247 The first acquisition unit 151 accepts a labeling operation by the user. Thereby, which tooth each region belongs to is associated.
  • Step S248 The first acquisition unit 151 acquires the target identification information based on the acceptance result of the labeling operation. As a result, the labeling content and the target shape information are acquired in association with each other.
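  • The data-handling side of steps S246 to S248 could look like the sketch below (the GUI itself is omitted). The region labels are those produced by the curvature-based segmentation sketched earlier; the function names and the use of None for unlabeled regions are assumptions.

```python
import numpy as np

def merge_regions(labels, region_ids):
    """Step S246: merge the user-selected regions into one by reusing the
    smallest of the selected region ids."""
    labels = np.asarray(labels).copy()
    target = min(region_ids)
    labels[np.isin(labels, list(region_ids))] = target
    return labels

def apply_labeling(labels, region_to_tooth):
    """Steps S247-S248: region_to_tooth maps a region id to the tooth identifier
    chosen by the user.  The result is the target identification information:
    a tooth identifier per point (None for gums or unlabeled regions)."""
    return [region_to_tooth.get(int(r)) for r in labels]
```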
  • FIG. 9 is a diagram illustrating a specific example of an annotation tool that can be used by a user in the information processing system 100.
  • FIG. 9 shows, for example, an example of an operation screen of the annotation tool displayed on the display device of the information processing system 100.
  • The operation screen includes the display data G1, which shows the read target shape information in a 3D space, and the operation column G2, which includes two or more buttons associated with various commands for operating the data.
  • The user can manipulate the data included in the target shape information, or adjust the dividing lines, by operating the buttons in the operation column G2 or by performing an operation of partially selecting data in the display data G1.
  • the display mode of the display data G1 is appropriately changed according to the operation performed by the user, the operation applied to the target shape information, and the like. As a result, the user's attention can be focused on the portion where the display mode has been changed, and the user can be made aware that the operation has been reflected. Therefore, the user can perform an intuitive operation.
  • the operation column G2 includes a slider bar G3 for changing a threshold value related to curvature.
  • the user can intuitively adjust the threshold value by adjusting the position of the slider on the slider bar G3.
  • the operation column G2 also includes a merge button G4 for joining regions.
  • The user can combine two or more selected areas into one area by operating the merge button G4 after performing an operation of selecting two or more areas included in the display data G1.
  • the combined region may be decomposed.
  • the operation screen includes a label selection tray G11 and a label assignment button G12.
  • the label selection tray G11 includes, for example, two or more label selection buttons corresponding to each tooth identifier that can be used for labeling by the user. For example, the user performs an operation of selecting an area included in the display data G1 and then an operation of selecting a label selection button on the label selection tray G11. Then, by operating the label assignment button G12 in that state, a tooth identifier corresponding to the selected label selection button can be assigned (labeled) to the selected area.
  • the operation screen of the annotation tool is not limited to this, and can be set as appropriate.
  • In this way, the annotation tool can be used to accept the user's operations related to point cloud classification and labeling. Therefore, even when processing target shape information including a plurality of connected teeth, which conventionally took the user a long time to process, the operation of specifying the shape of each tooth can be performed easily.
  • the user can easily perform an operation of associating a tooth identifier that identifies the tooth with a specified area for each tooth. By accepting the user's operation using the GUI in this way, the user can perform the operation more intuitively.
  • The software that realizes the annotation tool in the second embodiment is the following program. That is, this program is a program for processing target shape information indicating a point cloud representing the shape of at least a part of the oral cavity, and causes the computer of the information processing system 100 to estimate, based on the relationship between each point included in the point cloud indicated by the target shape information and its surrounding points, the areas each including the point cloud corresponding to one tooth, and to display, on the display, the point cloud included in an estimated area and the point cloud not included in the area in mutually different display modes.
  • Further, this program causes the computer to acquire information input by the user (such as a selected label), to specify the area including the point cloud corresponding to each tooth based on the information input by the user, and to acquire, for each tooth, target identification information in which the tooth identifier identifying the tooth is associated with the specified area.
  • The annotation tool may also be able to accept area integration (merge) operations.
  • The annotation tool according to the second embodiment is not limited to use in the information processing system 100, which has a function of generating restoration information using a learner; it may also be usable in various other devices that perform processing related to 3D data such as point clouds (including meshed data). In this case, the correspondence between the point cloud included in each area and the identifier associated with the area can be output and used in those other devices.
  • FIG. 10 is an overview view of the computer system 800 according to the above embodiment.
  • FIG. 11 is a block diagram of the computer system 800.
  • the computer system 800 includes a computer 801 including a CD-ROM drive, a keyboard 802, a mouse 803, and a monitor 804.
  • In addition to the CD-ROM drive 8012, the computer 801 includes an MPU 8013, a bus 8014 connected to the CD-ROM drive 8012, a ROM 8015 for storing programs such as the boot-up program, a RAM 8016 connected to the MPU 8013 for temporarily storing program instructions and providing a temporary storage space, and a hard disk 8017 for storing application programs, system programs, and data.
  • the computer 801 may further include a network card that provides a connection to the LAN.
  • the program that causes the computer system 800 to execute the functions of the information processing system and the like according to the above-described embodiment may be stored in the CD-ROM 8101, inserted into the CD-ROM drive 8012, and further transferred to the hard disk 8017.
  • the program may be transmitted to the computer 801 via a network (not shown) and stored on the hard disk 8017.
  • the program is loaded into RAM 8016 at run time.
  • the program may be loaded directly from the CD-ROM 8101 or the network.
  • the program does not necessarily have to include an operating system (OS) or a third-party program that causes the computer 801 to execute functions such as the information processing system of the above-described embodiment.
  • The program need only include the portion of instructions that calls appropriate functions (modules) in a controlled manner so as to obtain the desired result. How the computer system 800 works is well known, and a detailed description thereof will be omitted.
  • Note that the program does not include processing performed only by hardware, for example, processing performed by a modem or an interface card in a transmission step.
  • the two or more components existing in one device may be physically realized by one medium.
  • Each process may be realized by centralized processing by a single device (system), or may be realized by distributed processing by a plurality of devices.
  • the distributed processing is performed by a plurality of devices, it is also possible to grasp the entire system composed of the plurality of devices performing the distributed processing as one "device".
  • the information processing system acquires 3D data representing the shape of the dental restoration by using the acquired restoration information, but the present invention is not limited to this.
  • the information processing system acquires restoration information for representing the shape of the dental restoration, and outputs the restoration information and the information generated based on the restoration information to an external device or the like as information for producing the dental restoration. You may try to do it.
  • In that case, for example, a dental CAD/CAM system or the like may acquire 3D data representing the shape of the dental restoration, or 3D data for modeling the dental restoration, based on the restoration information and the information generated based on the restoration information.
  • The transfer of information performed between components may be carried out, for example, when the two components exchanging the information are physically different, by one component outputting the information and the other component accepting it; when the two components exchanging the information are physically the same, it may be carried out by moving from the processing phase corresponding to one component to the processing phase corresponding to the other component.
  • Information related to the processing executed by each component, for example, information received, acquired, selected, generated, transmitted, or received by each component, as well as information such as threshold values, mathematical formulas, and addresses used by each component in processing, may be held temporarily or for a long period of time in a recording medium (not shown), even if this is not specified in the above description.
  • each component or a storage unit may store information on a recording medium (not shown).
  • each component or a reading unit may read the information from the recording medium (not shown).
  • The information used in each component and the like, for example, information such as threshold values and addresses used in the processing of each component and various setting values, may be changeable by the user; even where this is not specified in the above description, the user may or may not be allowed to change such information as appropriate.
  • When the user is allowed to change such information, the change may be realized by, for example, a reception unit (not shown) that receives a change instruction from the user and a change unit (not shown) that changes the information in response to the change instruction.
  • The reception unit may accept the change instruction, for example, from an input device, as information transmitted via a communication line, or as information read from a predetermined recording medium.
  • An embodiment may be configured by appropriately combining the above-mentioned plurality of embodiments.
  • each component of any of the above embodiments may be optionally replaced or combined with a component of another embodiment.
  • some components and functions may be omitted from the above-described embodiments.
  • the learning device is a learning device obtained by machine learning, but the learning device is not limited to this.
  • The learner may be, for example, a table showing the correspondence between input vectors based on input information indicating the shapes of two or more teeth including the attention tooth and the restoration information applied to the attention tooth.
  • the second acquisition unit may acquire the restoration information corresponding to the feature vector based on the target shape information from the table.
  • Alternatively, the second acquisition unit may generate a vector that approximates the feature vector based on the target shape information by using two or more input vectors in the table together with parameters that weight each input vector, and may acquire the restoration information applied to the attention tooth by using those parameters and the restoration information corresponding to each input vector used for the generation, as in the sketch below.
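  • A minimal sketch of that table-based variant is shown below, using a least-squares fit to obtain the weighting parameters; the patent does not specify how the weights are computed, so this particular choice and the function name are assumptions.

```python
import numpy as np

def restoration_from_table(query, input_vectors, restoration_vectors):
    """query: (d,) feature vector from the target shape information.
    input_vectors: (k, d) rows of the table; restoration_vectors: (k, m) rows.
    The query is approximated as a weighted combination of the stored input
    vectors, and the same weights are applied to the stored restoration rows."""
    X = np.asarray(input_vectors, dtype=float)
    w, *_ = np.linalg.lstsq(X.T, np.asarray(query, dtype=float), rcond=None)
    return w @ np.asarray(restoration_vectors, dtype=float)
```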
  • the learner has a relationship between, for example, an input vector based on input information indicating the shape of two or more teeth including the tooth of interest and information for generating restoration information applied to the tooth of interest. It may be a function representing.
  • the second acquisition unit may, for example, obtain information corresponding to the feature vector based on the target shape information by a function, and acquire the restoration information using the obtained information.
  • the information processing system according to the present invention has the effect of being able to easily output 3D data representing the shape of the dental restoration, and is useful as an information processing system or the like.
  • 100 Information processing system
  • 110 Storage unit
  • 111 Learning device storage unit
  • 112 Reference shape information storage unit
  • 120 Reception unit (receiving unit)
  • 130 Reception unit (accepting unit)
  • 140 Processing unit
  • 141 Learning unit
  • 151 First acquisition unit
  • 152 Second acquisition unit (example of a restoration information acquisition unit)
  • 160 Output unit
  • 170 Transmission unit
  • 900 Dental restoration production system
  • 910 Dental scanning system
  • 920 Modeling device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Epidemiology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

[Problem] Conventional information processing systems have had a problem in that producing a dental restoration is very difficult, and takes time and effort. [Solution] An information processing system 100 comprises: a learning device storage unit 111 storing a learning device for acquiring restoration information for representing the shape of a dental restoration corresponding to certain teeth, obtained by using dentition shape information including the shapes of two or more teeth that are in mutual proximity; a first acquisition unit 151 for acquiring subject shape information including the shapes of one or more proximity teeth that are in proximity to a tooth of interest, and subject identification information for identifying a tooth the shape of which is included in the subject shape information on the basis of the subject shape information; and a second acquisition unit 152 for acquiring restoration information for representing the shape of a dental restoration corresponding to the tooth of interest from the subject identification information and the subject shape information by using the learning device. The information processing system makes it easy to acquire restoration information.

Description

情報処理システム、情報処理方法、及びプログラムInformation processing system, information processing method, and program
 本発明は、修復対象である1以上の歯に用いる歯科修復物を生産するための情報を出力する情報処理システム、情報処理方法、プログラムに関するものである。 The present invention relates to an information processing system, an information processing method, and a program that output information for producing a dental restoration used for one or more teeth to be restored.
 従来、歯科分野において患者に用いられる差し歯やクラウンなどの歯科修復物は、歯科技工士の手作業により、各患者に合わせて作成されている。また、近年では、CAD/CAM技術が歯科分野においても用いられるようになっている。すなわち、CADを用いて歯科技工士がモデリングしたデータを用いて、3Dプリンタやミリングマシンなどにより歯科修復物を生産することが行われている(例えば、特許文献1参照)。 Conventionally, dental restorations such as insert teeth and crowns used for patients in the dental field are manually created by dental technicians for each patient. In recent years, CAD / CAM technology has also come to be used in the dental field. That is, dental restorations are produced by a 3D printer, a milling machine, or the like using data modeled by a dental technician using CAD (see, for example, Patent Document 1).
 なお、上述のような歯のスキャンデータとして得られた点群について、歯毎に点群を区画するセグメンテーションを行う技術が公開されている(例えば、非特許文献1参照)。 It should be noted that a technique for segmenting the point cloud obtained as the above-mentioned tooth scan data for each tooth has been published (see, for example, Non-Patent Document 1).
特開2000-107203号公報Japanese Unexamined Patent Publication No. 2000-107203
 ところで、従来のようにして歯科修復物を生産する作業は、難易度が高く、また、手間がかかるという問題がある。すなわち、歯科技工士等の作業者が手作業で生産する場合もCADを用いて行うモデリングを行う場合も、ひとつひとつの歯科修復物の形状を患者に合わせて調整することが必要とされる。そのため、作業者に熟練した技術や知識が必要とされ、また、作業者が行う作業にも手間がかかる。 By the way, the work of producing a dental restoration as in the past has a problem that it is difficult and time-consuming. That is, it is necessary to adjust the shape of each dental restoration according to the patient, whether it is produced manually by a worker such as a dental technician or modeling is performed using CAD. Therefore, the worker is required to have skillful skills and knowledge, and the work performed by the worker is also troublesome.
 本第一の発明の情報処理システムは、修復対象である1以上の注目歯に用いる歯科修復物を生産するための情報を出力する情報処理システムであって、互いに近接する2以上の歯の形状を含む歯列形状情報を用いて得られた、2以上の歯のうち一部の歯に対応する歯科修復物の形状を表すための修復物情報を取得するための学習器が格納される学習器格納部と、注目歯に近接する1以上の近接歯の形状とを含む対象形状情報と、対象形状情報について対象形状情報にその形状が含まれている歯を識別する対象識別情報とを取得する第一取得部と、学習器を用いて、第一取得部が取得した対象識別情報及び対象形状情報から注目歯に対応する歯科修復物の形状を表すための修復物情報を取得する第二取得部とを備える、情報処理システムである。 The information processing system of the first invention is an information processing system that outputs information for producing a dental restoration used for one or more attention teeth to be restored, and is a shape of two or more teeth adjacent to each other. Learning that stores a learning device for acquiring restoration information for representing the shape of a dental restoration corresponding to a part of two or more teeth obtained by using the dentition shape information including Acquires target shape information including the container storage unit and the shape of one or more adjacent teeth close to the tooth of interest, and target identification information for identifying the tooth whose shape is included in the target shape information. The first acquisition unit and the learning device are used to acquire the restoration information for representing the shape of the dental restoration corresponding to the tooth of interest from the target identification information and the target shape information acquired by the first acquisition unit. It is an information processing system equipped with an acquisition unit.
 かかる構成により、容易に、歯科修復物の形状を表すための修復物情報を取得することができる。 With such a configuration, it is possible to easily obtain restoration information for representing the shape of the dental restoration.
 また、本第二の発明の情報処理システムは、第一の発明に対して、注目歯に対応する、予め用意された基準形状を示す基準形状情報が格納される基準形状情報格納部を備え、修復物情報は、基準形状と注目歯に対応する歯科修復物の形状との形状の差分に対応する情報であり、第二取得部は、修復物情報と基準形状情報とを用いて、歯科修復物の形状を表す3Dデータを取得する、情報処理システムである。 Further, the information processing system of the second invention includes, for the first invention, a reference shape information storage unit for storing reference shape information indicating a reference shape prepared in advance corresponding to the tooth of interest. The restoration information is information corresponding to the difference in shape between the reference shape and the shape of the dental restoration corresponding to the tooth of interest, and the second acquisition unit uses the restoration information and the reference shape information to perform dental restoration. It is an information processing system that acquires 3D data representing the shape of an object.
 かかる構成により、より高精度な、歯科修復物の形状を表す3Dデータを取得することができる。 With such a configuration, it is possible to acquire more accurate 3D data representing the shape of the dental restoration.
 また、本第三の発明の情報処理システムは、第一又は二の発明に対して、学習器は、2以上の学習対象情報を用いて行われる機械学習により得られた情報であり、学習対象情報は、歯列形状情報と、歯列形状情報に含まれる2以上の歯のそれぞれを識別する歯列識別情報と、2以上の歯のうち一部の歯に適用される歯科修復物の形状を表すための修復物情報とを含む、情報処理システムである。 Further, in the information processing system of the third invention, the learning device is information obtained by machine learning performed by using two or more learning target information with respect to the first or second invention, and is a learning target. The information includes dentition shape information, dentition identification information that identifies each of the two or more teeth included in the dentition shape information, and the shape of the dental restoration applied to some of the two or more teeth. It is an information processing system that includes restoration information for representing.
 かかる構成により、機械学習により高精度な修復物情報を取得することができる。 With such a configuration, it is possible to acquire highly accurate restoration information by machine learning.
 また、本第四の発明の情報処理システムは、第三の発明に対して、学習器格納部には、1以上の注目歯に対応する歯科修復物の形状を表すための修復物情報と、歯列識別情報と、注目歯に近接する2以上の歯の形状を含む歯列形状情報とを含む2以上の学習対象情報を用いた機械学習により得られた学習器が、注目歯毎に格納されており、第二取得部は、第一取得部が取得した対象識別情報に対応する学習器を用いて、第一取得部が取得した対象識別情報及び対象形状情報から機械学習の手法を利用して修復物情報を取得する、情報処理システムである。 Further, in the information processing system of the fourth invention, in contrast to the third invention, the learning device storage unit is provided with restoration information for expressing the shape of the dental restoration corresponding to one or more attention teeth. A learning device obtained by machine learning using two or more learning target information including dentition identification information and dentition shape information including two or more tooth shapes close to the attention tooth is stored for each attention tooth. The second acquisition unit uses a learning device corresponding to the target identification information acquired by the first acquisition unit, and uses a machine learning method from the target identification information and the target shape information acquired by the first acquisition unit. It is an information processing system that acquires restoration information.
 かかる構成により、より高精度な修復物情報を取得することができる。 With such a configuration, it is possible to acquire more accurate restoration information.
 また、本第五の発明の情報処理システムは、第三又は四の発明に対して、第二取得部は、第一取得部が取得した対象識別情報及び対象形状情報に基づいて多次元のベクトルを表す第一ベクトル情報を生成し、生成した第一ベクトル情報に対して次元削減(次元圧縮)を行うことにより第一ベクトル情報により表されるベクトルよりも低次元の特徴ベクトルを表す第二ベクトル情報を生成し、生成した第二ベクトル情報を用いて修復物情報を取得する、情報処理システムである。 Further, in the information processing system of the fifth invention, with respect to the third or fourth invention, the second acquisition unit is a multidimensional vector based on the target identification information and the target shape information acquired by the first acquisition unit. By generating the first vector information representing the above and performing dimensional reduction (dimensional compression) on the generated first vector information, the second vector representing a feature vector having a lower dimension than the vector represented by the first vector information. It is an information processing system that generates information and acquires restoration information using the generated second vector information.
 かかる構成により、より高速に修復物情報を取得することができる。 With such a configuration, restoration information can be acquired at a higher speed.
 また、本第六の発明の情報処理システムは、第三から五のいずれかの発明に対して、 学習部は、歯科修復物と周囲の歯との間の干渉の評価を含む、形状に関する評価に基づいて、ペナルティ項を含むコスト関数のパラメータが調整されたものである。 Further, in the information processing system of the sixth invention, for any of the third to fifth inventions, the learning unit evaluates the shape including the evaluation of the interference between the dental restoration and the surrounding teeth. The parameters of the cost function, including the penalty term, have been adjusted based on.
 かかる構成により、歯科修復物と周囲の歯との干渉に関するルールを設定することができる。 With such a configuration, it is possible to set rules regarding interference between the dental restoration and the surrounding teeth.
 また、本第七の発明の情報処理システムは、第六の発明に対して、第二取得部が3Dデータを取得した場合に、学習部は、第一取得部が取得した対象形状情報と、第二取得部が取得した修復物情報又は3Dデータとに基づいて、評価を行い、その評価結果に基づいてペナルティ項のパラメータを変更する。 Further, in the information processing system of the seventh invention, when the second acquisition unit acquires 3D data with respect to the sixth invention, the learning unit receives the target shape information acquired by the first acquisition unit. Evaluation is performed based on the restoration information or 3D data acquired by the second acquisition unit, and the parameter of the penalty term is changed based on the evaluation result.
 かかる構成により、歯科修復物と周囲の歯との干渉に関するルールに応じた修復物情報を取得できるようにすることができる。 With such a configuration, it is possible to acquire restoration information according to the rules regarding interference between the dental restoration and the surrounding teeth.
 また、本第八の発明の情報処理システムは、第一から七のいずれかの発明に対して、歯列形状情報は、歯科修復物に対応する歯の形状及びその歯に隣り合う全ての歯の形状を含む、口腔内の少なくとも一部の形状を表す情報であり、対象形状情報は、注目歯の形状及びその注目歯に近接する近接歯の形状を含む、口腔内の少なくとも一部の形状を表す情報である、情報処理システムである。 Further, in the information processing system of the eighth invention, for any one of the first to seventh inventions, the tooth row shape information is the tooth shape corresponding to the dental restoration and all the teeth adjacent to the tooth. Information representing at least a part of the shape in the oral cavity including the shape of the tooth, and the target shape information is the shape of at least a part of the oral cavity including the shape of the tooth of interest and the shape of a adjacent tooth close to the tooth of interest. It is an information processing system that is information representing.
 かかる構成により、より高精度な修復物情報を取得することができる。 With such a configuration, it is possible to acquire more accurate restoration information.
 また、本第九の発明の情報処理システムは、第一から八のいずれかの発明に対して、対象形状情報は、口腔内の少なくとも一部分の形状を表す点群を示す情報であり、第一取得部は、対象形状情報が示す点群のうち各歯に対応する点群が含まれる領域を特定し、歯毎に歯を識別する歯識別子と特定した領域とを対応付けた対象識別情報を取得する、情報処理システムである。 Further, in the information processing system of the ninth invention, with respect to any of the first to eighth inventions, the target shape information is information indicating a point group representing the shape of at least a part of the oral cavity, and the first The acquisition unit identifies the area including the point group corresponding to each tooth among the point groups indicated by the target shape information, and obtains the target identification information in which the tooth identifier that identifies the tooth is associated with the specified area for each tooth. It is an information processing system to acquire.
 かかる構成により、歯毎の形状を特定する対象識別情報を用いて、より高精度な修復物情報を取得することができる。 With such a configuration, it is possible to acquire more accurate restoration information by using the target identification information that identifies the shape of each tooth.
 また、本第十の発明の情報処理システムは、第九の発明に対して、第一取得部は、対象形状情報が示す点群のうち各歯に対応する点群が含まれる領域を、点群に含まれる各点とその周囲の点との関係に基づいて推定し、推定した領域に含まれる点群と領域に含まれない点群とを、互いに異なる表示態様でディスプレイに表示するとともにユーザにより入力される表示態様に関する情報を取得し、ユーザにより入力された情報に基づいて、各歯に対応する点群が含まれる領域を特定する、情報処理システムである。 Further, in the information processing system of the tenth invention, with respect to the ninth invention, the first acquisition unit sets a point group including a point group corresponding to each tooth among the point groups indicated by the target shape information. Estimated based on the relationship between each point included in the group and the points around it, the point group included in the estimated area and the point group not included in the area are displayed on the display in different display modes and the user. This is an information processing system that acquires information on the display mode input by the user and identifies an area including a point group corresponding to each tooth based on the information input by the user.
 かかる構成により、ユーザが、容易に、歯毎の形状を特定する操作を行うことができる。 With such a configuration, the user can easily perform an operation of specifying the shape of each tooth.
 また、本第十一の発明の情報処理システムは、第十の発明に対して、第一取得部は、特定した各領域に含まれる点群をディスプレイに表示するとともに、各領域に含まれる点群についてユーザにより入力されるラベル付け情報を取得し、ユーザにより入力された情報に基づいて対象識別情報を取得する、情報処理システムである。 Further, in the information processing system of the eleventh invention, with respect to the tenth invention, the first acquisition unit displays a group of points included in each specified area on a display and points included in each area. It is an information processing system that acquires labeling information input by a user for a group and acquires target identification information based on the information input by the user.
 かかる構成により、ユーザが、容易に、歯毎に当該歯を識別する歯識別子と特定した領域とを対応付ける操作を行うことができる。 With such a configuration, the user can easily perform an operation of associating a tooth identifier that identifies the tooth with a specified area for each tooth.
 また、本第十二の発明の情報処理システムは、修復対象である1以上の注目歯に用いる歯科修復物を生産するための情報を出力する情報処理システムであって、任意に選択された選択歯に近接する複数の歯のそれぞれの形状を示す情報から得られた複数のベクトル情報と、選択歯の形状を示す情報から得られた出力情報とにより調整された学習器が格納される学習器格納部と、学習器を用いて、注目歯に隣り合う複数の歯のそれぞれの形状を示す情報から得られた複数のベクトル情報に基づいて注目歯に対応する出力情報を取得し、取得した出力情報から注目歯に対応する歯科修復物の形状を表すための修復物情報を取得する修復物情報取得部と、を備える、情報処理システムである。 Further, the information processing system of the twelfth invention is an information processing system that outputs information for producing a dental restoration used for one or more attention teeth to be restored, and is an arbitrarily selected selection. A learner that stores a learner adjusted by a plurality of vector information obtained from information indicating the shape of each of a plurality of teeth adjacent to the tooth and output information obtained from information indicating the shape of the selected tooth. Using the storage unit and the learner, the output information corresponding to the attention tooth is acquired based on the plurality of vector information obtained from the information indicating the shapes of the plurality of teeth adjacent to the attention tooth, and the acquired output is obtained. It is an information processing system including a restoration information acquisition unit that acquires restoration information for expressing the shape of a dental restoration corresponding to a tooth of interest from information.
 かかる構成により、容易に、歯科修復物の形状を表すための修復物情報を取得することができる。 With such a configuration, it is possible to easily obtain restoration information for representing the shape of the dental restoration.
 本発明による情報処理システムによれば、容易に、修復物情報を取得することができる。 According to the information processing system according to the present invention, restoration information can be easily acquired.
実施の形態1に係る歯科修復物生産システムの概要を示す図The figure which shows the outline of the dental restoration production system which concerns on Embodiment 1. 同情報処理システムの構成を示す図Diagram showing the configuration of the information processing system 同情報処理システムの動作の一例を示すフローチャートFlow chart showing an example of the operation of the information processing system 同情報処理システムが行う学習器取得蓄積処理の一例を示すフローチャートFlow chart showing an example of learning device acquisition and storage processing performed by the information processing system 同情報処理システムが行う識別情報取得処理の一例を示すフローチャートFlow chart showing an example of identification information acquisition processing performed by the information processing system 同情報処理システムが行う出力情報取得処理の一例を示すフローチャートFlow chart showing an example of output information acquisition processing performed by the information processing system 同第二取得部により取得される歯科修復物の形状を示す3Dデータの具体例について説明する図The figure explaining the specific example of the 3D data which shows the shape of the dental restoration acquired by the 2nd acquisition part. 実施の形態2に係る第一取得部が行う動作の一例を示す図The figure which shows an example of the operation performed by the 1st acquisition part which concerns on Embodiment 2. 同情報処理システムにおいてユーザが使用することができるアノテーションツールの具体例について説明する図A diagram illustrating a specific example of an annotation tool that can be used by a user in the information processing system. 上記実施の形態におけるコンピュータシステムの概観図Overview of the computer system according to the above embodiment 同コンピュータシステムのブロック図Block diagram of the computer system
 以下、情報処理システム等の実施形態について図面を参照して説明する。なお、実施の形態において同じ符号を付した構成要素は同様の動作を行うので、再度の説明を省略する場合がある。 Hereinafter, embodiments of the information processing system and the like will be described with reference to the drawings. In addition, since the components with the same reference numerals perform the same operation in the embodiment, the description may be omitted again.
 なお、以下において用いる用語は、一般的には次のように定義される。なお、これらの用語の語義は常にここに示されるように解釈されるべきではなく、例えば以下において個別に説明されている場合にはその説明も踏まえて解釈されるべきである。 The terms used below are generally defined as follows. It should be noted that the meanings of these terms should not always be interpreted as shown here, and should be interpreted based on the explanations, for example, when they are explained individually below.
 歯科修復物とは、歯科の治療において口腔内に配置するための補修物や補綴物であり、例えば、クラウン(差し歯を含み、歯科用インプラントに装着される人工歯も含む)、インレー、ブリッジなどである。その他の義歯等の補綴物が含まれると解釈してもよい。 A dental restore is a repair or prosthesis to be placed in the oral cavity in dental treatment, for example, a crown (including an insert tooth and an artificial tooth attached to a dental implant), an inlay, a bridge, and the like. Is. It may be interpreted that other prostheses such as dentures are included.
 注目歯とは、歯科修復物を用いた治療の対象となる歯である。注目歯は、治療される患者において現存する歯であるかどうかは問わず、歯列中において完全に欠損している歯であってもよい。また、注目歯は、人工的に造形された歯や歯の一部分であってもよい。注目歯は、情報処理システムを用いる目的等に応じて、任意に選択可能であるが、これに限られない。 A tooth of interest is a tooth that is the target of treatment using a dental restoration. The tooth of interest may be a completely missing tooth in the dentition, whether or not it is an existing tooth in the patient being treated. Further, the tooth of interest may be an artificially shaped tooth or a part of the tooth. The tooth of interest can be arbitrarily selected according to the purpose of using the information processing system, etc., but is not limited to this.
 3Dデータとは、3次元の形状を表す情報であり、例えば、点群(メッシュ化されたものであってもよい)、線、面、ボクセルなどを表す情報で構成されるものである。3Dデータは、特定のCADで用いられる形式の情報であったり、各種CADで利用可能な汎用の中間ファイル形式の情報であったりしてもよい。 The 3D data is information representing a three-dimensional shape, and is composed of information representing, for example, a point cloud (which may be meshed), a line, a surface, a voxel, and the like. The 3D data may be information in a format used in a specific CAD, or information in a general-purpose intermediate file format that can be used in various CADs.
 ある事項について識別子とは、当該事項を一意に示す文字又は符号等である。符号とは、例えば英数字やその他記号等であるが、これに限られない。識別子は、例えば、それ自体が特定の意味を示すものではない符号列であるが、対応する事項を識別しうる情報であれば種類は問わない。すなわち、識別子は、それが示すものそのものの名前であってもよいし、一意に対応するように符号を組み合わせたものであってもよい。 For a certain item, the identifier is a character or code that uniquely indicates the item. The code is, for example, alphanumeric characters or other symbols, but is not limited to this. The identifier is, for example, a code string that does not have a specific meaning by itself, but any kind of information can be used as long as it can identify the corresponding item. That is, the identifier may be the name of what it indicates, or it may be a combination of codes so as to uniquely correspond to each other.
 歯識別子とは、例えば、患者の歯を一意に特定するものである。例えば、いわゆるユニバーサルシステムと呼ばれる歯式表記法により表される番号を歯識別子として用いることができるが、これに限られない。 The tooth identifier uniquely identifies the patient's tooth, for example. For example, a number represented by the so-called universal system, which is represented by the dental notation, can be used as the tooth identifier, but the present invention is not limited to this.
 取得とは、ユーザ等により入力された事項を取得することを含んでいてもよいし、自装置又は他の装置に記憶されている情報(予め記憶されている情報であってもよいし当該装置において情報処理が行われることにより生成された情報であってもよい)を取得することを含んでいてもよい。他の装置に記憶されている情報を取得するとは、他の装置に記憶されている情報をAPI経由などで取得することを含んでいてもよいし、他の装置により提供されている文書ファイルの内容(ウェブページの内容なども含む)を取得することを含んでいてもよい。 The acquisition may include acquiring the matters input by the user or the like, or the information stored in the own device or another device (the information may be stored in advance, or the device concerned). It may include acquiring information (which may be information generated by information processing performed in). Acquiring the information stored in the other device may include acquiring the information stored in the other device via API or the like, or the document file provided by the other device. It may include acquiring the content (including the content of the web page).
 情報を出力するとは、ディスプレイへの表示、プロジェクタを用いた投影、プリンタでの印字、音出力、外部の装置への送信、記録媒体への蓄積、他の処理装置や他のプログラムなどへの処理結果の引渡しなどを含む概念である。具体的には、例えば、情報のウェブページへの表示を可能とすることや、電子メール等として送信することや、印刷するための情報を出力することなどを含む。 To output information means to display on a display, project using a projector, print with a printer, output sound, transmit to an external device, store in a recording medium, process to another processing device or other program. It is a concept that includes delivery of results. Specifically, for example, it includes enabling information to be displayed on a web page, transmitting it as an e-mail or the like, and outputting information for printing.
 情報の受け付けとは、キーボードやマウス、タッチパネルなどの入力デバイスから入力された情報の受け付け、他の装置等から有線もしくは無線の通信回線を介して送信された情報の受信、光ディスクや磁気ディスク、半導体メモリなどの記録媒体から読み出された情報の受け付けなどを含む概念である。 Information reception means receiving information input from input devices such as keyboards, mice, and touch panels, receiving information transmitted from other devices via wired or wireless communication lines, optical disks, magnetic disks, and semiconductors. It is a concept including acceptance of information read from a recording medium such as a memory.
 情報処理システム等に格納されている各種の情報について、更新とは、格納されている情報の変更のほか、格納されている情報に新たな情報が追加されることや、格納されている情報の一部又は全部が消去されることなどを含む概念である。 Regarding various types of information stored in information processing systems, etc., updating means changing the stored information, adding new information to the stored information, and updating the stored information. It is a concept that includes the fact that part or all of it is erased.
 (実施の形態1) (Embodiment 1)
 以下、実施の形態1に係る情報処理システムを含む歯科修復物生産システムについて説明する。 Hereinafter, the dental restoration production system including the information processing system according to the first embodiment will be described.
 図1は、実施の形態1に係る歯科修復物生産システム900の概要を示す図である。 FIG. 1 is a diagram showing an outline of the dental restoration production system 900 according to the first embodiment.
 図1に示されるように、歯科修復物生産システム900は、情報処理システム100、歯科用スキャンシステム910、及び造形装置920を備える。歯科修復物生産システム900は、修復対象である1以上の注目歯に用いる歯科修復物を生産するために用いられる。 As shown in FIG. 1, the dental restoration production system 900 includes an information processing system 100, a dental scanning system 910, and a modeling device 920. The dental restoration production system 900 is used to produce a dental restoration used for one or more notable teeth to be restored.
 歯科用スキャンシステム910は、例えば、端末装置と、端末装置に接続された歯科用スキャナ等を含む。歯科用スキャンシステム910は、歯科用スキャナを用いて患者の口腔内の形状を表す3Dデータを生成する。歯科用スキャナは、口腔内スキャナであってもよいし、口腔内から採取された型を読み取るスキャナであってもよい。歯科用スキャンシステム910は、生成した3Dデータを、情報処理システム100に送信する。本実施の形態において、情報処理システム100に送信される3Dデータは、点群を示すデータであるが、これに限られない。 The dental scan system 910 includes, for example, a terminal device, a dental scanner connected to the terminal device, and the like. The dental scan system 910 uses a dental scanner to generate 3D data representing the shape of the patient's oral cavity. The dental scanner may be an intraoral scanner or a scanner that reads a mold taken from the oral cavity. The dental scan system 910 transmits the generated 3D data to the information processing system 100. In the present embodiment, the 3D data transmitted to the information processing system 100 is data indicating a point cloud, but is not limited to this.
 情報処理システム100は、歯科用スキャンシステム910から送信された3Dデータを取得し、後述するような処理を行う。本実施の形態において、情報処理システム100は、歯科修復物の形状を示す3Dデータを生成する。情報処理システム100は、ユーザ(例えば、歯科技工士など、歯科修復物の生産を行う作業者をいう)などの編集操作に基づいて、編集操作の内容を反映させた歯科修復物の形状を示す3Dデータを生成してもよい。情報処理システム100は、歯科修復物の形状を示す3Dデータを、造形装置920に出力する。 The information processing system 100 acquires 3D data transmitted from the dental scan system 910 and performs a process as described later. In this embodiment, the information processing system 100 generates 3D data showing the shape of the dental restoration. The information processing system 100 shows the shape of the dental restoration reflecting the content of the editing operation based on the editing operation of the user (for example, a worker who produces the dental restoration such as a dental technician). 3D data may be generated. The information processing system 100 outputs 3D data indicating the shape of the dental restoration to the modeling apparatus 920.
 造形装置920は、3Dデータを用いて3次元形状を有する歯科補修物を造形する装置である。造形装置920は、例えば、公知の、歯科用3Dプリンタやミリングマシンであるが、これに限られない。情報処理システム100が出力した歯科修復物の形状を示す3Dデータに基づいて、歯科修復物を造形する。これにより、ユーザは、歯科修復物生産システム900を利用して、造形装置920により造形された歯科修復物を得ることができる。 The modeling device 920 is a device that models a dental repair object having a three-dimensional shape using 3D data. The modeling apparatus 920 is, for example, a known dental 3D printer or milling machine, but is not limited thereto. The dental restoration is modeled based on the 3D data indicating the shape of the dental restoration output by the information processing system 100. As a result, the user can obtain the dental restoration modeled by the modeling device 920 by utilizing the dental restoration product production system 900.
 なお、本実施の形態において、歯科修復物生産システム900に含まれる各装置同士は、例えば、インターネットやLANなどのネットワークを介して通信可能であるが、これに限られない。例えば、一の装置に他の装置が有線又は無線の通信経路により直接接続されていてもよい。また、歯科修復物生産システム900には、上述の各装置他の装置も含まれていてもよい。 In the present embodiment, the devices included in the dental restoration production system 900 can communicate with each other via a network such as the Internet or a LAN, but the present invention is not limited to this. For example, one device may be directly connected to another device by a wired or wireless communication path. In addition, the dental restoration production system 900 may include each of the above-mentioned devices and other devices.
 また、歯科修復物生産システム900において、情報処理システム100により生成された3Dデータの編集操作は、情報処理システム100とは異なる他の端末装置を用いてユーザが行うことができるようにしてもよい。また、編集操作の内容を反映させた歯科修復物の形状を示す3Dデータや、3Dデータの造形装置920への出力は、情報処理システム100とは異なる他の端末装置により行われるようにしてもよい。 Further, in the dental restoration production system 900, the editing operation of the 3D data generated by the information processing system 100 may be performed by the user using another terminal device different from the information processing system 100. .. Further, even if the 3D data indicating the shape of the dental restoration reflecting the content of the editing operation and the output of the 3D data to the modeling device 920 are performed by another terminal device different from the information processing system 100. good.
 なお、歯科用スキャンシステム910や情報処理システム100に用いられる電子計算機としては、パーソナルコンピュータやサーバ装置などのほか、例えば、いわゆるスマートフォンなどの携帯情報端末装置や、タブレット型の情報端末装置など、種々の装置が用いられうる。以下の例においては、情報処理システム100に用いられる電子計算機として、図示しないキーボードやディスプレイ等を有するいわゆるパーソナルコンピュータが用いられることを想定して説明するが、これに限られるものではない。なお、歯科用スキャンシステム910や情報処理システム100は、1つの装置により構成されていてもよいし、互いに連携して動作する複数の装置により構成されていてもよいし、その他の機器に内蔵された電子計算機等であってもよい。なお、サーバは、いわゆるクラウドサーバでも、ASPサーバ等でもよく、その種類は問わない。 The electronic computers used in the dental scan system 910 and the information processing system 100 include personal computers and server devices, as well as portable information terminal devices such as so-called smartphones and tablet-type information terminal devices. Equipment can be used. In the following examples, it is assumed that a so-called personal computer having a keyboard, a display, or the like (not shown) is used as the electronic computer used in the information processing system 100, but the description is not limited to this. The dental scan system 910 and the information processing system 100 may be configured by one device, may be configured by a plurality of devices that operate in cooperation with each other, or may be built in other devices. It may be an electronic computer or the like. The server may be a so-called cloud server, an ASP server, or the like, and the type thereof does not matter.
 図2は、同情報処理システム100の構成を示す図である。 FIG. 2 is a diagram showing the configuration of the information processing system 100.
 図2に示されるように、情報処理システム100は、格納部110、受信部120、受付部130、処理部140、出力部160、及び送信部170を備える。 As shown in FIG. 2, the information processing system 100 includes a storage unit 110, a reception unit 120, a reception unit 130, a processing unit 140, an output unit 160, and a transmission unit 170.
 格納部110は、学習器格納部111、基準形状情報格納部112、及び学習対象情報格納部113を備える。 The storage unit 110 includes a learning device storage unit 111, a reference shape information storage unit 112, and a learning target information storage unit 113.
 格納部110は、不揮発性の記録媒体が好適であるが、揮発性の記録媒体でも実現可能である。格納部110の各部には、例えば受信部120や処理部140によって取得された情報などがそれぞれ格納されるが、格納部110の各部に情報等が記憶される過程はこれに限られない。例えば、記録媒体を介して情報等が格納部110で記憶されるようになってもよく、通信回線等を介して送信された情報等が格納部110で記憶されるようになってもよく、あるいは、入力デバイスを介して入力された情報等が格納部110で記憶されるようになってもよい。 The storage unit 110 is preferably a non-volatile recording medium, but can also be realized by a volatile recording medium. For example, information acquired by the receiving unit 120 and the processing unit 140 is stored in each unit of the storage unit 110, but the process of storing the information or the like in each unit of the storage unit 110 is not limited to this. For example, information or the like may be stored in the storage unit 110 via a recording medium, or information or the like transmitted via a communication line or the like may be stored in the storage unit 110. Alternatively, the information or the like input via the input device may be stored in the storage unit 110.
 学習器格納部111には、学習器が格納される。本実施の形態において、学習器は、例えば、後述のようにして学習部141の機械学習により得られたものである。本実施の形態において、学習器を、分類器又は学習済モデルと呼んでもよい。学習器は、歯科修復物の形状を表すための修復物情報を取得するために用いられる。学習器やその利用の詳細については、後述する。 The learning device is stored in the learning device storage unit 111. In the present embodiment, the learning device is obtained by machine learning of the learning unit 141 as described later, for example. In this embodiment, the learner may be referred to as a classifier or a trained model. The learner is used to obtain restoration information to represent the shape of the dental restoration. Details of the learner and its use will be described later.
 本実施の形態において、学習器は、1以上の注目歯を識別する歯識別子に対応付けて格納されている。換言すると、学習器は、特定の1以上の注目歯毎に、当該注目歯に対応付けて格納されている。修復対象となる1以上の注目歯についての歯科修復物を生産する際には、歯識別子に基づいて、その注目歯に対応する学習器が用いられる。注目歯毎に学習器が用意されていることにより、注目歯に対応する学習器を用いることによって高精度に歯科修復物の形状を表すための修復物情報を取得することができる。なお、学習器は、歯識別子に対応付けられていなくてもよい。例えば、学習器は、注目歯がどの歯であるかにかかわらずに、2以上の注目歯のいずれかについての修復物情報を取得する際に用いることができるものであってもよい。例えば、学習器は、歯の領域(上顎側、下顎側、左右など)毎に用意された情報であってもよい。 In the present embodiment, the learning device is stored in association with a tooth identifier that identifies one or more teeth of interest. In other words, the learning device is stored in association with the attention tooth for each specific one or more attention teeth. When producing a dental restoration for one or more teeth of interest to be restored, a learning device corresponding to the tooth of interest is used based on the tooth identifier. Since a learning device is prepared for each tooth of interest, it is possible to acquire restoration information for expressing the shape of the dental restoration with high accuracy by using the learning device corresponding to the tooth of interest. The learner does not have to be associated with the tooth identifier. For example, the learner may be one that can be used to obtain restoration information for any of two or more teeth of interest, regardless of which tooth of interest. For example, the learning device may be information prepared for each tooth region (maxillary side, mandibular side, left and right, etc.).
 The reference shape information storage unit 112 stores reference shape information indicating a reference shape prepared in advance for each tooth. The reference shape information of each tooth is stored, for example, in association with a tooth identifier that identifies the corresponding tooth. The reference shape information is, for example, 3D data of the reference shape, but may instead be a set of parameters used to generate 3D data of the reference shape according to a predetermined processing method. The reference shape is a shape that can be used as a template when restoring the tooth of interest. For each tooth, a plurality of pieces of reference shape information may be prepared according to the patient's sex, age, an index representing physique (for example, height or weight), and the like.
 The learning target information storage unit 113 stores two or more pieces of learning target information used for the machine learning performed by the learning unit 141 described later. One piece of learning target information includes, for example, restoration information for representing the shape of a dental restoration applied to one or more teeth of interest, dentition shape information including the shapes of two or more teeth close to each other, and dentition identification information that identifies each of the two or more teeth whose shapes are included in the dentition shape information.
 In the present embodiment, the dentition shape information is information representing the shape of at least a part of the oral cavity, including the shape of the tooth corresponding to the dental restoration and the shapes of all teeth close to that tooth. In the present embodiment, the dentition shape information is, for example, data having the same content as the 3D data transmitted from the dental scanning system 910; whether or not it is the 3D data itself transmitted from the dental scanning system 910 does not matter. The dentition shape information may instead omit the shape of the tooth corresponding to the dental restoration and include only the shapes of the teeth close to that tooth. Note that a "tooth close to" a given tooth means any tooth in contact with that tooth, including both occluding teeth and adjacent teeth.
 The dentition identification information is information that identifies, with respect to the dentition shape information, the teeth whose shapes are included in the dentition shape information. In the present embodiment, the dentition identification information includes, for example, for each tooth whose shape is included in the dentition shape information, information that specifies the portion of the dentition shape information corresponding to that tooth (for example, information specifying the range representing the shape of that tooth) associated with a tooth identifier that identifies that tooth. That is, in one piece of learning target information, based on the dentition shape information and the dentition identification information, the 3D data representing the shape of each tooth included in the dentition shape information can be specified in association with its tooth identifier.
 The data formats of the dentition shape information and the dentition identification information are not limited. The two may be separate pieces of information, or a tooth identifier may be associated with each point of the point cloud included in the dentition shape information so that, for each tooth, the range of points representing that tooth can be specified. For example, the dentition shape information may be information including the coordinates of each point in a state in which the individual points can be identified, and the dentition identification information may be information associating the individual points with tooth identifiers. In this case, information in which the coordinates of each point and a tooth identifier are recorded as a pair (for example, point cloud data labeled with which tooth each point corresponds to) can be regarded as information that is both the dentition shape information and the dentition identification information. Here, a state in which points can be identified may be, for example, a state in which each point carries an identifier as part of its information, or a state in which points can be identified from the order in which their information appears in the dentition shape information. Alternatively, for example, the dentition shape information may be information including the coordinates of a plurality of points, and the dentition identification information may be information associating, for each tooth, information specifying the space in which the point cloud representing that tooth exists with the tooth identifier of that tooth.
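 For illustration only, the following Python sketch shows one possible in-memory layout for a single piece of learning target information: a labeled point cloud (dentition shape information plus dentition identification information) together with a restoration parameter vector. The field names, array shapes, and FDI-style tooth numbers are assumptions made for this sketch and are not taken from the specification.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class LearningTargetRecord:
    points: np.ndarray              # (N, 3) coordinates of the scanned point cloud
    tooth_ids: np.ndarray           # (N,) tooth identifier labeling each point
    target_tooth_id: int            # identifier of the tooth of interest
    restoration_params: np.ndarray  # deformation parameters for that tooth

    def points_of_tooth(self, tooth_id: int) -> np.ndarray:
        """Return the sub-point-cloud belonging to one tooth identifier."""
        return self.points[self.tooth_ids == tooth_id]


rng = np.random.default_rng(0)
record = LearningTargetRecord(
    points=rng.normal(size=(1000, 3)),
    tooth_ids=rng.choice([35, 36, 37, 46], size=1000),
    target_tooth_id=36,
    restoration_params=rng.normal(size=64),
)
print(record.points_of_tooth(36).shape)
```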
 修復物情報は、歯科修復物の形状を表す情報である。本実施の形態において、修復物情報は、例えば、基準形状と一の歯に対応する歯科修復物の形状との形状の差分に対応する情報である。換言すると、本実施の形態において、修復物情報は、一の歯について、基準形状から歯科修復物の形状までの変形量を示すパラメータ群である。すなわち、修復物情報と、一の歯の基準形状情報とに基づいて、当該歯を注目歯としたときの歯科修復物の形状を示す3Dデータを生成することができる。 Restoration information is information that represents the shape of the dental restoration. In the present embodiment, the restoration information is, for example, information corresponding to the difference in shape between the reference shape and the shape of the dental restoration corresponding to one tooth. In other words, in the present embodiment, the restoration information is a parameter group indicating the amount of deformation from the reference shape to the shape of the dental restoration for one tooth. That is, based on the restoration information and the reference shape information of one tooth, it is possible to generate 3D data showing the shape of the dental restoration when the tooth is the tooth of interest.
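 As a purely illustrative sketch of the relationship described above, the following assumes that the restoration information is a flattened per-vertex displacement applied to the vertices of the reference shape (template); the actual parameterization of the deformation is not fixed by the text, so this is only one possible reading.

```python
import numpy as np


def restore_shape(template_vertices: np.ndarray,
                  restoration_params: np.ndarray) -> np.ndarray:
    """Apply hypothetical per-vertex deformation parameters to a reference shape.

    template_vertices: (V, 3) vertices of the reference (template) shape.
    restoration_params: (V * 3,) flattened per-vertex displacement vector.
    Returns the (V, 3) vertices of the resulting restoration shape.
    """
    displacement = restoration_params.reshape(template_vertices.shape)
    return template_vertices + displacement


template = np.zeros((500, 3))                              # placeholder reference shape
params = np.random.default_rng(1).normal(scale=0.1, size=500 * 3)
restored = restore_shape(template, params)                 # 3D data of the restoration
print(restored.shape)
```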
 Note that the restoration information may include only information related to the shape of a part of the dental restoration. For example, the restoration information may include only information related to the shape of the portion of the tooth of interest that faces other teeth (for example, the occluding teeth it meshes with vertically, or the neighboring adjacent teeth). In that case, the shape of the remaining portions of the dental restoration can be, for example, a shape based on the corresponding portions of the reference shape. The restoration information may also be 3D data directly representing the shape of the dental restoration.
 The learning target information is prepared in advance based on, for example, cases treated in the past, and is stored in the learning target information storage unit 113. In the present embodiment, the restoration information included in the learning target information can be obtained based on, for example, the reference shape information stored in the reference shape information storage unit 112 and the 3D data used for fabricating the dental restoration used in the treatment.
 Note that the restoration information used when a dental restoration is produced in the dental restoration production system 900 (which may be information reflecting the result of an editing operation by the user, or information acquired by the second acquisition unit 152 described later) may be stored in the learning target information storage unit 113 in association with the dentition shape information and the dentition identification information used as inputs at that time. Such accumulation processing may be performed by, for example, the processing unit 140.
 受信部120は、他の装置から送信された情報を受信する。受信部120は、受信した情報を、例えば、格納部110に蓄積する。受信部120は、通常、無線又は有線の通信手段で実現されるが、放送を受信する手段で実現されてもよい。 The receiving unit 120 receives the information transmitted from another device. The receiving unit 120 stores the received information in, for example, the storage unit 110. The receiving unit 120 is usually realized by a wireless or wired communication means, but may be realized by a means for receiving a broadcast.
 The reception unit 130 receives various input operations performed by the user on the information processing system 100. The reception unit 130 receives, for example, information input using an input means (not shown) connected to the information processing system 100, or information input by an input operation performed using a reading device (not shown, for example a code reader) connected to the information processing system 100, including information read by such a device. The reception unit 130 may also receive information related to input operations transmitted via another device connected over a network or the like. The received information is stored in, for example, the storage unit 110.
 なお、受付部130により受付可能な情報の入力に用いられうる入力手段は、テンキーやキーボードやマウスやメニュー画面によるものなど、何でもよい。受付部130は、テンキーやキーボード等の入力手段のデバイスドライバーや、メニュー画面の制御ソフトウェア等で実現されうる。 The input means that can be used for inputting information that can be accepted by the reception unit 130 may be any input means such as a numeric keypad, a keyboard, a mouse, or a menu screen. The reception unit 130 can be realized by a device driver for input means such as a numeric keypad or a keyboard, control software for a menu screen, or the like.
 処理部140は、学習部141、第一取得部151、及び第二取得部152を備える。処理部140は、例えば、以下のように処理部140の各部が行う処理など、各種の処理を行う。処理部140は、通常、MPU(CPU及び/又はGPUを含む)やメモリ等から実現されうる。処理部140の処理手順は、通常、ソフトウェアで実現され、当該ソフトウェアはROM等の記録媒体に記録されている。但し、ハードウェア(専用回路)で実現してもよい。 The processing unit 140 includes a learning unit 141, a first acquisition unit 151, and a second acquisition unit 152. The processing unit 140 performs various processes such as, for example, the processes performed by each unit of the processing unit 140 as follows. The processing unit 140 can usually be realized from an MPU (including a CPU and / or a GPU), a memory, or the like. The processing procedure of the processing unit 140 is usually realized by software, and the software is recorded in a recording medium such as a ROM. However, it may be realized by hardware (dedicated circuit).
 The learning unit 141 acquires two or more pieces of learning target information and generates and acquires a learner by performing machine learning using the acquired learning target information. In other words, in the present embodiment, the learner is obtained by machine learning performed using two or more pieces of learning target information. The learner takes as input information based on information indicating the shapes of two or more teeth close to each other, such as the dentition shape information, and information identifying each of those teeth, such as the dentition identification information, and outputs output information corresponding to the restoration information. That is, the learner can be regarded as information for obtaining, using information including the shapes of two or more teeth close to each other, restoration information for representing the shape of a dental restoration applied to some of those teeth.
 In the present embodiment, the learning unit 141 generates a learner (performs learning) using a machine learning method, for example, as follows. That is, for each of the two or more pieces of learning target information, the learning unit 141 generates input information based on the dentition shape information and the dentition identification information. Then, the combinations of input information and output information obtained from the two or more pieces of learning target information are given to a module for constructing a machine learning learner, and the learner is generated and acquired. The learning unit 141 stores the acquired learner in the learner storage unit 111. Any machine learning method applicable to a regression problem that outputs numerical data from numerical data can be used; for example, deep learning (such as deep feed-forward neural networks), random forests, polynomial regression, SGD regression, LASSO regression, and Ridge regression are applicable. Functions of various known machine learning frameworks and various existing libraries can be used for the machine learning.
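 As a minimal sketch of the training step described above, the following fits a multi-output regressor on combinations of input vectors and output vectors. Ridge regression is used only because it is one of the methods listed in the text; the array shapes and values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))   # 200 cases, 32-dim input vectors (second vector information)
Y = rng.normal(size=(200, 16))   # corresponding 16-dim output vectors (third vector information)

learner = Ridge(alpha=1.0)       # multi-output regression is supported directly
learner.fit(X, Y)

new_case = rng.normal(size=(1, 32))
predicted_output = learner.predict(new_case)   # (1, 16) output vector
print(predicted_output.shape)
```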
 In the present embodiment, the machine learning is performed using input information subjected to dimension reduction and output information based on the restoration information. For the dimension reduction, for example, a known principal component analysis (PCA) method can be used, but the method is not limited to this. In the present embodiment, the learning unit 141 generates, for example, based on the dentition identification information and the dentition shape information included in each piece of learning target information, first vector information representing the features of the two or more teeth included in the dentition shape information as a multidimensional vector. Then, by performing dimension reduction on the generated first vector information, it generates second vector information representing a feature vector of lower dimension than the vector represented by the first vector information. In the same way, the learning unit 141 generates third vector information from the restoration information by performing dimension reduction. The learning unit 141 then generates the learner using two or more combinations of second vector information and third vector information as the combinations of input information and output information. Since the learner is generated using information subjected to dimension reduction in this way, the amount of computation required for the machine learning can be reduced, as can the amount of computation required for processing performed using the learner. Note that mesh registration, described later, is performed before the dimension reduction.
 なお、入力情報と出力情報との組み合わせの情報として、第一ベクトル情報と修復物情報を示す特徴ベクトルを表す情報との組み合わせが用いられるようにしてもよい。 Note that the combination of the first vector information and the information representing the feature vector indicating the restoration information may be used as the information of the combination of the input information and the output information.
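 A minimal sketch of the dimension reduction described above, using PCA from scikit-learn: `first_vectors` stands in for the first vector information built from the dentition data and `restoration_vectors` for flattened restoration information. The dimensions are illustrative assumptions, and the inverse transform shown at the end corresponds to the step used later to turn a predicted output vector back into restoration information.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
first_vectors = rng.normal(size=(200, 1500))        # high-dimensional tooth features
restoration_vectors = rng.normal(size=(200, 900))   # high-dimensional restoration features

input_pca = PCA(n_components=32).fit(first_vectors)
output_pca = PCA(n_components=16).fit(restoration_vectors)

second_vectors = input_pca.transform(first_vectors)        # learner inputs
third_vectors = output_pca.transform(restoration_vectors)  # learner outputs

# The inverse transform recovers an approximation of the original vector,
# which is how a predicted output vector is later mapped back to
# restoration information.
approx = output_pca.inverse_transform(third_vectors[:1])
print(second_vectors.shape, third_vectors.shape, approx.shape)
```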
 Here, in the present embodiment, the learning unit 141 performs optimization in the machine learning by adjusting the parameters of a cost function including a penalty term so that an evaluation of the shape, including an evaluation of interference, becomes highest. For example, the learning unit 141 evaluates the interference between the dental restoration and the surrounding teeth based on the learning target information used for learning, and performs the optimization by changing the parameters (function and/or values) of the penalty term based on the evaluation result. The change of the penalty term parameters based on the evaluation result can be set so as to correspond to rules defined for the interference between the dental restoration and the surrounding teeth. Various rules regarding interference with the surrounding teeth can be adopted, for example rules concerning the left-right symmetry of the teeth (such as making the left and right sides symmetric) or rules ensuring that the upper and lower teeth mesh properly. By changing the penalty term parameters according to the evaluation result in a manner corresponding to a predetermined rule, a learner for obtaining restoration information that reflects that rule can be generated.
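 The following is a heavily simplified sketch of a cost function with an interference penalty term, under the assumption that interference can be scored by how far predicted restoration points penetrate a clearance margin around the point clouds of the surrounding teeth; `penalty_weight` plays the role of the penalty-term parameter adjusted on the basis of the evaluation. The function names and the clearance-based measure are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree


def interference_penalty(restoration_pts: np.ndarray,
                         neighbor_pts: np.ndarray,
                         clearance: float = 0.05) -> float:
    """Penalize restoration points closer than `clearance` to the neighboring teeth."""
    dist, _ = cKDTree(neighbor_pts).query(restoration_pts)
    violation = np.clip(clearance - dist, 0.0, None)
    return float(np.sum(violation ** 2))


def cost(predicted: np.ndarray, target: np.ndarray,
         restoration_pts: np.ndarray, neighbor_pts: np.ndarray,
         penalty_weight: float) -> float:
    """Regression error plus a weighted interference penalty."""
    mse = float(np.mean((predicted - target) ** 2))
    return mse + penalty_weight * interference_penalty(restoration_pts, neighbor_pts)


rng = np.random.default_rng(0)
c = cost(rng.normal(size=16), rng.normal(size=16),
         rng.normal(size=(300, 3)), rng.normal(size=(800, 3)),
         penalty_weight=10.0)
print(c)
```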
 第一取得部151は、対象形状情報と、対象形状情報に対応する注目歯及び近接歯(咬合歯及び隣接歯)のそれぞれを識別する対象識別情報とを取得する。 The first acquisition unit 151 acquires the target shape information and the target identification information for identifying each of the attention tooth and the adjacent tooth (occlusal tooth and adjacent tooth) corresponding to the target shape information.
 The target shape information is 3D data transmitted from the dental scanning system 910, and is information including the shape of the tooth of interest and the shapes of one or more teeth close to the tooth of interest. In the present embodiment, the target shape information is information representing a point cloud of the shape of at least a part of the patient's oral cavity; more specifically, it is information including the shape of the tooth of interest and the shapes of all teeth close to the tooth of interest. The target shape information does not have to include the shape of the tooth of interest.
 The target identification information is information that identifies, with respect to the target shape information, the teeth whose shapes are included in the target shape information. In the present embodiment, the target identification information includes, for example, for each tooth whose shape is included in the target shape information, information that specifies the portion of the target shape information corresponding to that tooth (for example, information specifying the range representing the shape of that tooth) associated with a tooth identifier that identifies that tooth. That is, based on the target shape information and the target identification information, the 3D data representing the shape of each tooth included in the target shape information can be specified in association with its tooth identifier. The target identification information can be regarded as information labeling which portion of the target shape information corresponds to which tooth.
 The data formats of the target identification information and the target shape information are not limited. The two may be separate pieces of information, or a tooth identifier may be associated with each point of the point cloud so that, for each tooth, the range of points representing that tooth can be specified. That is, the distinction between the target identification information and the target shape information does not have to be strict: information in which a tooth identifier is associated with each point of the point cloud, or information containing 3D data of tooth shapes divided per tooth (for example, point cloud data labeled with which tooth each point corresponds to), may be interpreted as information combining the target identification information and the target shape information. For example, the target shape information may be information including the coordinates of each point in a state in which the individual points can be identified, and the target identification information may be information associating the individual points with tooth identifiers. In this case, information in which the coordinates of each point and a tooth identifier are recorded as a pair can be regarded as information that is both the target shape information and the target identification information. Here, a state in which points can be identified may be, for example, a state in which each point carries an identifier as part of its information, or a state in which points can be identified from the order in which their information appears in the target shape information. Alternatively, for example, the target shape information may be information including the coordinates of a plurality of points, and the target identification information may be information associating, for each tooth, information specifying the space in which the point cloud representing that tooth exists with the tooth identifier of that tooth. In the present embodiment, the target shape information and the dentition shape information are similar 3D data in that each includes the shape of one tooth and the shapes of the teeth adjacent to it. Likewise, the target identification information and the dentition identification information are similar in that each includes, for the 3D data, the portion (range) representing the shape of each tooth and information identifying which tooth that portion represents. That is, the combination of dentition shape information and dentition identification information and the combination of target shape information and target identification information can both be regarded as information including, for each of a plurality of teeth, the shape of the tooth and information identifying which tooth it is. In other words, among such information, the information used in the processing of the learning unit 141 is called the dentition shape information and dentition identification information, while the information acquired by the first acquisition unit 151 is called the target shape information and target identification information.
 本実施の形態において、第一取得部151は、歯科用スキャンシステム910から取得した対象形状情報に基づいて、対象識別情報を生成し、取得する。第一取得部151は、対象形状情報が示す点群のうち、各歯に対応する点群が含まれる領域を特定する。そして、第一取得部151は、歯毎に、特定した領域と歯を識別する歯識別子とを対応付けて、対象識別情報を生成する。 In the present embodiment, the first acquisition unit 151 generates and acquires the target identification information based on the target shape information acquired from the dental scan system 910. The first acquisition unit 151 specifies a region including the point cloud corresponding to each tooth in the point cloud indicated by the target shape information. Then, the first acquisition unit 151 generates the target identification information by associating the specified region with the tooth identifier that identifies the tooth for each tooth.
 In the present embodiment, the generation of the target identification information by the first acquisition unit 151 may be performed, for example, using a machine learning method as follows. That is, a learner that takes target shape information as input and outputs target identification information for that target shape information (which may be information indicating a labeled point cloud) is constructed in advance using a machine learning method. Specifically, for example, two or more pairs of target shape information and target identification information (for example, pairs of information indicating an unlabeled point cloud and information indicating the corresponding labeled point cloud) are prepared, the two or more pairs are given to a module for constructing a machine learning learner to construct the learner, and the constructed learner is stored in the storage unit 110. In the present embodiment, it is particularly preferable to use, as the machine learning method, a framework capable of segmenting a point cloud and classifying each segment; for example, known frameworks such as "PointNet++" (http://stanford.edu/~rqi/pointnet2/) and "PointNet" (http://stanford.edu/~rqi/pointnet/) can be used.
 Alternatively, the first acquisition unit 151 may be realized by combining a Dynamic Graph Convolutional Neural Network (DGCNN) with Mask R-CNN. The DGCNN separates the gums from the dentition, and Mask R-CNN then identifies each tooth constituting the dentition using images obtained by projecting the dentition from a plurality of viewpoints.
 なお、第一取得部151は、対象形状情報に対応する予め生成された対象識別情報を、外部の装置などから受信することにより取得するように構成されていてもよい。 The first acquisition unit 151 may be configured to acquire the target identification information generated in advance corresponding to the target shape information by receiving it from an external device or the like.
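 As a minimal sketch of the labeling step described above, the following groups per-point tooth labels, such as might be predicted by a segmentation learner of the PointNet-style or DGCNN + Mask R-CNN kind mentioned above, into a mapping from tooth identifier to point indices; the segmentation model itself is assumed to exist and is not implemented here, and the tooth numbers are illustrative.

```python
from typing import Dict
import numpy as np


def build_target_identification(pred_labels: np.ndarray) -> Dict[int, np.ndarray]:
    """Map each tooth identifier to the indices of the points labeled with it."""
    return {int(t): np.flatnonzero(pred_labels == t)
            for t in np.unique(pred_labels)}


# pred_labels would come from the trained segmentation learner.
pred_labels = np.array([36, 36, 37, 35, 36, 37])
target_identification = build_target_identification(pred_labels)
print(target_identification[36])   # indices of the points belonging to tooth 36
```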
 The second acquisition unit 152 uses the learner stored in the learner storage unit 111 to generate and acquire, from the target identification information and the target shape information acquired by the first acquisition unit 151, restoration information for representing the shape of the dental restoration corresponding to the tooth of interest. The second acquisition unit 152 also generates and acquires 3D data representing the shape of the dental restoration using the restoration information.
 In the present embodiment, the restoration information is generated, for example, as follows. That is, the second acquisition unit 152 generates input information to be fed to the learner based on the information acquired by the first acquisition unit 151. Having generated the input information, the second acquisition unit 152 inputs it to the learner stored in the learner storage unit 111 and generates and acquires output information. In this case, the second acquisition unit 152 acquires from the learner storage unit 111 the learner corresponding to the target identification information acquired by the first acquisition unit 151, and generates the output information using that learner. In other words, the second acquisition unit 152 generates the output information using the learner retrieved from the learner storage unit 111 by means of the tooth identifier of the tooth of interest. The second acquisition unit 152 then generates and acquires the restoration information based on the output information. That is, the second acquisition unit 152 acquires the restoration information by using a machine learning method. Any machine learning method applicable to a regression problem that outputs numerical data from numerical data can be used; for example, deep learning (such as deep feed-forward neural networks), random forests, polynomial regression, SGD regression, LASSO regression, and Ridge regression are applicable. Functions of various known machine learning frameworks and various existing libraries can be used for the machine learning.
 More specifically, the generation of the input information and the generation of the restoration information based on the output information are performed using the same methods as in the learning unit 141. For example, the second acquisition unit 152 generates first vector information representing a multidimensional vector based on the target identification information and the target shape information acquired by the first acquisition unit 151. Then, by performing dimension reduction on the generated first vector information, it generates second vector information representing a feature vector of lower dimension than the vector represented by the first vector information, and uses this as the input information. Also, for example, the second acquisition unit 152 applies to the output information obtained using the learner the inverse transform of the dimension reduction performed when generating the input information, and generates the inversely transformed information as the restoration information. In the present embodiment, a single piece of first vector information or second vector information may be generated based on the shapes of two or more neighboring teeth, or two or more pieces of first vector information or second vector information corresponding to the two or more neighboring teeth may be generated.
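 A minimal sketch of this inference path, assuming that `input_pca`, `output_pca`, and a per-tooth dictionary of learners have been prepared as in the training sketches above: the function reduces the input vector, queries the learner for the tooth of interest, and applies the inverse PCA transform to obtain the restoration information.

```python
import numpy as np


def predict_restoration(first_vector: np.ndarray, tooth_id: int,
                        input_pca, output_pca, learners) -> np.ndarray:
    """first_vector: (D,) feature vector built from the target shape and
    target identification information of the neighboring teeth."""
    second_vector = input_pca.transform(first_vector.reshape(1, -1))  # dimension reduction
    learner = learners[tooth_id]                    # learner stored per tooth identifier
    output_vector = learner.predict(second_vector)
    restoration_info = output_pca.inverse_transform(output_vector)    # back to full dimension
    return restoration_info.ravel()
```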
 ここで、本実施の形態において、入力情報の生成時において、第一ベクトル情報を生成する場合に、第二取得部152は、メッシュレジストレーション(点群レジストレーションということもある)を行う。メッシュレジストレーションは、例えば以下のようにして行われる。 Here, in the present embodiment, when the first vector information is generated at the time of generating the input information, the second acquisition unit 152 performs mesh registration (sometimes referred to as point cloud registration). Mesh registration is performed, for example, as follows.
 That is, for the 3D data representing each tooth in the point cloud, the second acquisition unit 152 uses the reference shape information of the corresponding tooth stored in the reference shape information storage unit 112 (hereinafter referred to as the template) to adjust the number of points to a predetermined number corresponding to the template. The second acquisition unit 152 then adjusts the position of each point of the point cloud so that the overall shape follows the template. Specifically, while maintaining the overall shape of each tooth, each point of the point cloud is kept from moving too far from nearby points, the deviation between the point cloud data and the template is reduced, and the deviation between the marker coordinates of the point cloud data and those of the template is also reduced. This makes it possible to acquire highly accurate restoration information. A known method can be used as the specific mesh registration technique.
 However, in known mesh registration, each vertex of the template is associated with the nearest vertex of the point cloud data (3D data) of the corresponding tooth and then moved. This allows the point cloud data of each tooth to be expressed with the number of points of the template. However, when the 3D data of each tooth is expressed by the template, the concavities and convexities present in the original 3D data of the tooth may be smoothed out. Therefore, mesh registration may be performed such that, for vertices of the point cloud data whose curvature is at or below a threshold (with the convex direction taken as positive and the concave direction as negative), the correspondence is made from the point cloud data vertices to the template vertices, while for the other data the correspondence is made from the template vertices to the point cloud data vertices. This makes it possible to generate restoration information with good occlusion.
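 The following is a heavily simplified sketch of only the correspondence step discussed above, assuming per-vertex curvature values are already available: low-curvature data vertices are matched from the scan data toward the template, and the remaining correspondences run from the template toward the data. The smoothness terms, marker alignment, and iterative optimization of an actual non-rigid registration are omitted, and the exact handling of the "other data" here is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree


def correspondences(template_v: np.ndarray, data_v: np.ndarray,
                    data_curvature: np.ndarray, threshold: float = 0.0):
    """Return (template_index, data_index) correspondence pairs."""
    pairs = []
    tmpl_tree = cKDTree(template_v)

    # Low-curvature (flat or concave) data vertices: data -> template.
    low = np.flatnonzero(data_curvature <= threshold)
    if low.size:
        _, tmpl_idx = tmpl_tree.query(data_v[low])
        pairs.extend(zip(tmpl_idx.tolist(), low.tolist()))

    # Remaining region: template -> data (each template vertex to its nearest
    # high-curvature data vertex), a simplification of the description above.
    high = np.flatnonzero(data_curvature > threshold)
    if high.size:
        high_tree = cKDTree(data_v[high])
        _, local_idx = high_tree.query(template_v)
        pairs.extend(zip(range(len(template_v)), high[local_idx].tolist()))
    return pairs


rng = np.random.default_rng(0)
tmpl = rng.normal(size=(200, 3))
data = rng.normal(size=(260, 3))
curv = rng.normal(size=260)
print(len(correspondences(tmpl, data, curv)))
```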
 Note that the learning unit 141 likewise performs mesh registration on the dentition identification information and the dentition shape information in the same manner when generating the first vector information for the input information used in machine learning.
 また、修復物情報を用いた歯科修復物の形状を表す3Dデータの取得は、例えば以下のようにして行われる。すなわち、第二取得部152は、注目歯に対応する修復物情報を取得すると、当該注目歯の基準形状情報を、基準形状情報格納部112から取得する。そして、第二取得部152は、取得した基準形状情報と、修復物情報とを用いて、歯科修復物の形状を示す3Dデータを生成する。 In addition, acquisition of 3D data representing the shape of the dental restoration using the restoration information is performed, for example, as follows. That is, when the second acquisition unit 152 acquires the restoration information corresponding to the attention tooth, the second acquisition unit 152 acquires the reference shape information of the attention tooth from the reference shape information storage unit 112. Then, the second acquisition unit 152 generates 3D data indicating the shape of the dental restoration by using the acquired reference shape information and the restoration information.
 In the present embodiment, when the second acquisition unit 152 acquires the 3D data, the learning unit 141 generates a learner using that 3D data. In this case, the learning unit 141 evaluates the interference between the dental restoration and the surrounding teeth based on the target shape information acquired by the first acquisition unit 151 and the restoration information or 3D data acquired by the second acquisition unit. The learning unit 141 then changes the parameters of the penalty term of the cost function based on the evaluation result. As a result, restoration information subsequently generated using the learner further reflects the predetermined rule.
 The output unit 160 outputs information by transmitting it to other devices using the transmission unit 170 or the like, or by displaying it on, for example, a display device provided in the information processing system 100. The output unit 160 may or may not be considered to include output devices such as a display or a speaker. The output unit 160 can be realized by driver software for an output device, or by driver software for an output device together with the output device itself.
 本実施の形態において、出力部160は、第二取得部152が取得した3Dデータを出力する。 In the present embodiment, the output unit 160 outputs the 3D data acquired by the second acquisition unit 152.
 送信部170は、情報を、ネットワークを介して歯科修復物生産システム900を構成する他の装置に送信する。送信部170は、例えば、出力部160により出力される情報の送信を行う。送信部170は、通常、無線又は有線の通信手段で実現されるが、放送手段で実現されてもよい。 The transmission unit 170 transmits the information to other devices constituting the dental restoration production system 900 via the network. The transmission unit 170 transmits, for example, the information output by the output unit 160. The transmission unit 170 is usually realized by wireless or wired communication means, but may be realized by broadcasting means.
 図3は、同情報処理システム100の動作の一例を示すフローチャートである。 FIG. 3 is a flowchart showing an example of the operation of the information processing system 100.
 情報処理システム100は、例えば以下のようにして、歯科修復物の3Dデータの出力に関する動作を行う。 The information processing system 100 performs an operation related to the output of 3D data of the dental restoration, for example, as follows.
 (Step S101) The learning unit 141 acquires a learner by the "learner acquisition and accumulation process" described later and stores it in the learner storage unit 111. The process of step S101 can be omitted when a learner is already stored in the learner storage unit 111.
 (ステップS102)第一取得部151は、注目歯の形状を含む対象形状情報を取得する。 (Step S102) The first acquisition unit 151 acquires target shape information including the shape of the tooth of interest.
 (Step S103) The first acquisition unit 151 performs a process of extracting information on each tooth by the "identification information acquisition process" described later, and acquires the target identification information based on the target shape information.
 (ステップS104)第二取得部152は、出力情報を取得する(出力情報取得処理)。 (Step S104) The second acquisition unit 152 acquires output information (output information acquisition process).
 (ステップS105)第二取得部152は、出力情報である次元削減された情報について逆変換を行うことで、修復物情報を取得する。 (Step S105) The second acquisition unit 152 acquires the restoration information by performing inverse transformation on the dimension-reduced information which is the output information.
 (ステップS106)第二取得部152は、注目歯の基準形状情報を、基準形状情報格納部112から取得する。 (Step S106) The second acquisition unit 152 acquires the reference shape information of the tooth of interest from the reference shape information storage unit 112.
 (ステップS107)第二取得部152は、取得した修復物情報と基準形状情報とに基づいて、注目歯についての歯科修復物の形状を示す3Dデータを取得する。 (Step S107) The second acquisition unit 152 acquires 3D data indicating the shape of the dental restoration for the tooth of interest based on the acquired restoration information and the reference shape information.
 (ステップS108)出力部160は、取得した3Dデータを、CADにより取り扱うことができるように出力する。これにより、ユーザは、CAD/CAMシステムを利用して、データの編集をCAD上で行ったり、造形装置920により歯科修復物を造形したりすることができる。 (Step S108) The output unit 160 outputs the acquired 3D data so that it can be handled by CAD. As a result, the user can edit the data on the CAD by using the CAD / CAM system, and can model the dental restoration by the modeling device 920.
 図4は、同情報処理システム100が行う学習器取得蓄積処理の一例を示すフローチャートである。 FIG. 4 is a flowchart showing an example of the learning device acquisition / accumulation process performed by the information processing system 100.
 (ステップS121)学習部141は、同一の注目歯に関する修復物情報を含む2以上の学習対象情報を、学習対象情報格納部113から取得する。 (Step S121) The learning unit 141 acquires two or more learning target information including restoration information related to the same attention tooth from the learning target information storage unit 113.
 (ステップS122)学習部141は、各学習対象情報の歯列識別情報及び歯列形状情報に基づいて、メッシュレジストレーションを実行する。また、学習部141は、メッシュレジストレーションにより生成した情報に基づいて、第一ベクトル情報を生成する。 (Step S122) The learning unit 141 executes mesh registration based on the dentition identification information and the dentition shape information of each learning target information. Further, the learning unit 141 generates the first vector information based on the information generated by the mesh registration.
 (ステップS123)学習部141は、第一ベクトル情報について次元削減を行い、第二ベクトル情報を生成する。第二ベクトル情報は、学習器への入力情報となる。 (Step S123) The learning unit 141 reduces the dimension of the first vector information and generates the second vector information. The second vector information is input information to the learner.
 (ステップS124)学習部141は、各学習対象情報の修復物情報を示すベクトルについて次元削減を行い、第三ベクトル情報を生成する。第三ベクトル情報は、学習器の出力情報となる。 (Step S124) The learning unit 141 reduces the dimension of the vector indicating the restoration information of each learning target information, and generates the third vector information. The third vector information becomes the output information of the learner.
 (ステップS125)学習部141は、各学習対象情報についての第二ベクトル情報と第三ベクトル情報との組合せを用いて機械学習を行う。学習部141は、機械学習を行うことにより、学習器を取得する。 (Step S125) The learning unit 141 performs machine learning using a combination of the second vector information and the third vector information for each learning target information. The learning unit 141 acquires a learning device by performing machine learning.
 (ステップS126)学習部141は、取得した学習器を、注目歯の歯識別子に対応付けて学習器格納部111に蓄積する。図3に示す処理に戻る。 (Step S126) The learning unit 141 stores the acquired learning device in the learning device storage unit 111 in association with the tooth identifier of the tooth of interest. The process returns to the process shown in FIG.
 図5は、同情報処理システム100が行う識別情報取得処理の一例を示すフローチャートである。 FIG. 5 is a flowchart showing an example of the identification information acquisition process performed by the information processing system 100.
 (Step S141) The first acquisition unit 151 acquires, from the storage unit 110, a learner constructed by machine learning using training data in which point cloud data is the input and labeled point cloud data is the output.
 (ステップS142)第一取得部151は、取得した学習器に、取得した対象形状情報を入力する。 (Step S142) The first acquisition unit 151 inputs the acquired target shape information into the acquired learner.
 (ステップS143)第一取得部151は、学習器の出力である対象識別情報を取得する。これにより、第一取得部151は、例えば、点群にラベル付けが行われた対象形状情報(対象識別情報)を取得することができる。図3に示す処理に戻る。 (Step S143) The first acquisition unit 151 acquires the target identification information which is the output of the learner. As a result, the first acquisition unit 151 can acquire, for example, the target shape information (target identification information) in which the point cloud is labeled. The process returns to the process shown in FIG.
 図6は、同情報処理システム100が行う出力情報取得処理の一例を示すフローチャートである。 FIG. 6 is a flowchart showing an example of the output information acquisition process performed by the information processing system 100.
 (ステップS161)第二取得部152は、対象識別情報に対応する各歯について、基準形状情報格納部112から基準形状情報を取得する。 (Step S161) The second acquisition unit 152 acquires the reference shape information from the reference shape information storage unit 112 for each tooth corresponding to the target identification information.
 (ステップS162)第二取得部152は、対象識別情報に対応する各歯について、取得した基準形状情報を用いてメッシュレジストレーションを行う。 (Step S162) The second acquisition unit 152 performs mesh registration for each tooth corresponding to the target identification information using the acquired reference shape information.
 (ステップS163)第二取得部152は、メッシュレジストレーションにより生成した情報を用いて、第一ベクトル情報を生成する。 (Step S163) The second acquisition unit 152 generates the first vector information using the information generated by the mesh registration.
 (ステップS164)第二取得部152は、第一ベクトル情報について次元削減を行い、第二ベクトル情報を生成する。第二ベクトル情報は、学習器への入力情報となる。 (Step S164) The second acquisition unit 152 reduces the dimension of the first vector information and generates the second vector information. The second vector information is input information to the learner.
 (ステップS165)第二取得部152は、注目歯に対応する学習器である学習器を学習器格納部111から取得する。 (Step S165) The second acquisition unit 152 acquires a learning device, which is a learning device corresponding to the tooth of interest, from the learning device storage unit 111.
 (ステップS166)第二取得部152は、取得した学習器に入力情報を入力して、出力情報を取得する。図3に示す処理に戻る。 (Step S166) The second acquisition unit 152 inputs the input information to the acquired learner and acquires the output information. The process returns to the process shown in FIG.
 図7は、同第二取得部152により取得される歯科修復物の形状を示す3Dデータの具体例について説明する図である。 FIG. 7 is a diagram illustrating a specific example of 3D data showing the shape of the dental restoration acquired by the second acquisition unit 152.
 図7においては、1つの注目歯Eと、その近接歯A,B,C,Dとが、模式的に示されている。注目歯Eは、例えば下側の歯の1つであり、近接歯A,Dは、その注目歯Eの両隣に、注目歯Eに隣り合うように位置している歯である。近接歯B,Cは、上側に位置する歯であり、上下方向において注目歯Eに隣り合う歯であるといえ、注目歯Eとのかみ合わせの対象になる。 In FIG. 7, one attention tooth E and its adjacent teeth A, B, C, and D are schematically shown. The attention tooth E is, for example, one of the lower teeth, and the proximity teeth A and D are teeth located on both sides of the attention tooth E so as to be adjacent to the attention tooth E. Proximity teeth B and C are teeth located on the upper side, and can be said to be adjacent to the attention tooth E in the vertical direction, and are subject to engagement with the attention tooth E.
 A dental restoration used for such a tooth of interest E is required to have a shape that interferes only slightly with the neighboring teeth A and D adjacent to it on the left and right, and that does not hinder occlusion with the opposing teeth B and C above it. In the present embodiment, 3D data of a dental restoration having an appropriate shape for the tooth of interest E can be obtained by using the learner corresponding to the tooth of interest E.
 That is, the learner is generated from learning target information including dentition shape information corresponding to the neighboring teeth A, B, C, and D and restoration information corresponding to the tooth of interest E. Then, the second vector information obtained through processing such as dimension reduction on the target shape information corresponding to the neighboring teeth A, B, C, and D (shown in the figure as λ_1, λ_2, ..., λ_n for each of the teeth A, B, C, and D) is given to the learner as input information, and output information (λ̄_1, λ̄_2, ..., λ̄_n, where the bar corresponds to the overline above λ in FIG. 7), a vector representing the features of the tooth of interest E, is obtained. By applying processing such as the inverse transform of the dimension reduction to the output information, restoration information corresponding to the shape of the dental restoration can be generated. Note that, when obtaining output information for one tooth of interest, the combination of neighboring teeth whose dentition shape information was used to obtain the learner must match the combination of neighboring teeth whose target shape information is used to obtain the input information. That is, when a plurality of types of learners exist for one tooth of interest, each generated with a different combination of dentition shape information, the learner generated using the combination of dentition shape information corresponding to the combination of target shape information to be used should be used. In the example shown in the figure, separate second vector information is shown for each of the neighboring teeth A, B, C, and D; two or more pieces of second vector information may be used as the input information in this way, or a single piece of second vector information may be constructed based on the shapes of the neighboring teeth A, B, C, and D and used as the input information.
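 A minimal sketch, with illustrative shapes and values, of one of the two input options mentioned above: each neighboring tooth A, B, C, and D contributes a reduced feature vector (the λ-vectors), and the vectors are concatenated into a single input for the learner of tooth E.

```python
import numpy as np

rng = np.random.default_rng(0)
neighbor_vectors = {name: rng.normal(size=8) for name in "ABCD"}   # λ-vector per neighboring tooth
learner_input = np.concatenate([neighbor_vectors[n] for n in "ABCD"])
print(learner_input.shape)   # (32,): combined input for the tooth-E learner
```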
 As described above, in the present embodiment, a learner tuned using a plurality of pieces of second vector information obtained from the dentition shape information of a plurality of teeth close to an arbitrarily selected tooth and output information obtained from the dentition shape information of that selected tooth is fed with one or more pieces of second vector information obtained from the target shape information of a plurality of teeth close to the tooth of interest, and output information corresponding to the tooth of interest is generated. From the generated output information, restoration information for representing the shape of the dental restoration corresponding to the tooth of interest is then generated. The generation of the output information from the dentition shape information of the selected tooth is performed by, for example, PCA, and the generation of the restoration information from the output information is performed by, for example, the inverse transform of PCA. According to the present embodiment, 3D data representing the shape of a dental restoration can be output easily. That is, the modeling work for dental restorations, which conventionally required considerable effort by a skilled worker, can be made unnecessary or the effort required for modeling can be greatly reduced, while highly accurate 3D data representing the shape of the dental restoration can be obtained through machine learning. In the conventional approach there is also the problem that skills differ between workers, so the quality of the finished dental restoration tends to vary depending on the worker who produces it (the quality of the restoration depends on the individual). In contrast, according to the present embodiment, the information processing system 100 can output 3D data representing the shape of a dental restoration according to each patient's scan data and the like based on the learner, so the person-dependent portion of the dental restoration production process can be reduced.
 Further, according to the present embodiment, by utilizing the penalty term of the machine learning cost function, 3D data can be obtained that conforms to rules regarding the interference between the dental restoration and the surrounding teeth. A learner can be prepared for each tooth, and the restoration information can be generated using the learner corresponding to the tooth of interest. Therefore, 3D data representing the shape of the dental restoration with higher accuracy can be output.
 In the present embodiment, when the learner is generated or when the restoration information is generated by the second acquisition unit, input information generated also using attribute information, such as patient attributes, may be used. For example, each patient's sex, age, classification information relating to natural anthropological classification, information on lifestyle habits, and the like may be used. In this case, among the reference shape information stored in the reference shape information storage unit 112, the reference shape information corresponding to the attribute information may be used to generate the restoration information.
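 A small sketch of appending attribute information to the input vector is shown below. The particular encoding (one-hot sex, normalized age) is an illustrative assumption and is not specified by this disclosure.

```python
# Sketch only: concatenate simple attribute features to the shape-based input.
import numpy as np

def build_input_vector(adjacent_coeffs, sex, age, max_age=100.0):
    """adjacent_coeffs: concatenated lambda vectors of the adjacent teeth."""
    sex_onehot = [1.0, 0.0] if sex == "male" else [0.0, 1.0]
    attrs = np.array(sex_onehot + [age / max_age])
    return np.concatenate([adjacent_coeffs, attrs])
```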
 The processing in the present embodiment may be realized by software. The software may be distributed by software download or the like, or may be recorded on a recording medium such as a CD-ROM and distributed.
 The software that realizes the information processing system 100 in the present embodiment is the following program. That is, this program is a program for outputting information for producing a dental restoration used for one or more teeth of interest to be restored, and causes a computer of an information processing system provided with a learner storage unit, which stores a learner for acquiring restoration information representing the shape of a dental restoration corresponding to a part of two or more mutually adjacent teeth, the learner being obtained using dentition shape information including the shapes of those teeth, to function as: a first acquisition unit that acquires target shape information including the shape of the tooth of interest and the shapes of one or more adjacent teeth neighboring the tooth of interest, and target identification information that identifies the target shape information corresponding to each of the tooth of interest and the adjacent teeth; and a second acquisition unit that uses the learner to acquire, from the target identification information and the target shape information acquired by the first acquisition unit, restoration information representing the shape of the dental restoration corresponding to the tooth of interest.
 (Embodiment 2)
 An outline of Embodiment 2 will be described with respect to the differences from Embodiment 1 described above. In Embodiment 2, an information processing system having basically the same configuration as that of Embodiment 1 is used, but part of the operation for acquiring the target identification information differs from Embodiment 1.
 That is, in Embodiment 2, the user can use an annotation tool, which is software realized by the operation of the first acquisition unit 151. The first acquisition unit 151 can acquire the target identification information based on the user's input operations using the annotation tool and on the target shape information.
 That is, in Embodiment 2, the first acquisition unit 151 checks, for the acquired target shape information, the relationship between each point included in the point cloud and its surrounding points. It then identifies portions of the point cloud that have a predetermined relationship and performs segmentation of the point cloud, thereby estimating, within the point cloud indicated by the target shape information, the regions that contain the point cloud corresponding to each tooth.
 In Embodiment 2, the first acquisition unit 151 calculates the curvature (for example, the minimum curvature) at each vertex of a mesh constructed from the point cloud and segments the point cloud based on the calculated curvature. Specifically, the first acquisition unit 151 identifies portions where the curvature is larger than a predetermined first threshold (portions where the radius of curvature is smaller than a predetermined second threshold). It then thins the identified portions into single lines in three-dimensional space to determine the boundaries used for segmentation. For the thinning process, it is suitable to use, for example, a library such as "Skeletonize" (https://scikit-image.org/docs/dev/auto_examples/edges/plot_skeleton.html), although the process is not limited to this. By estimating the segmentation boundaries based on curvature in this way, the boundaries between individual teeth and the boundaries between teeth and gums can be estimated with high accuracy.
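 A minimal sketch of the curvature-threshold and thinning step is shown below. It assumes that per-vertex minimum curvature has already been estimated by some helper and that the high-curvature vertices are rasterized onto a 2D occupancy grid before thinning with scikit-image's skeletonize; the rasterization and grid size are illustrative assumptions, not the method of this disclosure.

```python
# Sketch only: threshold curvature, rasterize the boundary vertices,
# then thin the boundary region to single-pixel dividing lines.
import numpy as np
from skimage.morphology import skeletonize

def boundary_mask(curvatures, first_threshold):
    """Mark vertices whose (absolute) minimum curvature exceeds the threshold."""
    return np.abs(curvatures) > first_threshold

def rasterize(vertices, mask, grid_shape=(256, 256)):
    """Project the selected vertices onto an occlusal-plane grid (toy 2D rasterization)."""
    pts = vertices[mask][:, :2]
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    ij = ((pts - lo) / (hi - lo + 1e-9) * (np.array(grid_shape) - 1)).astype(int)
    grid = np.zeros(grid_shape, dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True
    return grid

def boundary_skeleton(vertices, curvatures, first_threshold):
    """Thin the high-curvature region down to single-pixel dividing lines."""
    grid = rasterize(vertices, boundary_mask(curvatures, first_threshold))
    return skeletonize(grid)
```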
 In the above case, the annotation tool may accept an adjustment operation from the user for changing the first threshold or the second threshold, and the point cloud may be segmented according to the received adjustment operation. In this case, the user interface of the annotation tool may provide a slider bar with a slider that the user can move using a pointing device or the like, and the threshold may be set according to the position of the slider on the slider bar. The adjustment operation may also be accepted while the output unit 160 outputs, to a display device, the shape indicated by the target shape information together with information indicating the regions to be segmented, and the display on the display device may be updated immediately according to the adjustment operation each time it is performed.
 When the point cloud has been segmented and the regions containing the point cloud corresponding to each tooth have been estimated, the user performs, using the annotation tool, a labeling operation for associating each region with the tooth it represents. Based on the result of the labeling operation, the point cloud of each region is associated with a tooth identifier.
 Specifically, for example, the first acquisition unit 151 causes the output unit 160 to display, on the display, the points included in an estimated region and the points not included in that region in mutually different display modes, and acquires annotation information input by the user. Being displayed in mutually different display modes means, for example, that the display colors differ, the point sizes differ, or the background colors or background patterns differ, but is not limited to these. The annotation information is information specifying the tooth indicated by each region. The first acquisition unit 151 can specify the region containing the point cloud corresponding to each tooth based on the annotation information input by the user, and associates the point cloud included in each region with the tooth identifier corresponding to that region based on the annotation information.
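 The following sketch illustrates how segmented regions could be associated with tooth identifiers from the user's labeling operation. The dictionary layout and the use of FDI-style tooth numbers are illustrative assumptions.

```python
# Sketch only: turn (region -> points) and (region -> user label) into
# the target identification information (tooth identifier -> points).
from typing import Dict, List

def apply_labels(regions: Dict[int, List[int]],
                 annotations: Dict[int, str]) -> Dict[str, List[int]]:
    """regions: region id -> indices of points in that region.
    annotations: region id -> tooth identifier chosen by the user (e.g. FDI "36").
    Returns: tooth identifier -> point indices."""
    labelled: Dict[str, List[int]] = {}
    for region_id, point_indices in regions.items():
        tooth_id = annotations.get(region_id)
        if tooth_id is None:
            continue  # unlabelled regions (e.g. gums) are skipped
        labelled.setdefault(tooth_id, []).extend(point_indices)
    return labelled
```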
 In this way, in Embodiment 2, the target identification information corresponding to the target shape information can be acquired by using the annotation tool.
 In Embodiment 2, as in Embodiment 1, the first acquisition unit 151 performs mesh registration on the target shape information, but this is not a limitation.
 FIG. 8 is a diagram showing an example of the operation performed by the first acquisition unit 151 according to Embodiment 2.
 FIG. 8 shows, among the operations performed by the first acquisition unit 151, the operations related to acquiring the target identification information using the annotation tool. The processing shown in FIG. 8 is performed as the processing for acquiring the target identification information (step S103) within the processing shown in FIG. 3 that is executed in Embodiment 2.
 (Step S241) The first acquisition unit 151 reads the acquired target shape information.
 (Step S242) The first acquisition unit 151 calculates the curvature at each vertex for the read target shape information.
 (Step S243) The first acquisition unit 151 acquires the currently set curvature threshold.
 (Step S244) The first acquisition unit 151 calculates, based on the acquired threshold, dividing lines that divide the point cloud into regions. The first acquisition unit 151 also accepts adjustment operations from the user concerning the dividing lines and recalculates the dividing lines based on the adjustment operations.
 (Step S245) The first acquisition unit 151 performs region division based on the calculated dividing lines. As a result, the regions belonging to each tooth and the regions belonging to the gums and the like are partitioned.
 (Step S246) The first acquisition unit 151 accepts region merging operations by the user and reflects them in the partitioning of the point cloud.
 (Step S247) The first acquisition unit 151 accepts labeling operations by the user. As a result, each region is associated with the tooth to which it belongs.
 (Step S248) The first acquisition unit 151 acquires the target identification information based on the result of accepting the labeling operations. As a result, the labeling content and the target shape information are acquired in association with each other.
 FIG. 9 is a diagram illustrating a specific example of the annotation tool that the user can use in the information processing system 100.
 FIG. 9 shows an example of an operation screen of the annotation tool displayed on, for example, a display device of the information processing system 100. The operation screen includes display data G1, which shows the loaded target shape information in 3D space, and an operation column G2, which includes two or more buttons each associated with a command for manipulating the data. In the present embodiment, the data included in the target shape information can be manipulated and the dividing lines can be adjusted by operating the buttons in the operation column G2 or by partially selecting data in the display data G1. The display mode of the display data G1 is changed as appropriate according to the operations performed by the user, the operations applied to the target shape information, and the like. This draws the user's attention to the part whose display mode has changed and lets the user recognize that the operation has been reflected, so the user can operate the tool intuitively.
 In this specific example, the operation column G2 includes a slider bar G3 for changing the curvature threshold. By adjusting the position of the slider on the slider bar G3, the user can intuitively adjust the threshold.
 The operation column G2 also includes a merge button G4 for merging regions. For example, the user can merge two or more selected regions into one region by performing an operation to select two or more regions included in the display data G1 and then operating the merge button G4. A merged region may also be made separable again.
 In this specific example, the operation screen includes a label selection tray G11 and a label assignment button G12. The label selection tray G11 includes, for example, two or more label selection buttons corresponding to the tooth identifiers that can be used for labeling by the user. The user performs, for example, an operation to select a region included in the display data G1 and then an operation to select one label selection button on the label selection tray G11. By operating the label assignment button G12 in that state, the tooth identifier corresponding to the selected label selection button can be assigned to the selected region (labeling).
 The operation screen of the annotation tool is not limited to this and can be set as appropriate.
 Embodiment 2 can provide the same effects as Embodiment 1. In Embodiment 2, as described above, operations by the user concerning segmentation and labeling of the point cloud can be accepted using the annotation tool. Therefore, even when processing target shape information containing a plurality of connected teeth, which conventionally took the user a long time to process, the operation of specifying the shape of each tooth can be performed easily. In addition, the user can easily perform, for each tooth, the operation of associating the tooth identifier that identifies the tooth with the specified region. By accepting the user's operations via a GUI in this way, the user can operate the tool more intuitively.
 The software that realizes the annotation tool in Embodiment 2 is the following program. That is, this program is a program for processing target shape information indicating a point cloud representing the shape of at least a part of the oral cavity, and causes a computer of the information processing system 100 to estimate, within the point cloud indicated by the target shape information, the regions containing the point cloud corresponding to each tooth based on the relationship between each point included in the point cloud and its surrounding points, and to display, on a display, the points included in an estimated region and the points not included in that region in mutually different display modes. This program also causes the computer to acquire information input by the user (such as a selected label), specify the region containing the point cloud corresponding to each tooth based on the information input by the user, and acquire target identification information in which, for each tooth, the tooth identifier identifying the tooth is associated with the specified region.
 The annotation tool may also be able to accept region merging operations.
 The annotation tool according to Embodiment 2 is not limited to the information processing system 100 having the function of generating restoration information using a learner, and may be executable in various apparatuses that process 3D data such as point clouds (including meshed data). In this case, the correspondence between the point cloud included in each region and the identifier associated with that region can be output and used in other apparatuses.
 (Others)
 FIG. 10 is an overview of a computer system 800 in the above embodiments. FIG. 11 is a block diagram of the computer system 800.
 These figures show the configuration of a computer that executes the programs described in this specification to realize the information processing system and the like of the above-described embodiments. The above-described embodiments can be realized by computer hardware and computer programs executed on it.
 The computer system 800 includes a computer 801 including a CD-ROM drive, a keyboard 802, a mouse 803, and a monitor 804.
 In addition to the CD-ROM drive 8012, the computer 801 includes an MPU 8013, a bus 8014 connected to the CD-ROM drive 8012 and the like, a ROM 8015 for storing programs such as a boot-up program, a RAM 8016 connected to the MPU 8013 for temporarily storing instructions of application programs and providing temporary storage space, and a hard disk 8017 for storing application programs, system programs, and data. Although not shown here, the computer 801 may further include a network card that provides a connection to a LAN.
 A program that causes the computer system 800 to execute the functions of the information processing system and the like of the above-described embodiments may be stored on a CD-ROM 8101, inserted into the CD-ROM drive 8012, and transferred to the hard disk 8017. Alternatively, the program may be transmitted to the computer 801 via a network (not shown) and stored on the hard disk 8017. The program is loaded into the RAM 8016 at the time of execution. The program may also be loaded directly from the CD-ROM 8101 or the network.
 The program does not necessarily have to include an operating system (OS), a third-party program, or the like that causes the computer 801 to execute the functions of the information processing system and the like of the above-described embodiments. The program only needs to include the portions of instructions that call appropriate functions (modules) in a controlled manner so that the desired results are obtained. How the computer system 800 operates is well known, and a detailed description is omitted.
 In the above program, steps such as a transmission step of transmitting information and a reception step of receiving information do not include processing performed by hardware, for example, processing performed by a modem or an interface card in the transmission step (processing that can only be performed by hardware).
 In the above embodiments, two or more components present in one apparatus may be physically realized in one medium.
 In the above embodiments, each process (each function) may be realized by centralized processing by a single apparatus (system), or by distributed processing by a plurality of apparatuses. When distributed processing is performed by a plurality of apparatuses, the entire system composed of the plurality of apparatuses performing the distributed processing can also be regarded as one "apparatus". In the above-described embodiments, the information processing system uses the acquired restoration information to acquire 3D data representing the shape of the dental restoration, but this is not a limitation. The information processing system acquires restoration information for representing the shape of the dental restoration, and may output the restoration information, or information generated based on it, to an external apparatus or the like as information for producing the dental restoration. The external apparatus may, for example, use a dental CAD/CAM system or the like to acquire, based on the restoration information or information generated from it, 3D data representing the shape of the dental restoration or 3D data for fabricating the dental restoration.
 In the above embodiments, the passing of information between components may be performed, when the two components exchanging the information are physically different, by one component outputting the information and the other component receiving it, or, when the two components exchanging the information are physically the same, by moving from the processing phase corresponding to one component to the processing phase corresponding to the other component.
 In the above embodiments, information related to the processing executed by each component, for example, information that each component accepts, acquires, selects, generates, transmits, or receives, and information such as thresholds, formulas, and addresses used by each component in its processing, may be held temporarily or for a long period on a recording medium (not shown), even if this is not explicitly stated in the above description. The storage of information on that recording medium (not shown) may be performed by each component or by a storage unit (not shown), and the reading of information from that recording medium (not shown) may be performed by each component or by a reading unit (not shown).
 In the above embodiments, when information used by each component, for example, thresholds, addresses, and various setting values used in processing, may be changed by the user, the user may or may not be allowed to change such information as appropriate, even if this is not explicitly stated in the above description. When the user can change such information, the change may be realized, for example, by a reception unit (not shown) that accepts a change instruction from the user and a changing unit (not shown) that changes the information in response to the change instruction. The reception of the change instruction by the reception unit (not shown) may be, for example, reception from an input device, reception of information transmitted via a communication line, or reception of information read from a predetermined recording medium.
 The present invention is not limited to the above embodiments, and various modifications are possible; these are also included within the scope of the present invention.
 An embodiment may be configured by appropriately combining the plurality of embodiments described above. For example, each component of any of the above embodiments may be replaced with or combined with a component of another embodiment as appropriate. In addition, some components and functions of the above embodiments may be omitted.
 In the above-described embodiments, the learner is a learner obtained by machine learning, but it is not limited to this.
 The learner may be, for example, a table showing the correspondence between input vectors based on input information indicating the shapes of two or more teeth including the tooth of interest and the restoration information applied to the tooth of interest. In this case, the second acquisition unit may acquire, from the table, the restoration information corresponding to the feature vector based on the target shape information. Alternatively, the second acquisition unit may generate a vector approximating the feature vector based on the target shape information by using two or more input vectors in the table together with parameters that weight each input vector, and may acquire the restoration information applied to the tooth of interest by using those parameters and the restoration information corresponding to each input vector used in the generation.
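 The following is a minimal sketch of such a table-based learner. Both the nearest-match lookup and the least-squares weighting used to blend stored entries are illustrative assumptions about how the table could be used.

```python
# Sketch only: a table of (input vector, restoration information) pairs,
# queried either by nearest match or by a weighted combination of entries.
import numpy as np

class TableLearner:
    def __init__(self, input_vectors, restorations):
        # input_vectors: (num_entries, dim); restorations: (num_entries, out_dim)
        self.inputs = np.asarray(input_vectors)
        self.restorations = np.asarray(restorations)

    def lookup(self, query):
        """Return the restoration information of the closest stored input vector."""
        idx = np.argmin(np.linalg.norm(self.inputs - query, axis=1))
        return self.restorations[idx]

    def blend(self, query):
        """Find weights w such that inputs^T @ w approximates the query vector,
        then apply the same weights to the stored restoration information."""
        w, *_ = np.linalg.lstsq(self.inputs.T, query, rcond=None)
        return w @ self.restorations
```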
 The learner may also be, for example, a function representing the relationship between input vectors based on input information indicating the shapes of two or more teeth including the tooth of interest and information for generating the restoration information applied to the tooth of interest. In this case, the second acquisition unit may, for example, obtain the information corresponding to the feature vector based on the target shape information by means of the function, and acquire the restoration information using the obtained information.
 As described above, the information processing system according to the present invention has the effect of being able to easily output 3D data representing the shape of a dental restoration, and is useful as an information processing system and the like.
 100 Information processing system
 110 Storage unit
 111 Learner storage unit
 112 Reference shape information storage unit
 120 Receiving unit
 130 Accepting unit
 140 Processing unit
 141 Learning unit
 151 First acquisition unit
 152 Second acquisition unit (an example of a restoration information acquisition unit)
 160 Output unit
 170 Transmission unit
 900 Dental restoration production system
 910 Dental scanning system
 920 Modeling apparatus

Claims (14)

  1.  An information processing system that outputs information for producing a dental restoration used for one or more teeth of interest to be restored, comprising:
     a learner storage unit that stores a learner for acquiring restoration information representing the shape of a dental restoration corresponding to a part of two or more mutually adjacent teeth, the learner being obtained using dentition shape information including the shapes of the two or more teeth;
     a first acquisition unit that acquires target shape information including the shapes of one or more adjacent teeth close to the tooth of interest, and target identification information identifying the teeth whose shapes are included in the target shape information; and
     a second acquisition unit that uses the learner to acquire, from the target identification information and the target shape information acquired by the first acquisition unit, restoration information corresponding to the tooth of interest.
  2.  The information processing system according to claim 1, further comprising a reference shape information storage unit that stores reference shape information indicating a reference shape prepared in advance corresponding to the tooth of interest, wherein
     the restoration information is information corresponding to the difference between the reference shape and the shape of the dental restoration corresponding to the tooth of interest, and
     the second acquisition unit acquires 3D data representing the shape of the dental restoration by using the restoration information and the reference shape information.
  3.  The information processing system according to claim 1 or 2, wherein
     the learner is information obtained by machine learning performed using two or more pieces of learning target information, and
     the learning target information includes:
     the dentition shape information;
     dentition identification information identifying each of the two or more teeth included in the dentition shape information; and
     restoration information for representing the shape of a dental restoration applied to a part of the two or more teeth.
  4.  The information processing system according to claim 3, wherein
     the learner storage unit stores, for each tooth of interest, a learner obtained by machine learning using two or more pieces of learning target information including restoration information for representing the shape of a dental restoration corresponding to one or more teeth of interest, the dentition identification information, and dentition shape information including the shapes of two or more teeth close to the tooth of interest, and
     the second acquisition unit acquires the restoration information from the target identification information and the target shape information acquired by the first acquisition unit, using the learner corresponding to the target identification information acquired by the first acquisition unit.
  5.  The information processing system according to claim 3 or 4, wherein the second acquisition unit generates first vector information representing a multidimensional vector based on the target identification information and the target shape information acquired by the first acquisition unit, generates second vector information by performing dimension reduction on the generated first vector information, and acquires the restoration information using the generated second vector information.
  6.  The information processing system according to any one of claims 3 to 5, wherein the learning unit adjusts the parameters of a cost function including a penalty term based on an evaluation relating to shape, including an evaluation of interference between the dental restoration and surrounding teeth.
  7.  The information processing system according to claim 6, wherein, when the second acquisition unit has acquired the 3D data, the learning unit performs the evaluation based on the target shape information acquired by the first acquisition unit and the restoration information or the 3D data acquired by the second acquisition unit, and changes the parameter of the penalty term based on the evaluation result.
  8.  The information processing system according to any one of claims 1 to 7, wherein
     the dentition shape information is information representing the shape of at least a part of the oral cavity, including the shape of the tooth corresponding to the dental restoration and the shapes of all teeth adjacent to that tooth, and
     the target shape information is information representing the shape of at least a part of the oral cavity, including the shape of the tooth of interest and the shapes of adjacent teeth close to the tooth of interest.
  9.  The information processing system according to any one of claims 1 to 8, wherein
     the target shape information is information indicating a point cloud representing the shape of at least a part of the oral cavity, and
     the first acquisition unit identifies, within the point cloud indicated by the target shape information, the regions containing the point cloud corresponding to each tooth, and acquires the target identification information in which, for each tooth, a tooth identifier identifying the tooth is associated with the identified region.
  10.  The information processing system according to claim 9, wherein the first acquisition unit:
     estimates, within the point cloud indicated by the target shape information, the regions containing the point cloud corresponding to each tooth based on the relationship between each point included in the point cloud and its surrounding points;
     displays, on a display, the points included in an estimated region and the points not included in that region in mutually different display modes, and acquires information relating to the display modes input by the user; and
     identifies the regions containing the point cloud corresponding to each tooth based on the information input by the user.
  11.  The information processing system according to claim 10, wherein the first acquisition unit displays the point cloud included in each identified region on the display, acquires labeling information input by the user for the point cloud included in each region, and acquires the target identification information based on the information input by the user.
  12.  An information processing system that outputs information for producing a dental restoration used for one or more teeth of interest to be restored, comprising:
     a learner storage unit that stores a learner adjusted using a plurality of pieces of vector information obtained from information indicating the respective shapes of a plurality of teeth close to an arbitrarily selected tooth, and output information obtained from information indicating the shape of the selected tooth; and
     a restoration information acquisition unit that uses the learner to acquire output information corresponding to the tooth of interest based on a plurality of pieces of vector information obtained from information indicating the respective shapes of a plurality of teeth adjacent to the tooth of interest, and acquires, from the acquired output information, restoration information for representing the shape of the dental restoration corresponding to the tooth of interest.
  13.  An information processing method for outputting information for producing a dental restoration used for one or more teeth of interest to be restored, the method being realized by a first acquisition unit and a second acquisition unit of an information processing system provided with a learner storage unit that stores a learner for acquiring restoration information representing the shape of a dental restoration corresponding to a part of two or more mutually adjacent teeth, the learner being obtained using dentition shape information including the shapes of the two or more teeth, the method comprising:
     a first acquisition step in which the first acquisition unit acquires target shape information including the shapes of one or more adjacent teeth neighboring the tooth of interest, and target identification information identifying the teeth whose shapes are included in the target shape information; and
     a second acquisition step in which the second acquisition unit uses the learner to acquire, from the target identification information and the target shape information acquired by the first acquisition unit, restoration information for representing the shape of the dental restoration corresponding to the tooth of interest.
  14.  A program for outputting information for producing a dental restoration used for one or more teeth of interest to be restored, the program causing a computer of an information processing system provided with a learner storage unit that stores a learner for acquiring restoration information representing the shape of a dental restoration corresponding to a part of two or more mutually adjacent teeth, the learner being obtained using dentition shape information including the shapes of the two or more teeth, to function as:
     a first acquisition unit that acquires target shape information including the shapes of one or more adjacent teeth neighboring the tooth of interest, and target identification information identifying the teeth whose shapes are included in the target shape information; and
     a second acquisition unit that uses the learner to acquire, from the target identification information and the target shape information acquired by the first acquisition unit, restoration information for representing the shape of the dental restoration corresponding to the tooth of interest.
PCT/JP2021/000625 2020-01-21 2021-01-12 Information processing system, information processing method, and program WO2021149530A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021573072A JP7390669B2 (en) 2020-01-21 2021-01-12 Information processing system, information processing method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-007455 2020-01-21
JP2020007455 2020-01-21

Publications (1)

Publication Number Publication Date
WO2021149530A1 true WO2021149530A1 (en) 2021-07-29

Family

ID=76992970

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/000625 WO2021149530A1 (en) 2020-01-21 2021-01-12 Information processing system, information processing method, and program

Country Status (2)

Country Link
JP (1) JP7390669B2 (en)
WO (1) WO2021149530A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000107203A (en) * 1998-10-06 2000-04-18 Shiyuukai Manufacture of dental prosthesis
WO2014141369A1 (en) * 2013-03-11 2014-09-18 富士通株式会社 Program for design of dental prostheses, device for design of dental prostheses, and method for design of dental prostheses
US20180028294A1 (en) * 2016-07-27 2018-02-01 James R. Glidewell Dental Ceramics, Inc. Dental cad automation using deep learning
JP2019000234A (en) * 2017-06-13 2019-01-10 デンタルサポート株式会社 Prosthesis three-dimensional model generation device, prosthesis making system, prosthesis three-dimensional model generation method and prosthesis three-dimensional model generation program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2009035142A1 (en) * 2007-09-13 2010-12-24 株式会社アドバンス Dental prosthesis measurement processing system
JP7110120B2 (en) * 2016-06-21 2022-08-01 ノベル バイオケア サーヴィシィズ アーゲー Method for estimating at least one of shape, position and orientation of dental restoration
CN108735292B (en) * 2018-04-28 2021-09-17 四川大学 Removable partial denture scheme decision method and system based on artificial intelligence


Also Published As

Publication number Publication date
JP7390669B2 (en) 2023-12-04
JPWO2021149530A1 (en) 2021-07-29

Similar Documents

Publication Publication Date Title
US11735306B2 (en) Method, system and computer readable storage media for creating three-dimensional dental restorations from two dimensional sketches
US20220218449A1 (en) Dental cad automation using deep learning
US8457772B2 (en) Method for planning a dental component
US8200462B2 (en) Dental appliances
US20090148816A1 (en) Design of dental appliances
US20220008175A1 (en) Method for generating dental models based on an objective function
JP2010532681A (en) Video auxiliary boundary marking for dental models
JP6974879B2 (en) Model generation for dental simulation
US20220304782A1 (en) Configuring dental workflows through intelligent recommendations
JP2024031920A (en) Method of automatically generating prosthesis from three-dimensional scan data, and computer-readable recording medium recording program for causing computer to perform the same
WO2024042192A1 (en) Generation of a three-dimensional digital model of a replacement tooth
US20200297245A1 (en) Motion adjustment prediction system
WO2021149530A1 (en) Information processing system, information processing method, and program
EP2142968B1 (en) A method for the manufacturing of a reproduction of an encapsulated three-dimensional physical object and objects obtained by the method
KR102610716B1 (en) Automated method for generating prosthesis from three dimensional scan data and computer readable medium having program for performing the method
US20230068829A1 (en) Compensating deviations using a simulation of a manufacturing
EP4113373A1 (en) Dental procedures
US20230390035A1 (en) Oral image processing device and oral image processing method
EP4348591A1 (en) Image processing method
EP4395693A1 (en) Compensating deviations using a simulation of a manufacturing
JP2024041065A (en) Image data processing method and image data processing system
KR20220145598A (en) An intraoral image processing apparatus, an intraoral image processing method
JP2018064821A (en) Data generating program, data generating method, information processing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21744700; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021573072; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21744700; Country of ref document: EP; Kind code of ref document: A1)