WO2021090921A1 - System, program and method for measuring the jaw movement of a subject - Google Patents

System, program and method for measuring the jaw movement of a subject

Info

Publication number
WO2021090921A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion
point
face
subject
movement
Prior art date
Application number
PCT/JP2020/041567
Other languages
English (en)
Japanese (ja)
Inventor
十河 基文
善之 木戸
一徳 野崎
一典 池邉
山口 哲
雅也 西願
Original Assignee
国立大学法人大阪大学
株式会社アイキャット
Priority date
Filing date
Publication date
Application filed by 国立大学法人大阪大学, 株式会社アイキャット
Priority to JP2021515056A (JP7037159B2)
Publication of WO2021090921A1
Priority to JP2022006480A (JP2022074153A)


Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C - DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00 - Dental auxiliary appliances
    • A61C19/04 - Measuring instruments specially adapted for dentistry
    • A61C19/045 - Measuring instruments specially adapted for dentistry for recording mandibular movement, e.g. face bows
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion

Definitions

  • the present invention relates to a system, a program, and a method for measuring the jaw movement of a subject. In addition, it relates to systems, programs, and methods for building motion point trajectory trained models used to measure a subject's jaw motion.
  • Jaw movement measurement includes "marginal (border) movement measurement," which checks the range of motion of the jawbone when the subject is not chewing (non-functional), and "masticatory jaw movement measurement," which measures mandibular movement (chewing movement) during mastication (functional). In the field of dentistry, such measurement is applied to patients with temporomandibular disorders and to patients who require prosthodontic treatment for missing teeth.
  • Patent Document 1 discloses a method for measuring jaw movement. This technique requires a head frame and mandibular markers to be fixed to the patient. In a conventional jaw movement measurement method such as that of Patent Document 1, the patient must visit the hospital so that the instruments required for the measurement can be attached. This can place a heavy burden on patients, doctors, and others. Moreover, these appliances are all dedicated devices and can be very expensive.
  • An object of the present invention is to provide a system or the like capable of easily measuring the jaw movement of a subject.
  • The system for measuring jaw movement includes an acquisition means for acquiring a plurality of consecutive images of the subject's face during jaw movement, a correction means for at least correcting the coordinate system of the subject's face, an extraction means for extracting at least a motion point in the lower jaw region of the face, and a generation means for generating at least motion point trajectory information indicating the trajectory of the motion point by tracking the motion point.
  • The extraction means includes a first extraction means for extracting a motion point in the lower jaw region of the face from the plurality of images and a second extraction means for extracting a fixed point in the upper facial region of the face from the plurality of images, and the correction means corrects the coordinate system based on the fixed point and a predefined face reference position template.
  • The first extraction means extracts a plurality of feature portions in the plurality of images and extracts, as the motion point, a feature portion among the plurality of feature portions whose coordinate change within a predetermined period is within a predetermined range.
  • The second extraction means extracts, as the fixed point, a feature portion among the plurality of feature portions whose coordinate change within a predetermined period is less than a predetermined threshold value.
  • The correction means includes a first correction means for correcting the coordinate system of the subject's face, a second generation means for generating fixed point trajectory information indicating the trajectory of the fixed point by tracking the fixed point, and a second correction means for correcting the motion point trajectory information based on the fixed point trajectory information.
  • The correction means includes a reference coordinate system trained model that has undergone processing to learn the reference coordinate systems of the faces of a plurality of subjects, and the reference coordinate system trained model is configured to correct the coordinate system of the subject's face in an input image to the reference coordinate system.
  • The reference coordinate system trained model takes the difference between the coordinate system of the subject's face in the input image and the reference coordinate system, and corrects the coordinate system by converting the input image based on that difference.
  • the conversion process includes an affine transformation.
  • The extraction means extracts a plurality of feature portions in the plurality of images and extracts, as motion points, pixels of feature portions whose coordinate changes within a predetermined period are within a predetermined range.
  • The correction means includes a base face model generation means for generating a base face model of the subject's face and a jaw movement face model generation means for generating a jaw movement face model of the subject by reflecting the subject's face in the plurality of images in the base face model, and the correction means corrects the coordinate system of the jaw movement face model based on the coordinate system of the base face model.
  • the extraction means extracts motion points in the jaw motion face model or the base face model.
  • the system of the present invention further includes an evaluation means for generating jaw movement evaluation information indicating the evaluation of the jaw movement of the subject based on at least the generated movement point trajectory information.
  • The system of the present invention further comprises a reference point configured to be placed in the mandibular region of the subject, and the extraction means extracts the reference point in the plurality of images as the motion point.
  • The system further comprises a reference point configured to be placed in the upper facial region of the subject, and the second extraction means extracts that reference point in the plurality of images as the fixed point.
  • the reference point is configured to represent a specific point on the reference point.
  • A program for measuring jaw movement is executed in a system including a processor unit, and the program causes the processor unit to perform processing including at least acquiring a plurality of consecutive images of the subject's face during jaw movement, at least correcting the coordinate system of the subject's face, extracting at least a motion point in the lower jaw region of the face, and generating at least motion point trajectory information indicating the trajectory of the motion point by tracking the motion point.
  • The method for measuring jaw movement includes acquiring a plurality of consecutive images of the subject's face during jaw movement, at least correcting the coordinate system of the subject's face, extracting at least a motion point in the lower jaw region of the face, and generating at least motion point trajectory information indicating the trajectory of the motion point by tracking the motion point.
  • The system for measuring jaw movement includes an acquisition means for acquiring a plurality of consecutive images of the subject's face during jaw movement, an extraction means for extracting at least a motion point in the lower jaw region of the face based on the plurality of images, a generation means for generating at least motion point trajectory information indicating the trajectory of the motion point by tracking the motion point, and an evaluation means for generating jaw movement evaluation information indicating an evaluation of the subject's jaw movement based at least on the motion point trajectory information by using a motion point trajectory trained model that has undergone processing to learn the motion point trajectory information of a plurality of subjects, the motion point trajectory trained model being configured to correlate input motion point trajectory information with an evaluation of jaw movement.
  • A program for measuring jaw movement is executed in a system including a processor unit, and the program causes the processor unit to perform processing including acquiring a plurality of consecutive images of the subject's face during jaw movement, extracting at least a motion point in the lower jaw region of the face based on the plurality of images, generating at least motion point trajectory information indicating the trajectory of the motion point by tracking the motion point, and generating jaw movement evaluation information indicating an evaluation of the subject's jaw movement based at least on the motion point trajectory information by using a motion point trajectory trained model that has undergone processing to learn the motion point trajectory information of a plurality of subjects, the motion point trajectory trained model being configured to correlate input motion point trajectory information with an evaluation of jaw movement.
  • The method for measuring jaw movement includes acquiring a plurality of consecutive images of the subject's face during jaw movement, extracting at least a motion point in the lower jaw region of the face based on the plurality of images, generating at least motion point trajectory information indicating the trajectory of the motion point by tracking the motion point, and generating jaw movement evaluation information indicating an evaluation of the subject's jaw movement based at least on the motion point trajectory information by using a motion point trajectory trained model that has undergone processing to learn the motion point trajectory information of a plurality of subjects, the motion point trajectory trained model being configured to correlate input motion point trajectory information with an evaluation of jaw movement.
  • The system for constructing a motion point trajectory trained model used for measuring the jaw movement of a subject acquires motion point trajectory information indicating the trajectories of motion points obtained by tracking the motion points of a plurality of subjects, and constructs the motion point trajectory trained model by learning processing using at least the motion point trajectory information of the plurality of subjects as input teacher data.
  • When the learning process is supervised learning, the acquisition means also acquires evaluations of the jaw movements of the plurality of subjects, and the acquired evaluations of the jaw movements are used as output teacher data.
  • Alternatively, the learning process may be unsupervised learning.
  • the system of the present invention further comprises a classification means for classifying the output of the constructed motion point locus trained model.
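  • As an illustrative sketch only (not the patent's method), the supervised variant above could look as follows in Python; the use of scikit-learn, a random forest classifier, and the resampling of each trajectory to a fixed length are all assumptions made for the example.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def resample(traj, n=50):
            # Resample a (frames, 3) trajectory to n samples per axis and flatten it
            # into a fixed-length feature vector.
            t_old = np.linspace(0.0, 1.0, len(traj))
            t_new = np.linspace(0.0, 1.0, n)
            cols = [np.interp(t_new, t_old, traj[:, k]) for k in range(traj.shape[1])]
            return np.column_stack(cols).ravel()

        def build_trajectory_model(trajectories, evaluations):
            # trajectories: list of (frames, 3) motion point trajectories (input teacher data)
            # evaluations: jaw movement evaluation labels for each subject (output teacher data)
            X = np.stack([resample(t) for t in trajectories])
            model = RandomForestClassifier(n_estimators=100)
            model.fit(X, evaluations)
            return model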
  • A program for constructing a motion point trajectory trained model used for measuring the jaw movement of a subject is executed in a system including a processor unit, and the program causes the processor unit to perform processing including acquiring motion point trajectory information indicating the trajectories of motion points obtained by tracking the motion points of a plurality of subjects, and constructing the motion point trajectory trained model by learning processing using at least the motion point trajectory information of the plurality of subjects as input teacher data.
  • A method for constructing a motion point trajectory trained model used to measure a subject's jaw movement includes at least acquiring motion point trajectory information indicating the trajectories of motion points obtained by tracking the motion points of a plurality of subjects, and constructing the motion point trajectory trained model by learning processing using at least the motion point trajectory information of the plurality of subjects as input teacher data.
  • According to the present invention, it is possible to provide a system or the like capable of easily measuring the jaw movement of a subject.
  • the user can measure jaw movements at any place, for example, at a company, at home, or the like, in addition to a medical institution.
  • A diagram showing an example of the configuration of the processor unit 120 in one embodiment.
  • A diagram showing an example of the result of extraction of a plurality of feature portions of a face by the extraction means 122.
  • A diagram showing an example of the configuration of the processor unit 130 in one embodiment.
  • A diagram showing an example of the configuration of the processor unit 140 in another embodiment.
  • The "mandibular region" refers to the region of the face on the mandible.
  • The "upper facial region" refers to the region of the face other than the mandibular region. That is, the facial region is divided into the "mandibular region" and the "upper facial region".
  • the "face coordinate system” means the coordinate system defined in the face.
  • the "face coordinate system” includes, for example, a horizontal system (x-axis), a sagittal system (y-axis), and a coronary system (z-axis).
  • the horizontal system in the “face coordinate system” is defined along, for example, the eye-ear plane (Frankfurt plane), the nasal hearing line (or Campel plane), the hip plane, the occlusal plane, or both pupil lines.
  • the sagittal system of the "face coordinate system” is defined along, for example, the median line, the mid-sagittal plane, and the like.
  • The coronal system of the "face coordinate system" is defined, for example, along the orbital plane.
  • the planes or lines that define the horizontal, sagittal, and coronal systems are not limited to those described above and can be defined along any plane or line.
  • the "reference coordinate system of the face” refers to the coordinate system of the face normally possessed by a person facing the front.
  • the “face reference coordinate system” is derived by learning multiple facial images.
  • the "movement point” means a point or region on a part that is moved by jaw movement.
  • the "fixed point” means a point or region on a part that does not move due to jaw movement.
  • the "base face model” refers to a three-dimensional model of a face, and is a so-called “3D avatar”.
  • the "jaw movement face model” that reflects the actual jaw movement is obtained.
  • the “jaw movement face model” is a so-called “moving 3D avatar” according to the jaw movement in the moving image.
  • FIG. 1 shows an example of a flow 10 for simply measuring the jaw movement of a patient using one embodiment of the present invention.
  • In the flow 10, the dentist simply captures a video of the patient 20 chewing, the jaw movement of the patient 20 is measured (that is, the jaw movement during mastication is measured), and the measurement result of the jaw movement is provided to the dentist.
  • a dentist uses a terminal device 300 (for example, a smartphone, a tablet, etc.) to take a video of the patient 20 chewing. Since a moving image can be regarded as a plurality of continuous images (still images), the "moving image” and the “consecutive plurality of images” are used synonymously in the present specification.
  • In step S1, the captured moving image is provided to the server device 30. It does not matter how the moving image is provided to the server device 30.
  • the moving image may be provided to the server device 30 via a network (eg, the Internet, LAN, etc.).
  • the moving image may be provided to the server device 30 via a storage medium (eg, removable media).
  • the server device 30 processes the moving image provided in step S1.
  • the processing by the server device 30 generates the measurement result of the jaw movement of the patient 20.
  • the measurement result of the jaw movement may be, for example, information representing the trajectory of the jaw movement.
  • the measurement result of the jaw movement may be, for example, jaw movement evaluation information for evaluating the jaw movement of the patient 20 generated based on the information representing the trajectory of the jaw movement.
  • the jaw movement evaluation information includes, for example, information on whether or not the trajectory of the jaw movement is a normal pattern.
  • In step S2, the measurement result of the jaw movement generated by the server device 30 is provided to the dentist. It does not matter in what manner the measurement result of the jaw movement is provided.
  • the jaw movement measurement results may be provided via a network or via a storage medium.
  • the measurement results of jaw movement may be provided via a paper medium.
  • the dentist confirms the measurement result of the jaw movement of the patient 20 provided in step S2.
  • the dentist can easily measure the jaw movement of the patient 20 simply by photographing the state of chewing of the patient 20 with the terminal device 300 without requiring a specific instrument.
  • the jaw movement during mastication is measured in the flow 10, but the present invention is not limited to this.
  • For example, marginal (border) movement measurement can also be performed.
  • This can be achieved by having the dentist use the terminal device 300 to take a video of the patient 20 moving his or her jaw to its limits (for example, opening the mouth wide, protruding the jaw forward, etc.) and by processing the captured video in the server device 30.
  • The flow 10 has been described as one in which the measurement result of the jaw movement is provided to the dentist simply by the dentist taking a video of the jaw movement of the patient 20, but the present invention is not limited to this.
  • For example, the patient 20 may use his or her own terminal device 300 to take a video of himself or herself chewing, or may have another person take such a video with the terminal device 300; simply by doing so, the measurement result of the patient 20's jaw movement can be provided to a doctor, a dentist, a caregiver, the patient 20's family, or the patient 20 himself or herself.
  • the terminal device 300 can perform the processing by the server device 30 described above.
  • the server device 30 can be omitted and the terminal device 300 can operate standalone.
  • In this case, the measurement result of the jaw movement may be provided from the terminal device 300, without going through the server device 30, to a doctor in the hospital, a dentist, a caregiver, the patient 20's family, or the patient 20 himself or herself.
  • the above-mentioned flow 10 can be realized by using the computer system 100 of the present invention described later.
  • FIG. 2 shows an example of the configuration of a computer system 100 for measuring the jaw movement of a subject.
  • the computer system 100 is connected to the database unit 200. Further, the computer system 100 may be connected to at least one terminal device 300 via the network 400.
  • the network 400 can be any kind of network.
  • the network 400 may be, for example, the Internet or a LAN.
  • the network 400 may be a wired network or a wireless network.
  • An example of the computer system 100 is a server device, but the present invention is not limited to this.
  • An example of the terminal device 300 is, but is not limited to, a computer held by a user (for example, a terminal device) or a computer installed in a hospital (for example, a terminal device).
  • The computer may be a server device or a terminal device.
  • the terminal device can be any type of terminal device such as a smartphone, tablet, personal computer, smart glasses and the like.
  • The computer system 100 includes an interface unit 110, a processor unit 120, and a memory unit 170.
  • the interface unit 110 exchanges information with the outside of the computer system 100.
  • the processor unit 120 of the computer system 100 can receive information from the outside of the computer system 100 via the interface unit 110, and can transmit the information to the outside of the computer system 100.
  • the interface unit 110 can exchange information in any format.
  • the interface unit 110 includes, for example, an input unit that enables information to be input to the computer system 100. It does not matter in what manner the input unit enables the information to be input to the computer system 100. For example, when the input unit is a touch panel, the user may input information by touching the touch panel. Alternatively, when the input unit is a mouse, the user may input information by operating the mouse. Alternatively, when the input unit is a keyboard, the user may input information by pressing a key on the keyboard. Alternatively, when the input unit is a microphone, the user may input information by inputting voice into the microphone. Alternatively, when the input unit is a camera, the information captured by the camera may be input.
  • the information may be input by reading the information from the storage medium connected to the computer system 100.
  • Alternatively, when the input unit is a receiver, the receiver may input information by receiving information from the outside of the computer system 100 via the network 400.
  • the interface unit 110 includes, for example, an output unit that enables information to be output from the computer system 100. It does not matter in what manner the output unit enables the information to be output from the computer system 100. For example, when the output unit is a display screen, information may be output to the display screen. Alternatively, when the output unit is a speaker, the information may be output by the voice from the speaker. Alternatively, when the output unit is a data writing device, the information may be output by writing the information to the storage medium connected to the computer system 100. Alternatively, when the output unit is a transmitter, the transmitter may output information by transmitting information to the outside of the computer system 100 via the network 400. In this case, the type of network does not matter. For example, the transmitter may transmit information via the Internet or may transmit information via LAN.
  • the output unit can output the motion point locus information generated by the computer system 100.
  • the output unit can output the jaw movement evaluation information generated by the computer system 100.
  • the processor unit 120 executes the processing of the computer system 100 and controls the operation of the entire computer system 100.
  • the processor unit 120 reads the program stored in the memory unit 170 and executes the program. This allows the computer system 100 to function as a system that performs the desired steps.
  • the processor unit 120 may be implemented by a single processor or may be implemented by a plurality of processors.
  • the memory unit 170 stores a program required to execute the processing of the computer system 100, data required to execute the program, and the like.
  • the memory unit 170 includes a program for causing the processor unit 120 to perform a process for measuring the jaw movement of the subject (for example, a program for realizing the processes shown in FIGS. 8 and 9 described later), and a subject's jaw.
  • a process for constructing a motion point trajectory trained model used for measuring motion (for example, a program that realizes the process shown in FIG. 10 described later) may be stored.
  • the program may be pre-installed in the memory unit 170.
  • the program may be installed in the memory unit 170 by being downloaded via the network. In this case, the type of network does not matter.
  • the memory unit 170 may be implemented by any storage means.
  • the database unit 200 can store, for example, a plurality of consecutive images of the faces of the subjects during jaw movement for each of the plurality of subjects.
  • The plurality of consecutive images of the subject's face during jaw movement may be transmitted from the terminal device 300 to the database unit 200 via the network 400, or may have been captured by, for example, a photographing means provided in the computer system 100.
  • a plurality of consecutive images of faces during jaw movement of a plurality of subjects can be used to construct a face reference coordinate system trained model described later.
  • When a plurality of consecutive images of the face during jaw movement, or the motion point trajectory information derived from those images, is stored in association with the evaluation of the jaw movement of the subject, the stored data can be used to construct the motion point trajectory trained model described later.
  • the database unit 200 may store, for example, motion point locus information or jaw motion evaluation information output by the computer system 100.
  • the terminal device 300 includes at least a shooting means such as a camera.
  • the photographing means may have any configuration as long as it is possible to capture at least a plurality of consecutive images.
  • the imaging means is used to capture a plurality of consecutive images of the subject's face during jaw movement.
  • the image to be captured may be an image including two-dimensional information (length x width) or an image including three-dimensional information (length x width x depth).
  • FIG. 3 shows an example of the configuration of the processor unit 120 in one embodiment.
  • the processor unit 120 includes an acquisition unit 121, an extraction unit 122, and a generation unit 123.
  • the acquisition means 121 is configured to acquire a plurality of consecutive images of the subject's face during jaw movement.
  • the acquisition means 121 can acquire, for example, a plurality of consecutive images of the subject's face during jaw movement stored in the database unit 200 via the interface unit 110.
  • the acquisition means 121 can acquire, for example, a plurality of consecutive images received from the terminal device 300 via the interface unit 110.
  • The acquisition means 121 may also be configured to acquire motion point trajectory information indicating the trajectories of motion points obtained by tracking the motion points of a plurality of subjects, in order to construct the motion point trajectory trained model described later.
  • The motion point trajectory information may be, for example, motion point trajectory information acquired by using the computer system 100 of the present invention, or motion point trajectory information obtained from any known jaw movement measuring device.
  • the acquisition means 121 can acquire, for example, the motion point locus information stored in the database unit 200 via the interface unit 110.
  • the acquisition means 121 may be further configured to acquire evaluations of jaw movements of a plurality of subjects in order to construct a motion point trajectory learned model described later.
  • the acquisition means 121 can acquire, for example, the evaluation of the jaw movement stored in the database unit 200 via the interface unit 110.
  • the extraction means 122 is configured to extract at least the movement points in the lower jaw region of the face.
  • the extraction means 122 can, for example, extract a motion point from an image acquired by the acquisition means 121.
  • the extraction means 122 can, for example, extract a motion point from the output of the correction means 131 described later.
  • The extraction means 122 may receive an input indicating where the motion point is in the image and extract the motion point based on that input, or may automatically extract the motion point without receiving such an input.
  • the extraction means 122 can automatically extract, for example, a point or region on the midline of the face as a motion point.
  • the points on the midline in the mandibular region of the face are suitable for evaluating jaw movements because they move significantly due to jaw movements.
  • the extraction means 122 may automatically extract, for example, a point at the tip of the lower jaw as a movement point.
  • When the extraction means 122 automatically extracts the motion points, it first detects a plurality of feature portions of the face for each of the plurality of images.
  • One of the plurality of feature portions of the face can be represented, for example, as one or more pixels in the image.
  • The plurality of feature portions of the face may be, for example, portions corresponding to the eyes, eyebrows, nose, mouth, chin, ears, throat, contour, and the like.
  • the extraction of the plurality of feature portions of the face by the extraction means 122 can be performed, for example, by using a trained model that has been subjected to a process of learning the feature portions using the face images of a plurality of subjects.
  • For example, the extraction of a plurality of feature portions of a face by the extraction means 122 can be performed using the face recognition application "OpenFace" (Tadas Baltrusaitis, Peter Robinson, Louis-Philippe Morency, "OpenFace: an open source facial behavior analysis toolkit", 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1-10, 2016).
  • OpenFace is software that can output three-dimensional face information (length x width x depth) from an image including two-dimensional face information (length x width).
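  • As an illustrative sketch (not part of the patent text), landmark extraction with OpenFace is typically driven through its FeatureExtraction command-line tool, whose per-frame output can then be loaded for the processing described here; the executable name, the -3Dfp flag, and the CSV column naming (X_0/Y_0/Z_0.. for the 3D estimate, landmark 8 as the chin tip) are assumptions that depend on the installed OpenFace version.

        import subprocess
        import pandas as pd

        def extract_landmarks(video_path, out_dir="processed"):
            # Run OpenFace's FeatureExtraction on the video and load the resulting CSV.
            subprocess.run(["FeatureExtraction", "-f", video_path,
                            "-out_dir", out_dir, "-3Dfp"], check=True)
            stem = video_path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
            return pd.read_csv(f"{out_dir}/{stem}.csv", skipinitialspace=True)

        # Hypothetical usage: 3D coordinates (in mm) of the chin-tip landmark, one row per frame.
        # df = extract_landmarks("subject_chewing.mp4")
        # chin = df[["X_8", "Y_8", "Z_8"]].to_numpy()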
  • the extraction means 122 may, for example, perform color enhancement processing on the image and extract a portion having a color tone different from that of the surroundings as a feature portion. This is particularly preferable when extracting a portion having a color tone that is clearly different from that of the surroundings such as eyebrows.
  • FIG. 4 shows an example of the result of extracting a plurality of feature parts of the face by the extraction means 122.
  • the extraction means 122 uses the face recognition application "Openface” to extract a plurality of feature portions of the face. Each of the extracted plurality of feature portions is displayed with a black dot 4000.
  • "Open face” the contour is extracted as a feature portion, and a portion without a conspicuous feature inside the contour (for example, between the cheek, mouth and chin, etc.) cannot be extracted.
  • the extraction means 122 extracts the point 4100 in the mandibular region as an exercise point from the extracted plurality of feature portions.
  • the extraction means 122 can, for example, extract a portion of the extracted plurality of feature portions whose coordinate change within a predetermined period is within a predetermined range as a motion point.
  • the predetermined period can be, for example, all or a part of the period in which a plurality of consecutive images are taken.
  • the predetermined range may be, for example, a range of about 5 mm to about 20 mm.
  • the motion point 4100 at the lower end of the mandible in the mandibular region can be extracted.
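  • As an illustrative, assumption-laden sketch (not the patent's implementation), the displacement-based selection described above could look as follows, with the thresholds taken from the text (roughly 5 mm to 20 mm of travel for a motion point and, as described later for fixed points, under about 1 mm); `landmarks` is assumed to be an array of shape (n_frames, n_points, 3) in millimetres.

        import numpy as np

        def classify_points(landmarks, motion_range=(5.0, 20.0), fixed_threshold=1.0):
            # Peak-to-peak travel of each tracked feature portion over the whole period.
            travel = np.linalg.norm(landmarks.max(axis=0) - landmarks.min(axis=0), axis=-1)
            motion_idx = np.where((travel >= motion_range[0]) & (travel <= motion_range[1]))[0]
            fixed_idx = np.where(travel < fixed_threshold)[0]
            return motion_idx, fixed_idx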
  • the generation means 123 is configured to generate at least motion point locus information indicating the locus of the motion point by tracking the motion point.
  • the motion point locus information is information indicating the trajectory of the motion point within a predetermined period.
  • the predetermined period can be, for example, all or a part of the period in which a plurality of consecutive images are taken.
  • The generation means 123 can generate the motion point trajectory information by, for example, specifying the coordinates of the motion point in each of the plurality of consecutive images and tracing those coordinates across the consecutive images.
  • the coordinates of the motion point may be two-dimensional coordinates or three-dimensional coordinates.
  • For example, when the images include two-dimensional information, the coordinates of the motion point can be two-dimensional coordinates, and when the images include three-dimensional information, the coordinates of the motion point can be three-dimensional coordinates.
  • the generated motion point trajectory information can be output to the outside of the computer system 100 via the interface unit 110.
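  • As an illustrative sketch (not part of the original text), the motion point trajectory information could be assembled as a simple per-frame time series of the tracked coordinates; the array layout and the frame rate are assumptions.

        import numpy as np

        def motion_point_trajectory(landmarks, point_index, fps=30.0):
            # landmarks: (n_frames, n_points, 3); returns the trajectory of one motion point.
            coords = landmarks[:, point_index, :]
            t = np.arange(coords.shape[0]) / fps        # time of each frame in seconds
            return {"t": t, "xyz": coords,
                    "range_of_motion": coords.max(axis=0) - coords.min(axis=0)}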
  • If the subject moves or tilts during imaging, the contour of the subject's face in the image changes and the extracted feature portions also change.
  • As a result, the extracted feature portions of the face are displaced between the plurality of consecutive images.
  • FIG. 5 is a diagram comparing the results of extracting facial feature portions using Openface for two frames in a moving image when the angle between the face and the camera is shifted.
  • In FIG. 5, the white line is an extension of the line of the bridge of the nose, and the black dots are the extracted feature portions. Since the distance between the feature portion on the mandible indicated by the arrow and the white line differs between the two frames, it can be confirmed that the extracted feature portion is not the same in the two frames.
  • the motion points extracted based on the feature portions will also be different between the plurality of consecutive images. In this case, it becomes impossible to generate accurate motion point trajectory information. Therefore, it is preferable to correct the movement and inclination of the subject during imaging so that the extracted motion points do not differ between the plurality of consecutive images.
  • Alternatively, the extraction means 122 may extract the motion point by recognizing a reference point placed in the mandibular region.
  • the reference point may be, for example, a sticker or ink.
  • When the extraction means 122 extracts the reference point as the motion point, the extracted motion point can be the same among the plurality of consecutive images.
  • The reference point may be configured to represent a specific point on the reference point.
  • the reference point recognized from each of the plurality of consecutive images by the extraction means 122 may be slightly deviated between the plurality of consecutive images. This is because the extraction means 122 recognizes an arbitrary point in the reference point. In this case, the extracted motion points are slightly different between a plurality of consecutive images, which may lead to an error in the motion point trajectory information.
  • the specific point is preferably of a size recognized as a point by the extraction means 122.
  • the reference point can have a pattern representing a specific point.
  • the pattern can be, for example, a dot pattern (eg, an AR marker (eg, ArUco marker) as shown in FIG. 11A, a QR code®, etc.).
  • the dot pattern represents the corner of each dot or the center of the pattern as a specific point.
  • the pattern can be, for example, a center of gravity symbol in mechanical drawing (for example, a symbol obtained by dividing a circle into four equal parts as shown in FIG. 11B).
  • the center of gravity symbol in the mechanical drawing represents the center as a specific point.
  • Alternatively, the reference point may represent the specific point by color coding (for example, the color of the specific point being the complementary color of the rest of the reference point, or the complementary color of the skin color).
  • the reference point may represent a specific point by making its size sufficiently small.
  • the three-dimensional coordinates of the reference point can be derived by recognizing at least four points on the dot pattern. Derivation of three-dimensional coordinates can be achieved by using techniques known in the field of AR (augmented reality) and the like.
  • the three-dimensional coordinates of the reference point correspond to the three-dimensional coordinates of the motion point.
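  • As an illustrative sketch (not the patent's implementation), an ArUco-type dot pattern such as the one mentioned above can be detected with OpenCV and its 3D position recovered from the four marker corners; the dictionary, marker size, and camera intrinsics below are assumptions, and the aruco API names vary slightly between OpenCV versions.

        import cv2
        import numpy as np

        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed camera intrinsics
        dist = np.zeros(5)                                           # assumed zero distortion
        MARKER_LEN_MM = 10.0                                         # assumed marker edge length

        def reference_point_3d(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
            corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
            if ids is None:
                return None
            # Pose from the four corner points; tvec is the marker centre in camera coordinates (mm).
            rvecs, tvecs, *_ = cv2.aruco.estimatePoseSingleMarkers(corners, MARKER_LEN_MM, K, dist)
            return tvecs[0].reshape(3)  # treat the marker centre as the "specific point"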
  • The extraction means 122 may extract the feature portions by using an application other than "OpenFace".
  • In that case, it is preferable that the application be capable of extracting consistent feature portions across the plurality of images. This is because the same effect as using the above-mentioned reference point, namely that the extracted motion points can be the same among the plurality of consecutive images, is obtained.
  • the extraction means 122 can, for example, extract a feature portion from an image including three-dimensional information (length x width x depth) of the face.
  • the extraction means 122 can also extract the feature portion from the image including the three-dimensional information (length x width x depth) of the face, so that the extracted motion points can be the same among a plurality of consecutive images.
  • By extracting the reference point as the motion point, by extracting feature portions using an application capable of extracting consistent feature portions across the plurality of images, or by extracting feature portions from images including three-dimensional information of the face, the extracted motion points can be made the same among the plurality of consecutive images.
  • However, when the subject moves or tilts during imaging, that movement or tilt is included in the trajectory of the motion point, and accurate motion point trajectory information can no longer be generated. Therefore, it is preferable to correct for the movement and tilt of the subject during imaging.
  • FIG. 6A shows an example of the configuration of the processor unit 130 in one embodiment.
  • the processor unit 130 may have a configuration for making corrections so that the extracted motion points do not differ between a plurality of consecutive images.
  • the processor unit 130 is a processor unit included in the computer system 100 as an alternative to the processor unit 120 described above.
  • In FIG. 6A, the same elements as those shown in FIG. 3 are given the same reference numbers, and their description is omitted here.
  • the processor unit 130 includes acquisition means 121, correction means 131, extraction means 122, and generation means 123.
  • the correction means 131 is configured to at least correct the coordinate system of the subject's face based on the plurality of images acquired by the acquisition means 121.
  • the correction means 131 can correct the coordinate system of the subject's face for each of the plurality of images so that the coordinate system of the subject's face matches for each of the plurality of images.
  • the correction means 131 can correct the coordinate system of the face, for example, by performing an affine transformation on each of the plurality of images.
  • When the plurality of images include three-dimensional information (length x width x depth), a three-dimensional affine transformation can be performed.
  • As a result, the motion points extracted from each of the plurality of images will match among the plurality of images.
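  • As an illustrative sketch (an assumption, not the patent's exact procedure), a per-frame 2D affine correction could be estimated from the upper-face fixed points so that they land where they were in the first frame; `fixed_pts_per_frame` is assumed to be a list of (k, 2) pixel coordinate arrays.

        import cv2
        import numpy as np

        def stabilise(frames, fixed_pts_per_frame):
            # Warp every frame so its fixed points align with those of the first frame.
            ref = np.asarray(fixed_pts_per_frame[0], dtype=np.float32)
            out = []
            for frame, pts in zip(frames, fixed_pts_per_frame):
                M, _ = cv2.estimateAffinePartial2D(np.asarray(pts, dtype=np.float32), ref)
                h, w = frame.shape[:2]
                if M is not None:
                    frame = cv2.warpAffine(frame, M, (w, h))
                out.append(frame)
            return out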
  • For example, the correction means 131 can correct the coordinate system of the subject's face for each of the plurality of images by utilizing a reference coordinate system trained model that has undergone processing to learn the reference coordinate systems of the faces of a plurality of subjects.
  • the reference coordinate system trained model is configured to output an image obtained by correcting the coordinate system of the subject's face in the input image to the reference coordinate system.
  • the reference coordinate system trained model can be constructed using any machine learning model.
  • the reference coordinate system trained model can be constructed by, for example, supervised learning.
  • supervised learning for example, an image of the subject's face can be used as input teacher data, and the reference coordinate system in the image can be used as output teacher data.
  • By this learning, the trained model becomes able to recognize a reference coordinate system that the faces of the plurality of subjects are statistically estimated to have.
  • When an image of the subject's face is input to such a trained model, the difference between the coordinate system of the subject's face and the reference coordinate system is output.
  • By constructing the trained model so that the input image is converted (for example, scaled, rotated, sheared, translated, etc.) until the difference between the coordinate system of the subject's face and the reference coordinate system becomes zero or falls below a predetermined threshold value, the reference coordinate system trained model can be generated.
  • the image conversion process can be performed using, for example, affine transformation.
  • the reference coordinate system trained model can be constructed by, for example, unsupervised learning.
  • unsupervised learning for example, facial images of a plurality of subjects can be used as input teacher data.
  • By this learning, the trained model becomes able to recognize the face coordinate system common to many images as the reference coordinate system that the faces of the plurality of subjects are statistically presumed to have.
  • When an image of the subject's face is input to such a trained model, the difference between the coordinate system of the subject's face and the reference coordinate system is output.
  • By constructing the trained model so that the input image is converted (for example, scaled, rotated, sheared, translated, etc.) until the difference between the coordinate system of the subject's face and the reference coordinate system becomes zero or falls below a predetermined threshold value, the reference coordinate system trained model can be generated.
  • the image conversion process can be performed using, for example, affine transformation.
  • the image used for the input teacher data may be, for example, an image including two-dimensional information (length x width), but an image including three-dimensional information (length x width x depth) is preferable. This is because the constructed reference coordinate system trained model can output an image including three-dimensional information.
  • An image containing three-dimensional information (length x width x depth) can be acquired using, for example, an RGB-D camera.
  • the extraction means 122 extracts the motion points in the lower jaw region of the face in the coordinate system corrected by the correction means 131.
  • the generation means 123 generates motion point locus information indicating the locus of the motion point by tracking the extracted motion points. Since the motion points extracted from each of the plurality of images are acquired in the same coordinate system, the generated motion point trajectory information becomes more accurate information.
  • FIG. 6B shows an example of the configuration of the processor unit 140 in another embodiment.
  • the processor unit 140 may have a configuration for correcting the coordinate system of the subject's face by using a fixed point that does not move in the jaw movement as a reference.
  • the processor unit 140 is a processor unit included in the computer system 100 as an alternative to the processor unit 120 described above.
  • the same reference numbers are assigned to the same components as those described above in FIGS. 3 and 6A, and the description thereof will be omitted here.
  • the processor unit 140 includes acquisition means 121, correction means 131, extraction means 122, and generation means 123.
  • the extraction means 122 includes a first extraction means 141 and a second extraction means 142.
  • the second extraction means 142 is configured to extract fixed points in the upper facial region of the face based on a plurality of images.
  • The second extraction means 142 may, for example, receive an input indicating where the fixed point is in the image and extract the fixed point based on that input, or may automatically extract the fixed point without receiving such an input.
  • The fixed point can be a point or region on a site that does not move with jaw movement, such as the forehead, the glabella, or the tip of the nose.
  • For example, the fixed point can be a point or region on an anatomical feature (for example, the eyebrows, glabella, external canthus (outer corner of the eye), internal canthus, pupil, tragus, tip of the nose, etc.).
  • The fixed points preferably cover at least 3 pixels in the image; the fixed point may be, for example, one region having at least 3 pixels, or three points each having at least 1 pixel. This is because, if there are at least 3 pixels, the correction means 131 described later can use these three as a reference to correct the plurality of images so as to correspond to the face reference position template.
  • The fixed points can be at least three points that are separated from each other; in this case, the error that can occur when the correction means 131 described below corrects the coordinate system of the subject's face based on the fixed points and a predefined face reference position template can be reduced.
  • Alternatively, the fixed point can be a region that covers at least a portion of the upper facial region of the face (for example, a curved surface covering the forehead); in this case as well, the error that can occur when the correction means 131 described below corrects the coordinate system of the subject's face based on the fixed point and the predefined face reference position template can be reduced.
  • When the second extraction means 142 automatically extracts a fixed point, it first detects a plurality of feature portions of the face for each of the plurality of images, in the same manner as the extraction means 122 described above with reference to FIG. 3.
  • The extraction of the plurality of feature portions of the face by the second extraction means 142 can be performed, for example, by using a trained model that has undergone processing to learn feature portions using the face images of a plurality of subjects, or by using the face recognition application "OpenFace".
  • the second extraction means 142 extracts a point or region in the upper facial region as a fixed point among the plurality of extracted feature portions.
  • the second extraction means 142 can, for example, extract a portion of the extracted plurality of feature portions whose coordinate change within a predetermined period is less than a predetermined threshold value as a fixed point.
  • the predetermined period can be, for example, all or a part of the period in which a plurality of consecutive images are taken.
  • the predetermined threshold can be, for example, about 1 mm.
  • the second extraction means 142 may, for example, perform color enhancement processing on the image to extract a portion having a color tone different from that of the surroundings as a feature portion. This is particularly preferable when extracting a portion having a color tone that is clearly different from that of the surroundings such as eyebrows.
  • The second extraction means 142 may extract the fixed point by recognizing a reference point placed in the upper facial region.
  • Such a reference point is also referred to as a fixed point reference point, to distinguish it from the reference point placed in the mandibular region.
  • The fixed point reference point can have the same configuration as the reference point placed in the mandibular region.
  • the fixed point reference point may be, for example, a sticker or ink.
  • The fixed point reference point may be used separately from the reference point placed in the mandibular region, but the fixed point reference point is preferably used in combination with the reference point placed in the mandibular region.
  • The fixed point reference point may be configured to represent a specific point on the fixed point reference point.
  • the fixed point reference point can have a pattern representing a specific point.
  • the pattern can be, for example, a dot pattern (eg, an AR marker (eg, ArUco marker) as shown in FIG. 11A, a QR code®, etc.).
  • the dot pattern represents the corner of each dot or the center of the pattern as a specific point.
  • the pattern can be, for example, a center of gravity symbol in mechanical drawing (for example, a symbol obtained by dividing a circle into four equal parts as shown in FIG. 11B).
  • the center of gravity symbol in the mechanical drawing represents the center as a specific point.
  • Alternatively, the fixed point reference point may represent the specific point by color coding (for example, the color of the specific point being the complementary color of the rest of the reference point, or the complementary color of the skin color).
  • the fixed point reference point may represent a specific point by making its size sufficiently small.
  • the three-dimensional coordinates of the fixed point reference point can be derived by recognizing at least four points on the dot pattern. Derivation of three-dimensional coordinates can be achieved by using techniques known in the field of AR (augmented reality) and the like.
  • the three-dimensional coordinates of the fixed point reference point correspond to the three-dimensional coordinates of the fixed point.
  • The correction means 131 is configured to correct the coordinate system of the subject's face based on the fixed point and the predefined face reference position template. For example, the correction means 131 converts each of the plurality of images (for example, by scaling, rotation, shearing, translation, etc.) so that each fixed point in the plurality of images moves to the corresponding point on the face reference position template. Alternatively, for example, the correction means 131 converts each of the plurality of images so that the distance between each fixed point in the plurality of images and the corresponding point on the face reference position template becomes zero or falls below a predetermined threshold.
  • the correction means 131 transforms each of the plurality of images so that the plane defined by each fixed point of the plurality of images coincides with the corresponding plane on the face reference position template.
  • the coordinate system of the subject's face in the plurality of images after the conversion process becomes the coordinate system of the corrected face.
  • the image conversion process can be performed by, for example, affine transformation.
  • When the fixed point is a region covering at least a part of the upper facial region of the face (for example, a curved surface covering the forehead), the correction means 131 can convert each of the plurality of images so that the error between that region and the corresponding region on the face reference position template is minimized. This can be done, for example, using the method of least squares.
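  • As an illustrative sketch (assuming 3D landmark coordinates are available), a least-squares rigid alignment of the fixed points onto the corresponding template points, applied to every landmark of a frame, could look as follows; the template coordinates themselves are hypothetical.

        import numpy as np

        def align_to_template(fixed_pts, template_pts, all_pts):
            # Kabsch-style least-squares fit: rotate/translate so that fixed_pts match
            # template_pts, then apply the same transform to all landmarks of the frame.
            mu_f, mu_t = fixed_pts.mean(axis=0), template_pts.mean(axis=0)
            A, B = fixed_pts - mu_f, template_pts - mu_t
            U, _, Vt = np.linalg.svd(A.T @ B)
            d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against a reflection
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return (all_pts - mu_f) @ R.T + mu_t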
  • the correction means 131 may correct the coordinate system of the subject's face by using, for example, a motion point in addition to the fixed point.
  • the face reference position template is a template that defines the position of the face (that is, the face reference position) when the face is facing the front.
  • a face reference position template can be anatomically defined.
  • For example, the face reference position template is a template that defines the position of the face when the eye-ear plane (Frankfurt plane), the ala-tragus line (Camper's plane), the HIP plane, the occlusal plane, or the interpupillary line is horizontal.
  • the face reference position template can be a template that defines the position of the face in a state where the median line or the mid-sagittal plane is vertical.
  • a face reference position template can be a template that defines the position of the face with the orbital plane facing forward.
  • Alternatively, the face reference position template can be a template that defines the position of the face with the eye-ear plane (Frankfurt plane), the ala-tragus line (Camper's plane), the HIP plane, the occlusal plane, or the interpupillary line horizontal, and/or with the median line or the mid-sagittal plane vertical, and/or with the orbital plane facing forward.
  • the face reference position template can be used to confirm the position of each part of the face when the face is facing forward.
  • For example, the correction means 131 can confirm the position of a facial part in the face reference position template and convert each of the plurality of images so as to move the corresponding parts (for example, the parts where fixed points are located) in the plurality of images to that position.
  • Alternatively, the correction means 131 can confirm the orientation of various planes (for example, the Frankfurt plane, Camper's plane, etc.) in the face reference position template and convert each of the plurality of images so as to move the corresponding planes (for example, planes defined by fixed points) in the plurality of images to that orientation.
  • the face reference position template can be implemented as a template used when taking a plurality of consecutive images.
  • For example, when a plurality of images are taken with the terminal device 300, the face reference position template can be displayed on the screen of the terminal device 300 so that images are captured only when the orientation of the subject's face matches the face reference position template.
  • As a result, the appearance of the subject's face is consistent across the plurality of captured images, and the possibility that the extracted motion points differ between the plurality of consecutive images can be reduced. Limiting the captured images in this way is also a type of correction by the computer system 100.
  • The first extraction means 141 may have the same configuration as the extraction means 122 described above with reference to FIG. 3.
  • the first extraction means 141 extracts the motion points in the lower jaw region of the face in the coordinate system corrected by the correction means 131.
  • the generation means 123 generates motion point locus information indicating the locus of the motion point by tracking the extracted motion points. Since the motion points extracted from each of the plurality of images are acquired in the same coordinate system, the generated motion point trajectory information becomes more accurate information.
  • More accurate motion point trajectory information can be generated by correcting the coordinate system of the face using the fixed point as a reference. Furthermore, when a fixed point moves independently of the jaw movement due to movement of the subject's body while the plurality of consecutive images are taken (for example, when the eyebrows are set as fixed points and the eyebrows or the skin of the upper facial region moves up and down during the jaw movement, so that the fixed points do not match between the images), it is preferable not only to correct the coordinate system of the face using the fixed points as a reference but also to make a correction that offsets the movement of the fixed points. This reduces the error in the motion point trajectory caused by the movement of the fixed points, so that even more accurate motion point trajectory information can be generated.
  • the correction means 131 may further include a second generation means (not shown) and a second correction means (not shown).
  • the second generation means is configured to generate fixed point locus information indicating the locus of the fixed point by tracking the fixed point.
  • the second generation means may, for example, generate fixed point trajectory information for each of the at least 3 pixels, or may generate fixed point trajectory information for at least 1 pixel out of the at least 3 pixels (e.g., the pixel closest to the midline, the uppermost pixel, the bottommost pixel, a randomly selected pixel, etc.).
  • the second generation means may generate fixed point locus information for, for example, a center of gravity of at least 3 pixels.
  • the fixed point locus information is information indicating the locus of a fixed point within a predetermined period.
  • the predetermined period can be, for example, all or a part of the period in which a plurality of consecutive images are taken.
  • the second generation means can generate the fixed point locus information by, for example, tracking the coordinates in the image of the fixed point in each of a plurality of consecutive images.
  • the second correction means is configured to correct the motion point trajectory information based on the fixed point trajectory information.
  • the second correction means may correct the motion point locus information by subtracting the locus of the fixed point from the locus of the motion point.
  • For example, the motion point trajectory information can be corrected by subtracting the locus of the fixed point from the locus of the motion point using the fixed point locus information of the pixel whose locus moves the least.
  • Alternatively, the motion point trajectory information can be corrected by subtracting the locus of the fixed point from the locus of the motion point using the fixed point locus information of the pixel whose locus moves the most.
  • Alternatively, the motion point trajectory information can be corrected by subtracting the locus of the fixed point from the locus of the motion point using the fixed point locus information of the center of gravity of the at least 3 pixels.
  • the second correction means may correct the motion point locus information by tracking the corrected motion point obtained by subtracting the coordinates of the fixed point from the coordinates of the motion point.
  • the coordinates of the pixel having the smallest locus movement among the fixed point locus information of at least 3 pixels may be subtracted from the coordinates of the motion point to obtain the corrected motion point.
  • the coordinates of the pixel having the largest locus movement may be subtracted from the coordinates of the motion point to obtain the corrected motion point.
  • the coordinates of the center of gravity of at least 3 pixels may be subtracted from the coordinates of the motion point to obtain the corrected motion point.
  • the motion point locus information corrected in this way does not include an error due to the movement of the fixed point, and can be more accurate information.
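  • The subtraction-based corrections described above can be sketched as follows; this is an illustrative numpy example under assumed array shapes (one motion point trajectory and K fixed-point pixel trajectories over N frames), not the actual implementation.

```python
import numpy as np

def correct_motion_trajectory(motion_traj, fixed_trajs, mode="centroid"):
    """Offset the motion point trajectory by a fixed point trajectory.

    motion_traj: (N, 2) array of motion point coordinates per frame.
    fixed_trajs: (K, N, 2) array of trajectories of K fixed-point pixels.
    mode selects which fixed point trajectory is subtracted, mirroring the
    alternatives in the text: "smallest" / "largest" locus movement, or
    the "centroid" of the fixed-point pixels.
    """
    if mode == "centroid":
        ref = fixed_trajs.mean(axis=0)                       # centroid trajectory
    else:
        # total path length travelled by each fixed-point pixel
        lengths = np.linalg.norm(np.diff(fixed_trajs, axis=1), axis=2).sum(axis=1)
        idx = lengths.argmin() if mode == "smallest" else lengths.argmax()
        ref = fixed_trajs[idx]
    # subtracting the fixed point locus offsets movement of the fixed point
    # (e.g. body movement during imaging), leaving the jaw movement itself
    return motion_traj - ref
```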
  • In the example described above, the motion point locus information is generated by correcting the coordinate system of the face, extracting the motion points in the corrected coordinate system, and tracking the motion points; however, in the present invention, the timing of the correction is not limited to this.
  • the motion point locus information may be generated by extracting the motion points, correcting the extracted motion points, and tracking the corrected motion points.
  • the motion point locus information may be generated by extracting the motion points and tracking the extracted motion points, and the generated motion point trajectory information may be corrected. This can be achieved by correcting the coordinate system of the motion point or the motion point locus information in the same manner as the correction of the coordinate system described above.
  • The motion point locus information generated by the generation means 123 sometimes includes information indicating motion other than that of the lower jaw motion point (for example, noise due to movement of the subject's body during imaging (for example, movement of the upper jaw) or due to tilt), and sometimes does not.
  • The former is the case where the motion point trajectory information is corrected after the motion point trajectory information is generated.
  • The latter is the case where the coordinate system is corrected and the motion points are extracted in the corrected coordinate system before the motion point trajectory information is generated, or where the motion points are extracted and the extracted motion points are corrected before the motion point trajectory information is generated.
  • FIG. 6C shows an example of the configuration of the processor unit 150 in another embodiment.
  • the processor unit 150 may have a configuration for correcting the coordinate system of the subject's face by using the subject's jaw movement face model.
  • the processor unit 150 is a processor unit included in the computer system 100 as an alternative to the processor unit 120 described above.
  • the same reference numbers are assigned to the same components as those described above in FIGS. 3 and 6A, and the description thereof will be omitted here.
  • the processor unit 150 includes acquisition means 121, correction means 131, extraction means 122, and generation means 123.
  • the correction means 131 includes a base face model generation means 151 and a jaw movement face model generation means 152.
  • the base face model generation means 151 is configured to generate a base face model of the subject's face.
  • the base face model generation means 151 may generate the base face model from, for example, a face image of the subject acquired in advance (for example, a face image of the subject stored in the database unit 200).
  • the base face model may be generated from the image of the subject's face received from the terminal device 300 via the interface unit 110.
  • the face image used to generate the base face model is preferably a front-facing, stationary, expressionless image. This is because the generated base face model then becomes an expressionless model facing the front, which makes it easy to adapt the model to various movements.
  • the base face model generation means 151 can generate a base face model by using any known method.
  • the image used may be an image including two-dimensional information (length x width), but is preferably an image containing three-dimensional information (length x width x depth). This is because an image containing three-dimensional information can generate a base face model with higher accuracy more easily.
  • the jaw movement face model generation means 152 is configured to generate a jaw movement face model of a subject by reflecting the face of the subject in a plurality of images in the base face model.
  • the jaw movement face model generation means 152 generates a jaw movement face model of the subject by reflecting the face of the subject in a plurality of consecutive images received from the terminal device 300 via the interface unit 110 on the base face model.
  • the jaw movement face model generation means 152 can generate a jaw movement face model by using any known method.
  • a jaw movement face model can be created by deriving the coordinates of each part of the subject's face in a plurality of consecutive images and mapping the coordinates of each part on the base face model.
  • the generated jaw movement face model is a 3D avatar that moves according to the movement of the subject shown in a plurality of images.
  • For example, the base face model generation means 151 and the jaw movement face model generation means 152 can generate the jaw movement face model by performing processing similar to the processing used to construct the "Animoji" implemented in iPhone (registered trademark) X.
  • the correction means 131 corrects the difference in the coordinate system between the base face model and the jaw movement face model.
  • the correction means 131 is configured to correct the coordinate system of the face by correcting the coordinate system of the jaw movement face model based on the coordinate system of the base face model.
  • the correction means 131 corrects the face coordinate system by converting the coordinate system of the jaw movement face model to the coordinate system of the base face model, for example, by performing an arbitrary coordinate conversion process.
  • the correction means 131 may cause the jaw movement face model generation means 152 to perform the coordinate conversion processing on the coordinates of each part of the subject's face in the plurality of consecutive images before the jaw movement face model generation means 152 generates the jaw movement face model. As a result, the coordinate system of the generated jaw movement face model matches the coordinate system of the base face model.
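  • One possible form of such a coordinate conversion, shown only as an illustrative sketch, is a rigid (rotation plus translation) alignment estimated from corresponding 3D points of the two models; the assumption that corresponding landmark points are available in both models is mine, not part of the original disclosure.

```python
import numpy as np

def align_to_base_model(jaw_pts, base_pts):
    """Estimate a rigid transform mapping jaw movement face model points
    onto the corresponding base face model points (Kabsch algorithm),
    so that both models share one coordinate system.

    jaw_pts, base_pts: (N, 3) arrays of corresponding 3D points.
    Returns (R, t) such that aligned = jaw_pts @ R.T + t.
    """
    jc, bc = jaw_pts.mean(axis=0), base_pts.mean(axis=0)
    H = (jaw_pts - jc).T @ (base_pts - bc)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # avoid an improper (reflected) rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = bc - R @ jc
    return R, t
```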
  • the extraction means 122 extracts the motion points in the lower jaw region of the face in the coordinate system corrected by the correction means 131.
  • the extraction means 122 may, for example, extract the motion points in the base face model, extract the motion points in the jaw movement face model before the coordinate conversion, or extract the motion points in the jaw movement face model after the coordinate conversion.
  • the motion points are reflected in the corresponding points of the jaw motion face model in the process of generating the jaw motion face model.
  • the generation means 123 generates motion point locus information indicating the locus of the motion point by tracking the extracted motion points. Since the motion points extracted from each of the plurality of images are acquired in the same coordinate system, the generated motion point trajectory information becomes more accurate information.
  • In the example described above, the motion point locus information is generated by tracking the motion points in the corrected coordinate system (that is, the coordinate system of the base face model); however, in the present invention, the timing of the correction is not limited to this.
  • For example, the motion point locus information may be generated by extracting the motion points in the coordinate system of the jaw motion face model, correcting the extracted motion points, and tracking the corrected motion points.
  • Alternatively, the motion points may be extracted in the coordinate system of the jaw motion face model, the motion point trajectory information may be generated by tracking those motion points, and the corrected motion point trajectory information may then be obtained by correcting the generated trajectory information.
  • Alternatively, the motion points may be extracted in the plurality of images and reflected in the jaw motion face model, and the motion point trajectory information may be generated by tracking the reflected motion points, or by correcting the trajectory information obtained by tracking the reflected motion points. This can be achieved by correcting the coordinate system of the motion points or the locus of the motion points in the same manner as the coordinate conversion process described above.
  • The motion point locus information generated by the generation means 123 sometimes includes information indicating motion other than that of the lower jaw motion point (for example, noise due to movement of the subject's body during imaging (for example, movement of the upper jaw) or due to tilt), and sometimes does not.
  • The former is the case where the motion point trajectory information is corrected after the motion point trajectory information is generated.
  • The latter is the case where the coordinate system is corrected and the motion points are extracted in the corrected coordinate system before the motion point trajectory information is generated, or where the motion points are extracted and the extracted motion points are corrected before the motion point trajectory information is generated.
  • FIG. 6D shows an example of the configuration of the processor unit 160 in another embodiment.
  • the processor unit 160 may have a configuration for evaluating the motion of the subject based on the generated motion point trajectory information.
  • the processor unit 160 is a processor unit included in the computer system 100 as an alternative to the processor unit 120 described above.
  • the same reference numbers are assigned to the same components as those described above in FIG. 3, and the description thereof will be omitted here.
  • the processor unit 160 includes acquisition means 121, extraction means 122, generation means 123, and evaluation means 161.
  • the evaluation means 161 is configured to generate jaw movement evaluation information indicating the evaluation of the jaw movement of the subject based on at least the movement point trajectory information.
  • the evaluation means 161 can generate jaw motion evaluation information by using the motion point trajectory learned model that has been processed to learn the motion point trajectory information of a plurality of subjects.
  • the motion point trajectory trained model is configured to correlate the input motion point trajectory information with the evaluation of jaw motion.
  • the motion point locus information generated by the generation means 123 may include noise due to movement or tilt of the subject's body during imaging; the motion point trajectory trained model is trained on motion point trajectory information that includes such noise, so the evaluation can be generated accurately even in the presence of such noise.
  • the motion point trajectory information used for learning may be two-dimensional information (information indicating the trajectory of the motion point in a certain plane), three-dimensional information (information indicating the trajectory of the motion point in a certain space), or four-dimensional information (information indicating the trajectory of the motion point and the velocity of the motion point in a certain space).
  • the velocity may be a scalar quantity, but is preferably a vector quantity.
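  • As an illustrative sketch only, the four-dimensional information mentioned above could be formed by appending per-frame velocity vectors to a 3D trajectory; the frame rate value below is an assumption.

```python
import numpy as np

def trajectory_with_velocity(traj_3d, fps=30.0):
    """Append per-frame velocity vectors to a 3D motion point trajectory.

    traj_3d: (N, 3) array of motion point coordinates per frame.
    fps: assumed frame rate of the continuous images.
    Returns an (N, 6) array: position (x, y, z) plus velocity vector
    (vx, vy, vz) for each frame -- a vector quantity, not just speed.
    """
    dt = 1.0 / fps
    velocity = np.gradient(traj_3d, dt, axis=0)
    return np.hstack([traj_3d, velocity])
```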
  • the moving point locus trained model can be constructed using any machine learning model.
  • the motion point locus trained model can be, for example, a neural network model.
  • FIG. 7 shows an example of the structure of the neural network model 1610 that can be used by the evaluation means 161.
  • the neural network model 1610 has an input layer, at least one hidden layer, and an output layer.
  • the number of nodes in the input layer of the neural network model 1610 corresponds to the number of dimensions of the input data.
  • the hidden layer of the neural network model 1610 can contain any number of nodes.
  • the number of nodes in the output layer of the neural network model 1610 corresponds to the number of dimensions of the output data. For example, when evaluating whether or not there is an abnormality in jaw movement, the number of nodes in the output layer can be 1. For example, when evaluating which of the seven patterns the jaw movement locus is, the number of nodes in the output layer can be seven.
  • the neural network model 1610 can be pre-learned using the information acquired by the acquisition means 121.
  • the learning process is a process of calculating the weighting coefficient of each node of the hidden layer of the neural network model 1610 using the data acquired by the acquisition means 121.
  • the learning process is, for example, supervised learning.
  • In supervised learning, for example, the motion point trajectory information is used as input teacher data and the evaluation of the corresponding jaw motion is used as output teacher data; by calculating the weighting coefficient of each node of the hidden layer of the neural network model 1610 using the information of a plurality of subjects, a trained model capable of correlating motion point trajectory information with the evaluation of jaw motion can be constructed.
  • A set of (input teacher data, output teacher data) pairs for supervised learning includes, for example, (motion point trajectory information of the first subject, evaluation of the jaw motion of the first subject), (motion point trajectory information of the second subject, evaluation of the jaw motion of the second subject), ..., (motion point trajectory information of the i-th subject, evaluation of the jaw motion of the i-th subject).
  • When motion point trajectory information newly acquired from a subject is input to the input layer of the trained neural network model, the evaluation of that subject's jaw motion is output from the output layer.
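  • A minimal, purely illustrative sketch of such supervised learning is given below; the feature layout (flattened, resampled trajectories), the 7-class label scheme, and the use of scikit-learn's MLPClassifier are assumptions made here for illustration, not the disclosed implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder teacher data: each row is a flattened motion point trajectory
# (here 60 frames x 3 coordinates), each label a jaw motion evaluation class.
X_train = rng.normal(size=(40, 60 * 3))        # input teacher data (40 subjects)
y_train = rng.integers(0, 7, size=40)          # output teacher data (7 patterns)

# Feed-forward network with one hidden layer; the output layer size follows
# the number of evaluation classes automatically.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Inference: a newly acquired trajectory goes into the input layer,
# an estimated jaw motion evaluation comes out of the output layer.
new_trajectory = rng.normal(size=(1, 60 * 3))
print(model.predict(new_trajectory))
```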
  • the learning process is, for example, unsupervised learning.
  • In unsupervised learning, for example, for a plurality of subjects, the motion point trajectory information is used as input teacher data, and the resulting outputs are divided into a plurality of clusters by clustering.
  • each cluster is characterized based on the assessment of jaw movements of the subject to which it belongs.
  • a learned model capable of correlating the motion point trajectory information with the evaluation of jaw motion is constructed.
  • Clustering can be performed using, for example, any known technique.
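  • The clustering step could look roughly like the following; this is only a sketch under assumed data shapes, using k-means as one of the known techniques mentioned above.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder input: flattened motion point trajectories of many subjects,
# with no evaluation labels attached (unsupervised setting).
X = rng.normal(size=(40, 60 * 3))

# Divide the outputs into clusters; each cluster would afterwards be
# characterised using the known jaw motion evaluations of the subjects in it.
kmeans = KMeans(n_clusters=7, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)        # cluster index assigned to each subject
```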
  • the motion point trajectory information generated in the above-described example with reference to FIGS. 6A to 6C can be used, for example, to evaluate the jaw motion of the subject.
  • the processor unit 120, 130, 140, or 150 may further comprise an evaluation means (not shown) that generates jaw movement evaluation information indicating the evaluation of the jaw movement of the subject based on at least the generated motion point trajectory information.
  • the evaluation means may have the same configuration as the evaluation means 161 included in the processor unit 160, or may have a different configuration.
  • the jaw movement evaluation information can be generated by using the movement point trajectory learned model that has been processed to learn the movement point trajectory information of a plurality of subjects.
  • In the examples described above, a configuration including the correction means 131 has been described; the correction means 131 deals with the movement and inclination of the subject during imaging so that the extracted motion points do not differ between the plurality of consecutive images.
  • Alternatively, the movement and inclination of the subject during imaging can be dealt with by using a trained model that has learned jaw movement trajectories including the body movement and inclination of the subject during imaging.
  • the database unit 200 is provided outside the computer system 100 in the examples described above, but the present invention is not limited thereto. At least a part of the database unit 200 can also be provided inside the computer system 100. In that case, at least a part of the database unit 200 may be implemented by the same storage means as the storage means in which the memory unit 170 is implemented, or by a different storage means. In any case, at least a part of the database unit 200 is configured as a storage unit for the computer system 100.
  • the configuration of the database unit 200 is not limited to a specific hardware configuration.
  • the database unit 200 may be composed of a single hardware component or may be composed of a plurality of hardware components.
  • the database unit 200 may be configured as an external hard disk device of the computer system 100, or may be configured as a storage on the cloud connected via a network.
  • In the examples described above, the components of each of the processor units 120, 130, 140, 150, and 160 are provided within that same processor unit; however, the present invention is not limited to this.
  • a configuration in which each component of the processor units 120, 130, 140, 150, and 160 is distributed to a plurality of processor units is also within the scope of the present invention.
  • the plurality of processor units may be located in the same hardware component, or may be located in separate hardware components in the vicinity or remote.
  • the base face model generation means 151 of the processor unit 150 is preferably implemented by a processor unit different from other components. This is because the process of creating a base model, which has a heavy load, can be performed separately.
  • each component of the computer system 100 described above may be composed of a single hardware component or may be composed of a plurality of hardware components. When it is composed of a plurality of hardware parts, the mode in which each hardware part is connected does not matter. Each hardware component may be connected wirelessly or by wire.
  • the computer system 100 of the present invention is not limited to a specific hardware configuration. It is also within the scope of the present invention that the processor units 120, 130, 140, 150, 160 are configured by an analog circuit instead of a digital circuit.
  • the configuration of the computer system 100 of the present invention is not limited to the above-mentioned one as long as the function can be realized.
  • FIG. 8 is a flowchart showing an example (processing 800) of processing by the computer system 100 for measuring the jaw movement of a subject.
  • the process 800 is executed, for example, in the processor unit 130, 140 or 150 in the computer system 100.
  • In step S801, the acquisition means 121 of the processor unit acquires a plurality of continuous images of the subject's face during jaw movement.
  • the acquisition means 121 can acquire, for example, a plurality of consecutive images of the subject's face during jaw movement stored in the database unit 200 via the interface unit 110.
  • the acquisition means 121 can acquire, for example, a plurality of consecutive images received from the terminal device 300 via the interface unit 110.
  • In step S802, the correction means 131 of the processor unit corrects at least the coordinate system of the subject's face.
  • The correction means 131 of the processor unit can correct the coordinate system of the subject's face, for example, based on the images acquired in step S801.
  • In step S803, the extraction means 122 of the processor unit extracts at least the motion points in the lower jaw region of the face in the coordinate system corrected in step S802, as in each of the embodiments described above.
  • the extraction means 122 can receive an input of where the motion point in the image is via the interface unit 110 (for example, from the terminal device 300), and can extract the motion point based on the input.
  • the extraction means 122 can automatically extract the motion point without receiving an input, for example.
  • the extraction means 122 can, for example, detect a plurality of feature portions of the face for each of the plurality of images and extract, as a motion point, a portion whose coordinate change within a predetermined period falls within a predetermined range among the detected feature portions.
  • the predetermined period can be, for example, all or a part of the period in which a plurality of consecutive images are taken.
  • the predetermined range can be, for example, about 5 mm to about 20 mm.
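  • As a rough illustration only, the criteria above (and the fixed-point threshold criterion used later in step S8020) could be applied to already-detected feature points as follows; the landmark detector itself, the pixel-to-millimetre scale, and the fixed-point threshold value are assumptions.

```python
import numpy as np

def classify_feature_points(landmarks, mm_per_px, move_range=(5.0, 20.0), fix_thresh=2.0):
    """Split detected facial feature points into motion points and fixed points.

    landmarks: (N_frames, N_points, 2) array of feature point coordinates
        detected in each of the continuous images (detector not specified here).
    mm_per_px: assumed image scale converting pixel displacement to millimetres.
    move_range: coordinate change (mm) treated as a motion point (about 5-20 mm).
    fix_thresh: coordinate change (mm) below which a point is treated as fixed
        (the specific value is an assumption).
    Returns index arrays (motion_idx, fixed_idx) into the feature points.
    """
    # maximum displacement of each point from its position in the first frame
    disp_px = np.linalg.norm(landmarks - landmarks[0], axis=2).max(axis=0)
    disp_mm = disp_px * mm_per_px
    motion_idx = np.where((disp_mm >= move_range[0]) & (disp_mm <= move_range[1]))[0]
    fixed_idx = np.where(disp_mm < fix_thresh)[0]
    return motion_idx, fixed_idx
```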
  • In step S804, the generation means 123 of the processor unit generates at least the motion point trajectory information indicating the trajectory of the motion point by tracking the motion point.
  • the motion point locus information is information indicating the trajectory of the motion point within a predetermined period.
  • the predetermined period can be, for example, all or a part of the period in which a plurality of consecutive images are taken.
  • Since the motion points extracted from each of the plurality of images are acquired in the corrected coordinate system, the generated motion point trajectory information becomes more accurate.
  • For example, in step S802, the correction means 131 of the processor unit 140 can correct the coordinate system of the subject's face based on the fixed point and the pre-defined face reference position template.
  • In this case, step S8020 can be included before step S802.
  • In step S8020, the second extraction means 142 of the extraction means 122 of the processor unit 140 extracts fixed points in the upper facial region of the face based on the plurality of images acquired in step S801.
  • The second extraction means 142 can, for example, receive via the interface unit 110 (for example, from the terminal device 300) an input indicating where the fixed point in the image is, and extract the fixed point based on that input.
  • the second extraction means 142 can automatically extract a fixed point without receiving an input, for example.
  • The second extraction means 142 can, for example, detect a plurality of feature portions of the face for each of the plurality of images and extract, as a fixed point, a portion whose coordinate change within a predetermined period is less than a predetermined threshold among the detected feature portions.
  • the predetermined period can be, for example, all or a part of the period in which a plurality of consecutive images are taken.
  • the predetermined threshold can be, for example, about 5 mm to about 20 mm.
  • the second extraction means 142 can, for example, perform color enhancement processing on the image and extract a portion having a color tone different from the surroundings as a fixed point.
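  • A possible form of such colour-based extraction is sketched below using OpenCV; the HSV range (roughly a blue marker) and the OpenCV 4 return signature of findContours are assumptions made for illustration.

```python
import cv2
import numpy as np

def extract_marker_point(image_bgr, lower_hsv=(100, 120, 80), upper_hsv=(130, 255, 255)):
    """Find the centre of a marker whose colour tone differs from the surroundings.

    image_bgr: one of the captured images (BGR, as read by OpenCV).
    lower_hsv / upper_hsv: assumed HSV range of the marker colour.
    Returns the (x, y) centroid of the largest matching region, or None.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```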
  • the fixed point extracted in step S8020 can be considered to be one of those extracted in the extraction step of step S803, and thus step S8020 can be considered to be part of step S803.
  • In step S802, the correction means 131 of the processor unit 140 corrects the coordinate system of the subject's face based on the fixed point extracted in step S8020 and the face reference position template defined in advance.
  • For example, the correction means 131 converts each of the plurality of images (for example, by scaling, rotation, shearing, translation, etc.) so that, for each of the plurality of images, the fixed point extracted in step S8020 moves to the corresponding point on the face reference position template.
  • For example, the correction means 131 converts each of the plurality of images so that, for each of the plurality of images, the distance between the fixed point extracted in step S8020 and the corresponding point on the face reference position template becomes zero or less than a predetermined threshold.
  • Alternatively, the correction means 131 converts each of the plurality of images so that the plane defined by the fixed points of each of the plurality of images coincides with the corresponding plane on the face reference position template.
  • the face reference position template is a template that defines the position of the face when the face is facing the front, and can be defined anatomically.
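  • The image conversion toward the template can be illustrated with a similarity transform estimated from the extracted fixed points; this sketch assumes at least three corresponding 2D points and uses OpenCV only as one possible tool, not as the disclosed implementation.

```python
import cv2
import numpy as np

def correct_to_template(image, fixed_pts, template_pts):
    """Warp one captured image so its fixed points coincide with the
    corresponding points of the face reference position template.

    fixed_pts, template_pts: (K, 2) arrays of corresponding 2D points
        (extracted fixed points and their template counterparts).
    Returns the converted (scaled / rotated / translated) image.
    """
    # estimate scaling + rotation + translation mapping the fixed points
    # onto the template's corresponding points
    matrix, _ = cv2.estimateAffinePartial2D(
        np.asarray(fixed_pts, dtype=np.float32),
        np.asarray(template_pts, dtype=np.float32),
    )
    if matrix is None:
        raise ValueError("transform could not be estimated from the given points")
    h, w = image.shape[:2]
    return cv2.warpAffine(image, matrix, (w, h))
```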
  • Alternatively, in step S802, the correction means 131 of the processor unit 130 can correct the coordinate system of the subject's face for each of the plurality of images by using a reference coordinate system trained model that has been processed to learn the reference coordinate systems of the faces of a plurality of subjects.
  • the reference coordinate system trained model is configured to output an image obtained by correcting the coordinate system of the subject's face in the input image to the reference coordinate system.
  • the reference coordinate system trained model can be constructed by, for example, supervised learning.
  • In supervised learning, for example, images of the subjects' faces can be used as input teacher data, and the reference coordinate system in each image can be used as output teacher data.
  • In this way, the trained model becomes able to recognize a reference coordinate system that the faces of the plurality of subjects are statistically presumed to have.
  • When an image of a subject's face is input to such a trained model, the difference between the coordinate system of the subject's face and the reference coordinate system is output.
  • By constructing the trained model so that the input image is converted (for example, by scaling, rotation, shearing, translation, etc.) until the difference between the coordinate system of the subject's face and the reference coordinate system becomes zero or less than a predetermined threshold value, the reference coordinate system trained model can be generated.
  • the image conversion process can be performed using, for example, affine transformation.
  • the reference coordinate system trained model can be constructed by, for example, unsupervised learning.
  • In unsupervised learning, for example, facial images of a plurality of subjects can be used as input teacher data.
  • In this way, the trained model becomes able to recognize the face coordinate system common to many of the images as a reference coordinate system that the faces of the plurality of subjects are statistically presumed to have.
  • When an image of a subject's face is input to such a trained model, the difference between the coordinate system of the subject's face and the reference coordinate system is output.
  • By constructing the trained model so that the input image is converted (for example, by scaling, rotation, shearing, translation, etc.) until the difference between the coordinate system of the subject's face and the reference coordinate system becomes zero or less than a predetermined threshold value, the reference coordinate system trained model can be generated.
  • the image conversion process can be performed using, for example, affine transformation.
  • In step S802, the correction means 131 of the processor unit 130 inputs the plurality of images acquired in step S801 into the reference coordinate system trained model, and can thereby obtain images in which the coordinate system of the subject's face has been corrected to the reference coordinate system.
  • Alternatively, in step S802, the correction means 131 of the processor unit 150 corrects the coordinate system of the subject's face using the subject's jaw movement face model.
  • step S8021 and step S8022 can be included before step S802.
  • In step S8021, the base face model generation means 151 of the correction means 131 of the processor unit 150 generates the base face model of the subject's face.
  • the base face model generation means 151 may generate the base face model from, for example, a face image of the subject acquired in advance (for example, a face image of the subject stored in the database unit 200).
  • the base face model may be generated from the image of the subject's face received from the terminal device 300 via the interface unit 110. If the base face model is generated in advance, step S8021 may be omitted.
  • In step S8022, the jaw movement face model generation means 152 of the correction means 131 of the processor unit 150 generates the subject's jaw movement face model by reflecting the subject's face in the plurality of images acquired in step S801 on the base face model.
  • the generated jaw movement face model is a 3D avatar that moves according to the jaw movement of the subject shown in the plurality of images acquired in step S801.
  • In step S802, the correction means 131 of the processor unit 150 corrects the coordinate system of the jaw movement face model generated in step S8022 based on the coordinate system of the base face model generated in step S8021.
  • the correction means 131 can convert the coordinate system of the jaw movement face model to the coordinate system of the base face model, for example, by performing an arbitrary coordinate conversion process.
  • the process 800 can include a process (step S805, step S806) for canceling the fixed point locus information that may be included in the motion point locus information.
  • In step S805, the second generation means of the correction means 131 of the processor unit 140 generates fixed point locus information indicating the locus of the fixed point by tracking the fixed point extracted in step S8020.
  • the fixed point locus information is information indicating the locus of a fixed point within a predetermined period.
  • the predetermined period can be, for example, all or a part of the period in which a plurality of consecutive images are taken.
  • the second generation means can generate the fixed point locus information by, for example, tracking the coordinates in the image of the fixed point in each of a plurality of consecutive images.
  • In step S806, the second correction means of the correction means 131 of the processor unit 140 corrects the motion point trajectory information based on the fixed point trajectory information.
  • the second correction means can correct the motion point locus information by subtracting the locus of the fixed point from the locus of the motion point.
  • the second correction means may correct the motion point locus information by tracking the corrected motion point obtained by subtracting the coordinates of the fixed point from the coordinates of the motion point.
  • The motion point locus information corrected in step S805 and step S806 can be regarded as one form of the correction performed in the correction step of step S802, and therefore step S805 and step S806 can be considered to be a part of step S802.
  • the motion point locus information corrected in this way does not include an error due to the movement of the fixed point, and can be more accurate information.
  • the motion point locus information is generated by tracking the motion points in the corrected coordinate system, but in the present invention, the timing of the correction is not limited to this.
  • For example, in step S803 performed before step S802, the motion points may be extracted from the plurality of images acquired in step S801; then, in step S804, the motion point trajectory information may be generated by tracking the motion points; and then, in step S802, the motion point locus information generated in step S804 may be corrected.
  • Alternatively, in step S803 performed before step S802, the motion points may be extracted from the plurality of images acquired in step S801; then, in step S802, the motion points extracted in step S803 may be corrected; and then, in step S804, the motion point locus information may be generated by tracking the corrected motion points.
  • The motion point locus information generated by the generation means 123 sometimes includes information indicating motion other than that of the lower jaw motion point (for example, noise due to movement of the subject's body during imaging (for example, movement of the upper jaw) or due to tilt), and sometimes does not.
  • The former is the case where the motion point trajectory information is corrected after the motion point trajectory information is generated.
  • The latter is the case where the coordinate system is corrected and the motion points are extracted in the corrected coordinate system before the motion point trajectory information is generated, or where the motion points are extracted and the extracted motion points are corrected before the motion point trajectory information is generated.
  • FIG. 9 is a flowchart showing another example (process 900) of processing by the computer system 100 for measuring the jaw movement of the subject.
  • Process 900 is a process for evaluating the jaw movement of the subject.
  • the process 900 is executed, for example, in the processor unit 160 in the computer system 100.
  • In step S901, the acquisition means 121 of the processor unit 160 acquires a plurality of continuous images of the subject's face during jaw movement.
  • Step S901 is the same process as step S801.
  • In step S902, the extraction means 122 of the processor unit 160 extracts at least the motion points in the lower jaw region of the face based on the plurality of images acquired in step S901.
  • the extraction means 122 can receive an input of where the motion point in the image is via the interface unit 110 (for example, from the terminal device 300), and can extract the motion point based on the input.
  • the extraction means 122 can automatically extract the motion point without receiving an input, for example.
  • the extraction means 122 can, for example, detect a plurality of feature portions of the face for each of the plurality of images and extract, as a motion point, a portion whose coordinate change within a predetermined period falls within a predetermined range among the detected feature portions.
  • the predetermined period can be, for example, all or a part of the period in which a plurality of consecutive images are taken.
  • the predetermined range can be, for example, about 5 mm to about 20 mm.
  • In step S903, the generation means 123 of the processor unit 160 generates at least motion point locus information indicating the locus of the motion point by tracking the motion point extracted in step S902.
  • the motion point locus information is information indicating the trajectory of the motion point within a predetermined period.
  • the predetermined period can be, for example, all or a part of the period in which a plurality of consecutive images are taken.
  • In step S904, the evaluation means 161 of the processor unit 160 generates jaw movement evaluation information indicating the evaluation of the subject's jaw movement based at least on the motion point trajectory information generated in step S903.
  • the evaluation means 161 can generate jaw motion evaluation information by using the motion point trajectory learned model that has been processed to learn the motion point trajectory information of a plurality of subjects.
  • the motion point trajectory trained model is configured to correlate the input motion point trajectory information with the evaluation of jaw motion. For example, when the motion point trajectory information generated in step S903 is input to the motion point trajectory learned model, the estimated evaluation of the jaw motion of the subject can be output.
  • The motion point trajectory information generated in step S903 may include noise due to movement or inclination of the subject's body during imaging, but the motion point trajectory trained model has been trained on motion point trajectory information that includes such noise.
  • Therefore, the jaw movement evaluation information can be generated accurately regardless of the noise caused by the movement or inclination of the subject's body during imaging that may be included in the motion point locus information.
  • the motion point locus trained model may be updated by performing the same process as the learning process described later.
  • FIG. 10 is a flowchart showing another example (process 1000) of processing by the computer system 100 for measuring the jaw movement of the subject.
  • the process 1000 is a process for constructing a motion point locus learned model used for measuring the jaw motion of the subject.
  • the process 1000 is executed, for example, in the processor unit 160 in the computer system 100.
  • In step S1001, the acquisition means 121 of the processor unit 160 acquires at least motion point trajectory information indicating the trajectories of the motion points obtained by tracking the motion points of a plurality of subjects.
  • the acquisition means 121 can acquire, for example, the motion point locus information stored in the database unit 200 via the interface unit 110.
  • The motion point trajectory information may be, for example, motion point trajectory information acquired by using the computer system 100 of the present invention, or motion point trajectory information obtained from any known jaw motion measuring device.
  • the acquisition means 121 may further acquire the evaluation of the jaw movements of a plurality of subjects.
  • the acquisition means 121 can acquire, for example, the evaluation of the jaw movement stored in the database unit 200 via the interface unit 110.
  • In step S1002, the processor unit 160 constructs a motion point trajectory trained model by a learning process using at least the motion point trajectory information of the plurality of subjects acquired in step S1001 as input teacher data.
  • the motion point locus trained model can be, for example, a neural network model.
  • When the motion point locus trained model is a neural network model, in step S1002 the weighting coefficient of each node in the hidden layer of the neural network model is calculated by the learning process using the data acquired in step S1001.
  • the learning process is, for example, supervised learning.
  • In supervised learning, for example, the motion point trajectory information is used as input teacher data and the evaluation of the corresponding jaw motion is used as output teacher data; by calculating the weighting coefficient of each node of the hidden layer of the neural network model 1610 using the information of the plurality of subjects, a trained model capable of correlating motion point trajectory information with the evaluation of jaw motion can be constructed.
  • the learning process is, for example, unsupervised learning.
  • In unsupervised learning, for example, for a plurality of subjects, the plurality of outputs obtained when the motion point trajectory information is used as input teacher data are classified.
  • Classification can be performed using any known method, and by characterizing each classified output by the evaluation of jaw movement, a trained model capable of correlating the motion point trajectory information with the evaluation of jaw movement can be constructed.
  • Classification is performed by, for example, clustering.
  • the output is divided into a plurality of clusters by clustering a plurality of outputs.
  • each cluster is characterized based on the assessment of jaw movements of the subject to which it belongs.
  • a learned model capable of correlating the motion point trajectory information with the evaluation of jaw motion is constructed.
  • Clustering can be performed using, for example, any known technique.
  • In the examples described above, the processing of each step shown in FIGS. 8 to 10 has been described as being realized by the processor unit 120, the processor unit 130, the processor unit 140, the processor unit 150, or the processor unit 160 together with the memory unit 170; however, the present invention is not limited to this.
  • At least one of the processes of each step shown in FIGS. 8 to 10 may be realized by a hardware configuration such as a control circuit.
  • In the examples described above, the case where the computer system 100 is a server device connected to the terminal device 300 via the network 400 has been described as an example, but the present invention is not limited to this.
  • the computer system 100 can be any information processing device including a processor unit.
  • the computer system 100 can be a terminal device 300.
  • the computer system 100 may be a combination of the terminal device 300 and the server device.
  • the computer system 100 has been described as an implementation example of the present invention, but the present invention can also be implemented as, for example, a system including the computer system 100.
  • the system includes, for example, the computer system 100 described above and a gauge point configured to be placed in the mandibular region of the subject.
  • the gauge points may be configured to represent specific points on the gauge points, as described above.
  • the system may further include fixed point gauges configured to be placed in the subject's upper face area.
  • the fixed point gauge may also be configured to represent a particular point on the fixed point, as described above.
  • the present invention is useful as providing a system or the like capable of easily measuring the jaw movement of a subject.
  • 100 Computer system, 110 Interface unit, 120, 130, 140, 150, 160 Processor unit, 170 Memory unit, 200 Database unit, 300 Terminal device, 400 Network

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Dentistry (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Epidemiology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention relates to a system for measuring jaw movement, the system being configured to: acquire a plurality of consecutive images of the face of a subject during jaw movement (step S801); correct at least the coordinate system of the subject's face (step S802); extract at least a motion point in a lower jaw region of the face (step S803); and generate at least motion point trajectory information indicating the trajectory of the motion point by tracking the motion point (step S804).
PCT/JP2020/041567 2019-11-08 2020-11-06 Système, programme et procédé de mesure du mouvement maxillaire d'un sujet WO2021090921A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021515056A JP7037159B2 (ja) 2019-11-08 2020-11-06 被験者の顎運動を測定するためのシステム、プログラム、および方法
JP2022006480A JP2022074153A (ja) 2019-11-08 2022-01-19 被験者の顎運動を測定するためのシステム、プログラム、および方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-203439 2019-11-08
JP2019203439 2019-11-08

Publications (1)

Publication Number Publication Date
WO2021090921A1 true WO2021090921A1 (fr) 2021-05-14

Family

ID=75849068

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/041567 WO2021090921A1 (fr) 2019-11-08 2020-11-06 Système, programme et procédé de mesure du mouvement maxillaire d'un sujet

Country Status (2)

Country Link
JP (2) JP7037159B2 (fr)
WO (1) WO2021090921A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102388337B1 (ko) * 2022-01-25 2022-04-20 (주)힐링사운드 턱관절 질환 개선 서비스용 어플리케이션의 서비스 제공방법
CN115187457A (zh) * 2022-06-17 2022-10-14 先临三维科技股份有限公司 模型拼接方法、运动轨迹追踪方法、装置、设备及介质
WO2024110825A1 (fr) * 2022-11-21 2024-05-30 Olive Khurana Dispositif et système de surveillance d'habitude de mastication, et procédé associé

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001112743A (ja) * 1999-10-18 2001-04-24 Rikogaku Shinkokai 三次元顎運動表示装置、方法及び三次元顎運動表示プログラムを記憶した記憶媒体
JP2005349176A (ja) * 2004-05-14 2005-12-22 Rise Corp 顎運動解析方法及び顎運動解析システム
JP2007058846A (ja) * 2005-07-27 2007-03-08 Advanced Telecommunication Research Institute International リップシンクアニメーション作成用の統計確率モデル作成装置、パラメータ系列合成装置、リップシンクアニメーション作成システム、及びコンピュータプログラム
JP2010142285A (ja) * 2008-12-16 2010-07-01 Yoshida Dental Mfg Co Ltd 下顎前歯部運動追尾システム、下顎前歯部運動追尾装置および顎関節雑音分析装置
JP2010273756A (ja) * 2009-05-27 2010-12-09 Yoshida Dental Mfg Co Ltd 疼痛検出器を備える顎関節症診断支援システムおよび装置
JP2011138388A (ja) * 2009-12-28 2011-07-14 Canon Inc データ補正装置及び方法
US20120107763A1 (en) * 2010-10-15 2012-05-03 Adams Bruce W System , method and article for measuring and reporting craniomandibular biomechanical functions
JP2012516719A (ja) * 2009-02-02 2012-07-26 ジョイントヴュー・エルエルシー 非侵襲性診断システム
WO2014106519A1 (fr) * 2013-01-02 2014-07-10 Tionis E.K., Inhaber Dr. Stefan Spiering Dispositif de mesure et procédé permettant la détection optoélectronique du mouvement relatif entre la mâchoire supérieure et la mâchoire inférieure d'une personne
US20180336736A1 (en) * 2015-02-23 2018-11-22 Osstemimplant Co., Ltd. Method for simulating mandibular movement, device for same and recording medium for recording same

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5300795B2 (ja) 2010-06-28 2013-09-25 日本電信電話株式会社 顔表情増幅装置、表情認識装置、顔表情増幅方法、表情認識方法、及びプログラム
JP2019079204A (ja) 2017-10-23 2019-05-23 佐藤 良治 情報入出力制御システムおよび方法

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001112743A (ja) * 1999-10-18 2001-04-24 Rikogaku Shinkokai 三次元顎運動表示装置、方法及び三次元顎運動表示プログラムを記憶した記憶媒体
JP2005349176A (ja) * 2004-05-14 2005-12-22 Rise Corp 顎運動解析方法及び顎運動解析システム
JP2007058846A (ja) * 2005-07-27 2007-03-08 Advanced Telecommunication Research Institute International リップシンクアニメーション作成用の統計確率モデル作成装置、パラメータ系列合成装置、リップシンクアニメーション作成システム、及びコンピュータプログラム
JP2010142285A (ja) * 2008-12-16 2010-07-01 Yoshida Dental Mfg Co Ltd 下顎前歯部運動追尾システム、下顎前歯部運動追尾装置および顎関節雑音分析装置
JP2012516719A (ja) * 2009-02-02 2012-07-26 ジョイントヴュー・エルエルシー 非侵襲性診断システム
JP2010273756A (ja) * 2009-05-27 2010-12-09 Yoshida Dental Mfg Co Ltd 疼痛検出器を備える顎関節症診断支援システムおよび装置
JP2011138388A (ja) * 2009-12-28 2011-07-14 Canon Inc データ補正装置及び方法
US20120107763A1 (en) * 2010-10-15 2012-05-03 Adams Bruce W System , method and article for measuring and reporting craniomandibular biomechanical functions
WO2014106519A1 (fr) * 2013-01-02 2014-07-10 Tionis E.K., Inhaber Dr. Stefan Spiering Dispositif de mesure et procédé permettant la détection optoélectronique du mouvement relatif entre la mâchoire supérieure et la mâchoire inférieure d'une personne
US20180336736A1 (en) * 2015-02-23 2018-11-22 Osstemimplant Co., Ltd. Method for simulating mandibular movement, device for same and recording medium for recording same

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102388337B1 (ko) * 2022-01-25 2022-04-20 (주)힐링사운드 턱관절 질환 개선 서비스용 어플리케이션의 서비스 제공방법
CN115187457A (zh) * 2022-06-17 2022-10-14 先临三维科技股份有限公司 模型拼接方法、运动轨迹追踪方法、装置、设备及介质
WO2024110825A1 (fr) * 2022-11-21 2024-05-30 Olive Khurana Dispositif et système de surveillance d'habitude de mastication, et procédé associé

Also Published As

Publication number Publication date
JPWO2021090921A1 (ja) 2021-11-25
JP2022074153A (ja) 2022-05-17
JP7037159B2 (ja) 2022-03-16

Similar Documents

Publication Publication Date Title
US12079944B2 (en) System for viewing of dental treatment outcomes
WO2021090921A1 (fr) Système, programme et procédé de mesure du mouvement maxillaire d'un sujet
CN107485844B (zh) 一种肢体康复训练方法、系统及嵌入式设备
Hutton et al. Dense surface point distribution models of the human face
JP7200439B1 (ja) アバター表示装置、アバター生成装置及びプログラム
JP6124308B2 (ja) 動作評価装置及びそのプログラム
JP2018538593A (ja) 表情検出機能を備えたヘッドマウントディスプレイ
US20150320343A1 (en) Motion information processing apparatus and method
CN113728363B (zh) 基于目标函数生成牙科模型的方法
CN110584775A (zh) 气道模型生成系统及插管辅助系统
CN112232128B (zh) 基于视线追踪的老年残障人士照护需求识别方法
JP2018007792A (ja) 表情認知診断支援装置
KR20170125264A (ko) 치아 움직임 추적 장치 및 그 방법
WO2023189309A1 (fr) Programme informatique, procédé et dispositif de traitement d'informations
KR102247481B1 (ko) 나이 변환된 얼굴을 갖는 직업영상 생성 장치 및 방법
Jolly et al. Posture Correction and Detection using 3-D Image Classification
TWI644285B (zh) Acupuncture visualization Chinese medicine system and method thereof by using AR technology
JP4823298B2 (ja) 三次元形状復元方法とその装置及びプログラム
JP2021099666A (ja) 学習モデルの生成方法
WO2023203385A1 (fr) Systèmes, procédés et dispositifs d'analyse statique et dynamique faciale et orale
US11992731B2 (en) AI motion based smart hometraining platform
Ridwan et al. Synthesizing Dynamic Facial Expressions of the Patient Model with Projection Mapping for the Nurse Training Simulator
WO2023062762A1 (fr) Programme d'estimation, procédé d'estimation et dispositif de traitement d'informations
JP7405809B2 (ja) 推定装置、推定方法、および推定プログラム
JP2019139608A (ja) 画像生成装置及び画像生成プログラム

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021515056

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20883734

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20883734

Country of ref document: EP

Kind code of ref document: A1