WO2020201623A1 - Determining tooth color shade based on image obtained with a mobile device - Google Patents

Determining tooth color shade based on image obtained with a mobile device

Info

Publication number
WO2020201623A1
WO2020201623A1 (PCT/FI2020/050203)
Authority
WO
WIPO (PCT)
Prior art keywords
image
tooth
color
training
embeddings
Prior art date
Application number
PCT/FI2020/050203
Other languages
French (fr)
Inventor
Tomi HOTAKAINEN
Jukka KUOSMANEN
Ville Sarja
Karthik Balu
Original Assignee
Lumi Dental Oy
Priority date
Filing date
Publication date
Application filed by Lumi Dental Oy filed Critical Lumi Dental Oy
Priority to EP20717242.0A priority Critical patent/EP3948786A1/en
Publication of WO2020201623A1 publication Critical patent/WO2020201623A1/en


Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C - DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C13/00 - Dental prostheses; Making same
    • A61C13/08 - Artificial teeth; Making same
    • A61C13/082 - Cosmetic aspects, e.g. inlays; Determination of the colour
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C - DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00 - Dental auxiliary appliances
    • A61C19/04 - Measuring instruments specially adapted for dentistry
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/46 - Measurement of colour; Colour measuring devices, e.g. colorimeters
    • G01J3/463 - Colour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01J - MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/46 - Measurement of colour; Colour measuring devices, e.g. colorimeters
    • G01J3/50 - Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors
    • G01J3/508 - Measurement of colour; Colour measuring devices, e.g. colorimeters using electric radiation detectors measuring the colour of teeth
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30036 - Dental; Teeth

Definitions

  • the present invention relates to a method, a system and a computer program product related to determining color shade of a tooth of a subject. More particularly, the invention relates to a method, a system and a computer program product related to determining color shade of a tooth in order to enable manufacturing of artificial teeth, an artificial tooth or a dental crown (corona artificialis) in correct color and/or in order to enable determining change in tooth color and/or in order to provide indication that the tooth may suffer from an abnormality.
  • Natural teeth are formed by layers of materials having different optical characteristics.
  • the enamel on the outer part of the tooth greatly affects the color shade of the tooth, but the dentine under the enamel may also affect the color shade, especially if the enamel is particularly thin and/or translucent. The thickness of the enamel varies. This structural complexity of the teeth makes color shade definition a challenging task.
  • Description of the related art
  • Imaging-based solutions for tooth color shade detection recognize the problem that lighting conditions may significantly affect the colors appearing in the image.
  • Two main types of solutions are known in the art for calibrating the color shades: lighting conditions are standardized, for example using a lighting device that shuts out any ambient light that would affect the acquired image, and/or one or more standard reference colors or color shades are included in the same image with the subject tooth or teeth. These calibration methods may be used either separately or combined.
  • International patent application WO 2017/166326 A1 discloses a method and device for realizing color comparison of an artificial tooth using an image.
  • In that solution, a standardized color comparison environment having a standard light source is provided. The image as such is sent to an artificial tooth production center, where a technician compares the image, shown on a color-corrected monitor, to a selection of colors placed under a like standard light source.
  • An object is to provide a method and apparatus so as to solve the problem of determining tooth color shade.
  • the objects of the present invention are achieved with a method performed in a server according to claim 1 and with a method performed in a mobile communication device according to claim 11.
  • the objects of the present invention are further achieved with a computer program product according to claim 15, a data-processing system comprising means for carrying out the method steps, a data-processing apparatus according to claim 17 and a mobile communication device according to claim 18.
  • a computer-implemented method of defining color shade of a tooth using a camera of a mobile communication device comprises receiving an image of a tooth of a subject, wherein the received image comprises a part of an image acquired with a plain camera of the mobile communication device, and obtaining an indication of lighting conditions of the received image.
  • the method also comprises selecting applicable training images from a training database, wherein each training image comprises an image of a model tooth with known color and applying K-means clustering to the received image to obtain color code embeddings of the received image.
  • the method further comprises adjusting the color code embeddings of the received image based on the indication of lighting conditions, comparing the obtained color code embeddings to all color code embeddings of the selected training images to find the training image with color code embeddings having lowest distance to received image's color code embeddings and defining the color shade of the tooth in the received image to be equal to the color shade of a model tooth shown in the training image having the lowest distance of color code embeddings to those of the received image.
  • the method also comprises communicating tooth color information indicative of the defined tooth color shade back to the mobile communication device from which the image was received.
  • area of the received image is divided into a matrix with a plurality of cells, each cell representing one part of the tooth, and wherein color code embeddings are defined for each cell of the matrix.
  • the training database is one of a private training database and a global training database.
  • the received image is associated with a label indicating lighting conditions in which the received image was acquired.
  • the adjusting the color code embeddings comprises deducting from the obtained color embeddings of the received image a difference between color code embeddings of a global reference image and a calibration image acquired with the mobile communication device in lighting conditions that approximately correspond to the lighting conditions in the received image.
  • the applicable training images comprise a plurality of training images each associated with a label that indicates that the respective training image has been acquired in approximately similar lighting conditions with the received image and the calibration image and a label indicating actual color shade of a model tooth shown in the respective training image.
  • the adjusting the color code embeddings comprises calculating a magnitude difference between color code embeddings of the received image and color code embeddings of a calibration image acquired with the mobile communication device in approximately similar lighting conditions.
  • applicable training images comprise a plurality of training images associated with magnitude difference information, and the training image having the lowest distance is defined by comparing the magnitude difference of the received image and magnitude differences associated with the applicable training images.
  • the method further comprises exporting at least part of the tooth color information to another application or data processing system.
  • the method further comprises at least one of: providing the defined tooth color information to be used as basis for manufacturing an artificial tooth or artificial teeth that have the defined color shade, comparing the defined color shade to a color shade of the same tooth of the same subject obtained previously for determining change of color shade of the tooth, and providing an indication that the defined tooth color information indicates an abnormality in the tooth.
  • a data-processing apparatus comprising means for carrying out the method according to any of the above aspects.
  • a computer-implemented method of defining color shade of a tooth using a camera of a mobile communication device comprises acquiring an image of teeth of a subject with a plain camera of the mobile communication device, receiving, via the user interface of the mobile communication device, determination of an area in the acquired image that comprises one tooth, pre-processing the acquired image to produce an image of the one tooth for uploading, and associating a label with the image that indicates lighting conditions in which the image was acquired.
  • the method further comprises uploading the image of the tooth to a server for obtaining indication of lighting conditions of the received image on basis of the associated label, for selecting applicable training images from a training database, wherein each training image comprises an image of a model tooth with known color, for applying K-means clustering to the received image to obtain color code embeddings of the uploaded image, for comparing the obtained color code embeddings to all color code embeddings of the selected training images to find the training image with lowest distance to uploaded image's color code embeddings, and for defining the color shade of tooth shown in the uploaded image to be equal to the color shade of a model tooth shown in the training image having the lowest distance of color code embeddings to those of the uploaded image.
  • the method also comprises receiving by the mobile communication device tooth color information indicative of the defined color shade of the tooth of the subject comprised in the uploaded image.
  • area of the uploaded image is divided into a matrix with a plurality of cells, each cell representing one part of the tooth, and wherein color code embeddings are defined for each cell of the matrix.
  • the training database is one of a private training database and a global training database.
  • the method further comprises associating, before uploading, the uploaded image with a label indicating lighting conditions in which the image was acquired.
  • the method further comprises at least one of: providing the defined tooth color information to be used as basis for manufacturing an artificial tooth or artificial teeth that have the defined color shade, indicating a result of comparison of the defined color shade to a color shade of the same tooth of the same subject obtained previously for determining change of color shade of the tooth, and providing an indication that the defined tooth color information indicates an abnormality in the tooth.
  • a mobile communication device comprising means for carrying out the method according to any one of the eleventh to fifteenth method aspects.
  • a computer program product having instructions which when executed cause a computing device or system to perform a method according to any one of the above method aspects.
  • a data-processing system comprising means for carrying out the method according to any of the above method aspects.
  • the present invention is based on the idea of acquiring an image of teeth with a standard camera provided in a mobile communication device and using artificial intelligence, obtained with a combination of machine learning methods, for teaching and enabling the system to recognize the exact colour shade of a tooth of previously unknown colour shown in the image.
  • the determined colour shade may then be utilized for manufacturing artificial teeth or tooth crown.
  • the present invention has the advantage that the dentist can use a simple, plain mobile communication device camera, for example a mobile phone or tablet computer camera, to acquire a digital image of the teeth of a patient, while the system enables accurately determining color shade of the tooth on basis of the acquired digital image with high reliability without requiring exact calibration of parameters that have an effect on the colors appearing in the acquired digital image.
  • Figure 1 is a schematic representation of a system.
  • Figures 2a to 2d illustrate pre-processing of the images.
  • Figure 3 illustrates an exemplary process of determining a tooth shade as seen by a user.
  • Figure 4 illustrates an exemplary process of determining tooth shade as seen by the system.
  • Figure 5 illustrates a process of handling a training image.
  • Figure 6 illustrates a process of defining tooth color shade.
  • Figure 1 illustrates a system according to the invention.
  • This exemplary system serves three different users, for example dentists, each having their own mobile devices (100a, 100b, 100c), each equipped with a camera and with mobile data connectivity. Any number of users may use the actual system.
  • Each mobile device (100a, 100b, 100c) is capable of connecting to a server (110) over a data connection.
  • the server (110) carries the responsibility for the intelligence in the system.
  • the server comprises or is associated with at least one image storage means (120, 125a, 125b, 125c, 130, 140).
  • the system utilizes at least three types of images, namely training images, calibration images and production images.
  • the term 'training image' refers to an image that is used for training an artificial intelligence mapping function to map an image representing a tooth or teeth of a patient to a particular tooth shade.
  • the term 'calibration image' refers to an image that is used for calibrating lighting conditions. Lighting conditions refer to normal ambient lighting conditions at the venue, without any specially designed apparatus attached or associated with the mobile phone or its camera for standardizing the lighting conditions.
  • the term 'production image' refers to an image representing teeth/tooth of an actual patient, the color of which is to be determined.
  • each of the training image, calibration image and production image comprises an image of a single tooth, covering at least a majority of the visible area of that tooth.
  • Training images comprise one or more collections of images used for training the tooth color shade determining system.
  • the term training database refers to a collection of labeled training images and the training data associated with the respective training images. Training data may be associated with the training images as labels. Training databases may be logically divided into private training databases (125a, 125b, 125c) and global training databases (120).
  • labels associated with each training image preferably comprise an identifier of the user, one or more identifiers of the training environment and attributes of the tooth color shade.
  • the identifier of the user may be for example a username.
  • Identifiers of the training environment may comprise time at which the image was taken, name of the venue as given by the user, GPS coordinates, light source, model of the camera, phone model and calibration image name.
  • Light source may for example define name and/or model of the light source in the venue as given by the user.
  • Attributes of the tooth color shade include the color shade to which the image is/was compared (for example A1) and the color code embeddings. If the image was taken from a standard tooth shade guide, the attributes of the tooth color shade include the known standard shade. If the training image is taken from a real tooth, there is a non-zero likelihood that there is an error in the training data.
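As an illustration only, such a label set could be represented as a simple record; all field names and values below are hypothetical, chosen to mirror the attributes listed above rather than taken from the patent.

```python
# Hypothetical label record for one training image in a private training
# database; field names are illustrative, not defined by the patent.
training_image_label = {
    "user_id": "U1",                        # identifier of the user
    "venue": "Central Office",              # venue name as given by the user
    "timestamp": "2020-03-26T10:15:00",     # time at which the image was taken
    "gps": (60.1699, 24.9384),              # GPS coordinates of the venue
    "light_source": "ceiling panel",        # light source as given by the user
    "camera_model": "example phone camera", # model of the camera / phone
    "calibration_image": "Central Office",  # name of the calibration image
    "shade": "A1",                          # known standard shade, if any
    "embeddings": None,                     # color code embeddings, filled later
}
```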
  • the global training database preferably comprises the same data as a private training database, except for the user identification data and the user-entered name of the venue.
  • Training images in a private training database may comprise identification of the user, but training images in a global training database (120) should not comprise any user identification for ensuring user privacy.
  • Images, including training images, calibration images and production images are preferably labeled differently according to their intended use, origin and other information associated with the images for example by the user, and processing, storage and use of the images by the server (110) is based on the labeling.
  • Labeling enables flexible use of images. For example, images from a private training database may be included in the global training database by relabeling. However, to maintain full control of user privacy as well as of the quality and content of the global training database (120), such inclusion is preferably only performed by operators or supervisors of the system upon consent of the respective user.
  • the term label refers to any type of additional data associated with images. Labels may also be referred to as metadata.
  • a training database (120, 125a, 125b, 125c) may be used for color shade determination of production images acquired using the camera of any one of the mobile devices (100a, 100b, 100c).
  • each user can build their own private training database (125a, 125b, 125c) accessible only by the respective user using his/her mobile device (100a, 100b, 100c) or by other internet-capable device that is capable of authenticating the user.
  • the private or global training databases comprise data associated with each of a plurality of training images that are used to teach the system to correctly recognize color shade of a tooth.
  • the training databases (120, 125a, 125b, 125c) and the storage means (130, 140) for calibration and production images may reside in the same or in different physical storage devices, as known in the art.
  • the server (110) may be a single physical server or a plurality of physical servers, or the server (110) may be implemented in a virtual server or in a cluster of virtual servers each providing service to one or more users (100a, 100b, 100c) as known in the art.
  • the training images stored in the private or global training databases are used for training an artificial intelligence mapping function to map an image representing a tooth or teeth of a patient to a particular tooth shade.
  • a plurality of images (200) of teeth of the patient is acquired by the user with the normal, plain camera of the mobile device.
  • Each acquired image is pre-processed at the user device before the training image is sent to the server.
  • the pre-processing will be described in more detail later on.
  • images used for a particular private training database are always acquired using the same mobile device of the user, which device comprises a camera and a tooth color shade application for processing and labeling the images.
  • the tooth color shade application running at the server may be referred to as 'the server application'.
  • the user may choose to build several private training databases for different lighting conditions. This may be the case if the user works and takes pictures in different venues, or if amount of ambient light in a particular venue varies greatly depending on for example time of day or time of year.
  • Figures 2a to 2d illustrate examples of pre-processing of acquired images. The same process is in principle applicable to all types of images used in this system, in other words for training images, calibration images and production images.
  • the acquired image shows mouth and teeth of a subject.
  • the image may show a model tooth on arbitrary background.
  • After acquiring the image with the camera of the mobile device, the user preferably indicates an area showing a single tooth (210) in the acquired image (200).
  • the user indicates the selection of the single tooth (210) by cropping the image (200), as illustrated with the grey shading in figure 2b, so that only the single, selected tooth (210) is shown in the cropped image (200').
  • the user may be enabled to select a part (200") of the image as illustrated in the figures 2c and 2d, and the user application may automatically crop the image by removing image data outside the area selected by the user.
  • information on the selected area may be included in the acquired image, which is uploaded in its entirety to the server, wherein only the selected area will be processed for tooth color determining.
  • the term uploaded image refers to the cropped image (200') or the selected part of the image (200") that is uploaded to the server for processing.
  • the user application is preferably operable for performing the cropping of the image or selecting the area in the image for automatic cropping.
  • the user could use other image processing software in the user device for cropping the image and only then associate the cropped image with the user application.
  • the uploaded image (200', 200") showing the selected tooth (210) may be divided into a matrix (220) having a plurality of cells, and color shade is defined for each of these cells.
  • the cells of the matrix (220) have approximately equal areas.
  • Defining the matrix (220) is preferably performed at the server, by the server application, but the matrix (220) may also be defined by the user application. Dividing the area of the uploaded image (200', 200") into a plurality of cells enables taking into account variation of the tooth color shade between different parts of the selected tooth (210).
  • According to one exemplary embodiment, a 3*3 matrix may be used as illustrated in figure 2b, and according to another exemplary embodiment, a 3*4 matrix as illustrated in figures 2c and 2d.
  • a smaller or larger matrix (220) may be used, for example 2*2, 2*3, 4*4, 4*5, 4*6, 5*5, or 5*6 matrix. Decreasing the size of the matrix reduces the accuracy of color shade determining in different parts of the tooth, but tests have indicated that sufficiently accurate color shade determining of different parts of the tooth can be achieved for example with a 3*3 matrix.
  • the cropped image or the selected area of the image is preferably rectangular, as is common in the art, but it can also be free form as illustrated in figure 2d to facilitate inclusion of a majority of the area of the selected tooth (210) in the uploaded image (200', 200") for tooth color shade determining processing, without including any noise caused by unwanted objects in the part of the acquired image to be analyzed, such as neighboring teeth, gum, tongue or lips. If a free form selection is used, the cells of the matrix may have mutually different shapes and sizes, especially in the outer edge cells of the matrix.
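A minimal sketch of the cell division, assuming the cropped tooth image arrives as a NumPy RGB array; the function name and the equal-spacing strategy are illustrative assumptions, since the patent only requires approximately equal cell areas.

```python
import numpy as np

def split_into_cells(image: np.ndarray, rows: int = 3, cols: int = 3):
    """Divide a cropped tooth image (H x W x 3) into a rows x cols matrix of
    approximately equal-area cells, e.g. the 3*3 matrix of figure 2b."""
    h, w = image.shape[:2]
    row_edges = np.linspace(0, h, rows + 1, dtype=int)
    col_edges = np.linspace(0, w, cols + 1, dtype=int)
    return [
        [image[row_edges[r]:row_edges[r + 1], col_edges[c]:col_edges[c + 1]]
         for c in range(cols)]
        for r in range(rows)
    ]
```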
  • Figure 3 illustrates an exemplary high-level process of determining a tooth shade as seen by a user using the user application.
  • a user interface towards the system is preferably provided by the user application running on a processor of a mobile device used by the user.
  • After installing the user application on the mobile device and signing up as a user of the user application, the user first acquires a calibration image using the camera of the mobile device in step 301.
  • a unique identification is generated for the user.
  • In this example, the user is identified with identification 'U1'.
  • the calibration image is to be acquired in the normal working lighting conditions at the user's premises, for example at a dentist's reception. Normal working lighting conditions thus refer to normal ambient lighting conditions at the venue, without any specially designed apparatus attached or associated with the mobile phone or its camera for standardizing the lighting conditions. If the user works in more than one venue at different times, a separate calibration image is preferably acquired for each of the venues, since the lighting conditions are likely to vary significantly between them.
  • the user application allows the user to tag each calibration image according to the venue it was taken in. Further, geographical location of the calibration image may be associated with the calibration image and used as identification of the venue.
  • the venues may be named in any manner. For example, a simple numbering or alphabetic naming may be used. Preferably the user may name the venue freely, reflecting the actual name of the venue, such as "Discovery Bay Office", "Central Office" and so on. This name is shown to the user in his/her user application. This way it becomes easier for the user to select the correct venue each time he/she uses the user application subsequently.
  • the calibration image naming functionality of the user application may be utilized for naming calibration images taken at different times in the same venue.
  • the calibration image thus acquired is used for an initial lighting calibration.
  • the calibration image is preferably an image of a tooth sample with a predefined color, for example the A1 color shade as provided in a standard shade guide, such as the Vita Classic Shade Guide used since the 1960s, using the light source in the dentist's office.
  • the calibration image is cropped, or the area of the sample tooth is selected, as explained above in relation to figures 2a to 2d.
  • the calibration image is uploaded to the server (110) and stored as a calibration image at the memory device (130) at or associated with the server.
  • the stored calibration image may be labeled as a calibration image for this particular user (U1), together with the user-given name for the calibration image.
  • the calibration image is a reference that is used for determining approximate lighting conditions (L1).
  • the result, the color code embeddings of the calibration image, is compared to the color code embeddings of a global reference image. Since both the calibration image and the global reference image represent a sample tooth of known color, preferably the A1 color, a difference e can be calculated between the color code embeddings of the calibration image and the embeddings of the global reference image.
  • Calculation of the difference e can be expressed with a mathematical representation.
  • Let the lighting conditions in the calibration image be L1, and let the model tooth shown in the calibration image have shade A1.
  • the color code embeddings of the calibration image are then associated with parameters (L1+A1).
  • the color code embeddings of the corresponding global reference image are associated with parameters Global(L+A1).
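The patent text does not write the formula out; reading the two preceding paragraphs together, a plausible reconstruction of the difference and of the subsequent adjustment of a later image's embeddings is:

```latex
% Reconstructed formulas (an assumption, not quoted from the patent):
% E(.) denotes the color code embeddings of an image; L1 is the local
% lighting condition, L the global reference lighting, A1 the known shade.
e = E_{\mathrm{cal}}(L1{+}A1) - E_{\mathrm{global}}(L{+}A1)
\qquad
E_{\mathrm{adjusted}} = E_{\mathrm{image}} - e
```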
  • Calibration image is used to enable compensation of differences in lighting conditions, thereby indirectly cancelling differences in factors affecting the colors seen in the image, including but not restricted to ambient light, camera clarity and indoor air quality problems.
  • the color shade determining system is capable of adapting the color shade determining such that lighting conditions in production images do not have to be exactly the same as in the calibration image.
  • the user is provided with two alternatives in the step 302.
  • The first alternative is that the user builds his own private training database; here, training database refers to a processed training database that is operable to define tooth color shades.
  • The second alternative is that the user enables use of the global training database, which is provided by the system provider, ready to be used by any user.
  • If the user selects use of the global training database, use of the private training database is no longer enabled; the system will subsequently only use the global training database provided by the system provider, and the user device is auto-calibrated using the above explained calibration technique.
  • If the user initially selects use of a private training database, he retains the option to change to use of the global training database, as illustrated with the arrow back to the selection step 302. Such re-selection may occur at any step after first selecting use of the private training database.
  • the images recorded in the training databases include no patient identification information.
  • the images include a label indicating the user identity (U1). Recorded images represent a tooth or a part of a tooth, from which neither the user nor the patient can be identified.
  • No mix of private and global training databases is allowed without the user's permission. Only administrators of the system provider company can access the global training database, using their administration permission login to the system.
  • inclusion of a private training database into the global training database may be enabled. However, such inclusion can only be performed by administrators of the system in response to acceptance of such inclusion by the respective user, and any user identification (U1) of the included private training database is removed in this process.
  • A global training database should never include any identification of the users from whom such training database originates, to protect the users' privacy.
  • When a production image, in other words an image acquired by the user using his mobile device and representing a patient's tooth of unknown color, is obtained, processed at the user device and uploaded to the server, the server application performs color shade determination on basis of intelligent comparison of the production image with the private training database.
  • Correspondingly, a production image, in other words an image acquired by the user using his mobile device and comprising teeth of unknown color, is obtained, processed at the user device and uploaded to the server, which performs color shade determination on basis of intelligent comparison of the production image with the global training database.
  • the acquired image may be cropped in the user device or the appropriate area of the image that includes the selected tooth is marked on the image before uploading the image to the server for processing.
  • the matrix dividing the selected tooth into a plurality of areas to be processed may be defined at the user device or at the server.
  • the image to be uploaded may be appropriately labeled by the user device and/or by the server. If a user identity is associated with the uploaded image, it can be stored as a label associated with the image. Also, a label indicating lighting conditions may be associated with the uploaded image.
  • the user preferably selects, among the venue and/or lighting condition names associated with and thus identifying his/her calibration images, the one lighting condition that best matches the current lighting conditions, and the respective lighting condition label (L1) is associated with the uploaded training images and production images using the user application.
  • Steps 305P and 305G are mutually similar: in response to uploading the production image to the server, the user receives information that indicates the automatically defined tooth color shade.
  • the user may repeat steps 304G and 305G or steps 304P and 305P for a plurality of production images.
  • the user may also choose, at any point, to obtain another calibration image. This may occur for example when he/she uses the user application at a new venue for the first time.
  • Figure 4 illustrates an exemplary high-level process of determining tooth shade as seen by the system.
  • the system receives and analyzes a calibration image from a registered user (U1).
  • a fingerprint of the user is created that comprises a unique identification of the user (U1), his unique calibration lighting condition (L1) and information on the color code embeddings associated with the predefined color (A1) of the standard A1 tooth as it appears in the calibration image.
  • the data structure stored on the server thus comprises information U1 + L1 + A1.
  • Bitmap format is preferred for all image types, since compression of the image reduces colors in the image, which would subsequently reduce quality of the color shade determining.
  • Image coding preferably utilizes one of color spaces commonly used in computer graphics, such as RGB (Red-Green-Blue), CMYK (Cyan-Magenta-Yellow-Black) or YCbCr. Lighting condition is defined by analyzing the calibration image.
  • One possible method of defining a lighting condition is to first process the calibration image using K-means clustering algorithm, which is known in the art of image processing.
  • the calibration image or each cell of the calibration image may be simplified into a limited size color code embeddings matrix using the K-means clustering algorithm.
  • the calibration image or each cell of the calibration image may be simplified into a 1x12 color code embeddings matrix.
  • the determined lighting condition (L1) may be stored as a label associated with the image.
  • the label comprises the 1x12 matrix.
  • the 1x12 matrix may be defined as follows: first 3 items of the 1x12 matrix may include the proportions of the 3 most often detected colors in the image.
  • the remainder of the 1x12 matrix may then comprise data indicative of the respective colors.
  • RGB coding may be used.
  • the next 3 items of the 1x12 matrix may be indicative of RGB values of the first, most often detected color: first the value of R, then the value of G and then the value of B of that color.
  • another 3 items of the 1x12 matrix may represent the RGB values of the second most often detected color.
  • the remaining 3 values of the 1x12 matrix may represent the RGB values of the third most often detected color.
  • the most often detected colors defined in the label may be arranged in any order; in other words, the colors do not have to be in order of proportions of appearance of the colors.
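A sketch of this step under stated assumptions: scikit-learn's KMeans stands in for the K-means implementation (the patent does not name a library), K = 3 dominant colors per cell, and the layout is the one described above, 3 proportions followed by three RGB triplets.

```python
import numpy as np
from sklearn.cluster import KMeans

def color_code_embedding(cell: np.ndarray, k: int = 3) -> np.ndarray:
    """Reduce one image cell (H x W x 3 RGB array) to a 1x12 embedding:
    proportions of the k most common colors, then their RGB values."""
    pixels = cell.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    proportions = np.bincount(km.labels_, minlength=k) / len(pixels)
    # 3 proportions + 3 RGB cluster centers = 12 values in total
    return np.concatenate([proportions, km.cluster_centers_.ravel()])
```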
  • Various labels associated with the images may have a predefined format or free text form, and they may include but are not limited to a label identifying at least one of a venue name and a time of day or time of year.
  • the label may also be a combination of a free text defined by the user and one or more system provided labels having predefined form, which may or may not be shown to the user.
  • the calibration image metadata stored as labels associated with the image may include information on location where the calibration image was taken. This enables the user application to subsequently automatically select or propose to the user to select the appropriate calibration image for each subsequent training and/or production image on basis of the location.
  • Location data may be actual geographical location defined for example on basis of a GPS, GNSS, GLONASS or alike positioning system available in the user device, but location information may also be defined in any other way known in the art, for example on basis of available local area network names.
  • the server next receives a plurality of training images in the step 402 and processes them to generate a fully functional private training database. If the user selects using the global training database, step 402 may be omitted.
  • the private training database preferably comprises a processed training database based on a plurality of training images that are taken by the same user (U1) and that have the same lighting conditions (L1).
  • the location may be suggested by the user application on basis of detected current location of the user and on basis of locations for which calibration images have been provided. Selection can be fully automatic, or the user may manually select and/or accept a suggestion made by the user application.
  • Global training database comprises a training database that is based on a plurality of training images acquired and uploaded by either the system provider or by anonymous users, and that comprises training data for a plurality of different lighting conditions. After processing the calibration image, the system is ready to receive a production image in the step 403, representing a tooth with unknown color.
  • the production image is then analyzed in the step 404 to determine the color shade of the tooth.
  • Color shade analysis is performed using a process that will be disclosed in more detail in connection to the figure 6.
  • the result is communicated and/or provided back to the user in the step 405.
  • Communication may be performed for example by showing in the user interface of the user application the determined color shade of the tooth or, when a matrix (220) is used, the determined color shade of each cell in the matrix.
  • the application also enables the user to export tooth color information as an export file in any applicable computer coding format known in the art. For example, a PDF document may be exported.
  • Such exported file comprising tooth color information can subsequently be used in communication with a dental laboratory that manufactures the artificial tooth.
  • the exported file may be attached to an email, or the exported file can be automatically transferred to other computer systems over any type of data exchange capable interface known in the art of computer networks and/or mobile devices.
  • Exported and/or communicated tooth color information may further comprise color code embeddings of the analyzed production image.
  • Figure 5 illustrates an exemplary process of handling a training image when training of a training database is performed. This process may be referred to as a light calibration method. Same process may be applied to training both the private training database and the global training database, although the source of the received training image may be different.
  • the user can start training his own private database by acquiring a plurality of training images using his camera in his own lighting conditions (L1) corresponding to those of a previously acquired calibration image.
  • the user (U1) preferably acquires images of teeth of different colors, for example model teeth with different known colors according to a standard tooth color shade system.
  • the user labels the training image with a known color shade according to the model tooth.
  • model teeth may have colors A1, A2, A3, A3.5, A4, B1, B2, B3, B4, C1, C2, C3, C4, D2, D3, D4, and the respective training images are labeled accordingly. Only single-color teeth or standard colored model teeth should be used for training. The acquired training image and the respective tooth color information is uploaded and stored in the private training database associated with the server.
  • the global training database is trained in a similar manner, but preferably with a plurality of training images taken for each of a plurality of different lighting conditions and known tooth colors.
  • An enhanced training target may be provided for a user to be used as a training image. Such an enhanced training target, which may be referred to as a training sheet, comprises, on a single sheet, a variety of known model tooth color shade samples on a black background.
  • the variety of shade samples comprises the entire range of shades of a standard shade guide, such as the Vita Classic Shade Guide.
  • all shade samples disposed on the enhanced training sheet are rectangular. This simplifies the image processing task, since the wanted shade sample areas may be easily recognized and cropped for further image processing.
  • the acquired training image is preferably processed at the server, since a standard mobile phone camera is unlikely to have image processing capabilities to extract a plurality of selected areas from a single acquired image.
  • the server preferably has pre-stored information on order of the tooth shade samples on the training sheet so that each sample can be automatically labeled.
  • the mobile phone may also comprise image processing functionality which enables separately selecting (cropping) each of the shade samples shown in the acquired, single image of the training sheet and providing a plurality of cropped shade sample images for processing at the server.
  • the image processing functionality of the mobile phone may also be capable of labeling the cropped shade sample images.
  • the shade sample labels may be associated with each of the cropped shade sample images based on order of arrival at the server.
  • the training image is received at the server.
  • the received training image is labeled with information regarding the known standard color of the model tooth shown in the training image. If the training image is used for training private training database, the training image will also be tagged with information regarding the user and his initial lighting conditions.
  • the lighting information associated with this training image is obtained by the server.
  • lighting information preferably comprises the lighting label L1 associated with a calibration image and the difference e that was defined on basis of the respective calibration image.
  • an unsupervised learning algorithm is used to analyze the training image.
  • the same unsupervised learning algorithm may be used for defining colors in all image types.
  • the K-means clustering algorithm known in the art may be used, which finds groups in the data; K refers to the number of groups.
  • the algorithm works iteratively to assign each data point of the image, in other words each pixel of the image, to one of K groups based on the features that are provided. Data points are clustered based on feature similarity.
  • the results of the K-means clustering algorithm are the centroids of the K clusters, which can be used to label new data, and labels for the training data, where each data point is assigned to a single cluster.
  • In this way, unsupervised learning is applied to convert a large matrix, for example a bitmap acquired with the camera, into a smaller matrix, which in this case includes the predominant color codes in the acquired image.
  • color code embeddings of the training image are first defined.
  • the defined training image color code embeddings are then adjusted by deducting the difference e defined on basis of the calibration image from the defined color code embeddings.
  • input to the K-means clustering algorithm is preferably a cropped tooth image that only comprises a single tooth or majority of a single tooth.
  • the uploaded image may have size of 500x500 pixels.
  • the user may determine in any other way an area that represents the one tooth, and the system may automatically crop the image before it is uploaded. For accurate tooth color shade determining, lips, gum and other teeth must be cropped away or left outside the selected area that is to be processed by the K-means clustering algorithm.
  • the area shown in the received image is preferably handled as a matrix of a plurality of cells.
  • the output of the K-means algorithm is color code embeddings for each of the plurality of cells. These embeddings are then adjusted by deducting the difference defined on basis of the corresponding calibration image.
  • After storing a plurality of adjusted color code embeddings for a plurality of training images representing sample teeth of known color in approximately the same lighting conditions L1, the training database is ready to be utilized by a supervised learning algorithm to determine the color shade of a tooth of unknown color in approximately the same lighting conditions.
  • a supervised learning algorithm is applied for determining color shade of a tooth.
  • This step can be used for testing the quality of the training data and for adding new training images into the training data, as well as for determining the color shade of a tooth in a production image. Testing refers to obtaining images of a tooth of known color shade, but not indicating this color to the application. Thus, the application will handle the image as if it were a normal production image, and the user may compare the result to the actual known color of the sample tooth.
  • Support Vector Machine (SVM) algorithm is used as the supervised learning algorithm.
  • the objective of the SVM algorithm is to find a hyperplane in an N-dimensional space (N = number of features) that distinctly classifies the data points, namely the color code embeddings received from the K-means clustering algorithm.
  • SVM is found to be particularly useful to identify matching color code embeddings with the lowest distance to the training image color code embeddings in order to predict the shade of the teeth. For example, if the user has trained the model with one image of A1 color model tooth, the comparison is done during the testing stage to that one training image only.
  • If a plurality of training images is available, the color code embeddings will be compared with the color code embeddings of all those A1 tooth color training images as well as with the color code embeddings of training images representing any other standard color model teeth, such as A2, A3 and so on.
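A minimal sketch of the supervised stage, assuming scikit-learn's SVC and synthetic stand-in data; in the real system the rows would be adjusted 1x12 embeddings of training images and the labels their known standard shades.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
shades = ["A1", "A2", "A3"]

# Synthetic stand-ins for adjusted 1x12 embeddings of labeled training images.
X_train = np.vstack([rng.normal(loc=i, scale=0.1, size=(20, 12))
                     for i in range(len(shades))])
y_train = np.repeat(shades, 20)

clf = SVC(kernel="linear").fit(X_train, y_train)

# Adjusted embeddings of a production image of unknown shade.
production = rng.normal(loc=1.0, scale=0.1, size=(1, 12))
print(clf.predict(production))  # -> ['A2'] for this synthetic data
```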
  • the training process utilizes the combination of the unsupervised training algorithm in the step 503 and the supervised training algorithm, and the analyzed new training image is included in the supervised training model at the step 505. If the acquired image is a production image, it is not included in the training model, but obtained color shade information is communicated back to the user.
  • the uploaded image does not have any associated information on the expected color of the tooth shown in the uploaded image.
  • the color code embeddings are then adjusted by deducting the difference defined on basis of the calibration image from the defined color code embeddings.
  • The supervised training algorithm is then applied to the adjusted color code embeddings of the uploaded image, comparing them to all color code embeddings in the respective training database. As a result, the supervised training algorithm decides which tooth color shade has the closest distance to that of the uploaded image and provides this determined color shade as the result to the user.
  • the acquired private training database may be merged with a global training database after a check on the private training database has been performed by administrators of the system provider. This avoids uploading erroneously tagged or otherwise erroneous images into the global training database.
  • identity of the user is preferably removed from the training images. In the process of merging, the identity of the user is removed by not saving the identity information (U1) of the record, as it has become unnecessary and could potentially be used for user identification afterwards.
  • the user can start acquiring and uploading actual production images, each representing a tooth of a patient for color shade determination using his own private training database.
  • the user can select use of a global training database.
  • the server application is configured to compare calibration information of the user and the private training database with the global training database. If the server application detects on basis of such comparison that the global training database includes enough images taken in lighting conditions L1, the application may propose to the user that he could use the global training database instead of the private training database.
  • A sufficient number of images may be, for example, at least 400 images, preferably at least 500 images.
  • Tooth color shade may also be an indication of an abnormality in the tooth.
  • Training images can also be taken from a real tooth with an abnormality, such as an abnormal color or shape of the tooth that indicates, for example, that the tooth is dead, or that it has caries or another tooth disease.
  • the user may label the image as representing a tooth with abnormality.
  • the label given by the user preferably names the type of abnormality.
  • the uploaded image is in this case cropped to represent the area with the abnormality.
  • the training image is then stored in the training database similarly to any other training image.
  • the same training database may provide means for determining tooth color shade and/or an indication of possible abnormality.
  • different training databases may be defined for different purposes/uses.
  • According to an alternative embodiment, no global reference image is used for calibration.
  • the color code embeddings of the calibration image are used as such, and a difference is calculated between the color code embeddings of the calibration image and the training image.
  • The calibration image stored in the database is associated with information (L1+A1) that corresponds to the color code embeddings of the calibration image.
  • color code embeddings for the obtained training images are adjusted in the step 504 by deducting the color code embeddings of the calibration image as such from the color code embeddings of the training image.
  • The value "y" thus represents the distance of the color embeddings of an image of an A3 colored sample tooth from the calibration image. Similar calculations are performed for all training images representing different model tooth colors. This method may be referred to as "magnitude comparison"; in this variant of the method, the magnitude differences between A1 and the other shades are stored and used for the analysis.
  • the alternative embodiment is particularly useful when global training database is used, since it does not restrict selection of applicable training images to any specific lighting conditions. However, either of the embodiments may be used with global and private training databases.
  • magnitude difference may be defined as a vector.
  • the magnitude difference vector is preferably of same form and size as the color code embeddings.
  • the supervised learning algorithm may then be applied in the phase 505 to determine the shade of the tooth shown in the uploaded image by finding the color code embedding in the training database that has the lowest distance to this obtained color code embeddings "x".
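A sketch of the magnitude-comparison variant, with illustrative names; the patent stores one magnitude-difference vector per training image and matches by lowest distance, but the distance metric is not specified, so Euclidean distance is assumed here.

```python
import numpy as np

def magnitude_difference(embedding, calibration_embedding):
    """Difference vector between an image's color code embeddings and the
    calibration image's embeddings (same 1x12 form as the embeddings)."""
    return np.asarray(embedding, float) - np.asarray(calibration_embedding, float)

def closest_shade(production_emb, calibration_emb, stored_diffs):
    """stored_diffs: mapping shade name -> magnitude-difference vector stored
    for a training image of that shade. Returns the best-matching shade."""
    prod = magnitude_difference(production_emb, calibration_emb)
    return min(stored_diffs,
               key=lambda shade: np.linalg.norm(stored_diffs[shade] - prod))
```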
  • Figure 6 illustrates a process of defining tooth color shade.
  • an image is received that shows at least one tooth or teeth of a subject.
  • the color shade of the tooth is unknown.
  • In step 602, an applicable calibration image in the selected training database, private or global, is selected on basis of at least one of user and lighting information.
  • K-means clustering, similar to that explained in connection with phase 440, is applied to the received image for obtaining color code embeddings of the received image.
  • the color code embeddings are all associated with colors of a single tooth.
  • the same principle may be used for detecting symptoms of a dental disease based on color of tongue or gums, or extraordinary color of the tooth, which may be indicative of for example dental caries.
  • the same principle can be used for detecting the change of the color in the tooth/teeth over time by detecting the color from the same user on a regular basis.
  • the color of the tooth of the user may change over time due to various reasons. The reasons include but are not limited to recurring user actions to improve tooth whitening, or alternatively the user's diet may impact the color of the tooth.
  • In each case, a respective part of the image should be selected that represents the tooth of interest, the tongue or a part of the gums.
  • the obtained color code embeddings are adjusted to remove or reduce effects of lighting in the obtained color code embeddings.
  • the adjustment may be performed either by deducting the difference e from the obtained color code embeddings or by deducting color code embeddings of the calibration image from the obtained color code embeddings.
  • the result of the step 604 is adjusted color code embeddings.
  • the adjusted color code embeddings are compared to trained color code embeddings among the training images in the applicable training database.
  • the color code embeddings of the training image having the lowest distance to the unknown image's color code embeddings are selected.
  • the tooth color associated with the selected training image with the lowest distance is deemed to represent the color of the tooth in the uploaded image.
  • the color code embeddings are preferably selected in an iterative manner. In a series of iteration steps, the most likely color code embeddings out of a subset of possible color code embeddings are suggested as the selected color code embeddings.
  • a 1x12 matrix may be used for the color code embeddings. Iteration may start with all possible colors, and the number of possible colors is reduced until just 16 possibilities for color code embeddings are left. Out of these 16 possibilities, the three most likely, in other words the three most commonly appearing color code embedding options, are selected. Naturally, instead of the 16 possibilities used in the example, any integer number of color code possibilities may be used.
  • the three most likely color code embeddings are included in the 1x12 color code embeddings matrix.
  • three fields of the matrix indicate the relative amounts of the three most common color code embeddings, and the remaining fields are reserved for indicating the color coding for these.
  • three fields of the matrix may be used to include the RGB values or YCbCr color code values of each of the three most common colors. If a four-color model such as CMYK is used for color coding, the color code matrix may be for example of size 1x15, to allow using four fields in the matrix for each one of the three most common color code embeddings.
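A sketch of the iterative reduction, assuming K-means quantization to the 16 candidate colors (the patent does not fix how the reduction is performed) and a simple count to pick the three most common candidates.

```python
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

def three_dominant_colors(pixels: np.ndarray) -> np.ndarray:
    """Quantize pixels (N x 3 array) to 16 candidate colors, then return the
    RGB values of the 3 most commonly appearing candidates."""
    km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(pixels.astype(float))
    top3 = [label for label, _ in Counter(km.labels_).most_common(3)]
    return km.cluster_centers_[top3]
```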
  • tooth color information indicative of the determined tooth color shade is communicated to the user.
  • the tooth color information may comprise the color code associated with the selected training image that corresponds to the most likely color shade of the tooth.
  • the determined tooth color information may also comprise the determined color code embeddings.
  • At least part of the tooth color information is exported from the system, preferably in a digital format, to another data processing system or application of the user.
  • this information can be made available for other systems or applications for further analysis.
  • When at least part of the tooth color information is exported, it may be further processed and/or analyzed by the user or by a system or application of the user for any purpose.
  • tooth color information may be exported to another application and/or to archives of the user for later use.
  • At least part of the tooth color information may be exported into an application facilitating manufacturing of artificial tooth or teeth.
  • at least part of the tooth color information may be stored into another user application to be subsequently used as basis of tooth color shade comparison.
  • a tooth color shade comparison application may be used for example to detect changes in tooth color shade for example due to whitening or color changes due to diet.
  • the tooth color information may be determined to indicate that the tooth may, based on its color, have some abnormality. This is possible if the training database comprises training images of teeth with abnormality. An indication of the likelihood of an abnormality may also be provided to the user, so that he may for example examine the tooth in more detail.
  • the step 607 may be omitted, since the tooth color shade information can be made available to the user via the other application or system that receives the exported tooth color information.
  • Main intelligence of the system resides at the server, more precisely at the server application running on the server.
  • the user application running in the mobile device needs a data connection to the server.
  • the user application acts as a user interface, allowing the user to obtain and upload images as well as tag tooth colors in the training images, and to receive tooth color information.
  • every single trained color code embedding from the SVM model is compared to the color code embeddings that correspond to test images captured for testing, and the color code embedding with the lowest distance gives the resulting tooth color shade.
  • the resulting tooth color shade may be expressed by referring to a particular tooth color as defined in the used standard tooth color shade system. Testing ensures that the system works as intended and that the color shades are detected accurately. It is apparent to a person skilled in the art that as technology advances, the basic idea of the invention can be implemented in various ways. The invention and its embodiments are therefore not restricted to the above examples, but they may vary within the scope of the claims.
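As referenced in the list above, the lowest-distance selection can be illustrated with a short sketch. This is a minimal illustration only, assuming Euclidean distance over 1x12 embedding vectors and a hypothetical training_embeddings mapping from shade labels to stored embedding vectors; it is not the claimed implementation.

```python
import numpy as np

def closest_shade(adjusted_embedding, training_embeddings):
    """Return the shade label whose stored training embedding has the
    lowest Euclidean distance to the adjusted 1x12 embedding."""
    best_label, best_dist = None, float('inf')
    for label, vectors in training_embeddings.items():
        for vec in vectors:
            dist = np.linalg.norm(np.asarray(adjusted_embedding) - vec)
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label, best_dist

# Example with placeholder embeddings for two shades:
rng = np.random.default_rng(0)
db = {'A1': [rng.random(12)], 'A2': [rng.random(12)]}
print(closest_shade(rng.random(12), db))
```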

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Epidemiology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Image Processing (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The present invention relates to a computer-implemented method, a data-processing system and data-processing apparatuses as well as computer program products for defining color shade of a tooth using a camera of a mobile communication device. On basis of a received image of a tooth of a subject that comprises a part of an image acquired with a plain camera of the mobile communication device, lighting conditions are obtained, and applicable training images are selected from a training database. Training images comprise images of a model tooth with known colors. K-means clustering is applied to the received image to obtain color code embeddings of the received image, and the color code embeddings of the received image are adjusted based on the indication of lighting conditions. The obtained color code embeddings are compared to all color code embeddings of the selected training images to find the training image with color code embeddings having the lowest distance to the received image's color code embeddings, and the color shade of the tooth in the received image is determined to be equal to the color shade of a model tooth shown in the training image having the lowest distance of color code embeddings to those of the received image. Tooth color information indicative of the defined tooth color shade is communicated back to the mobile communication device from which the image was received.

Description

Determining tooth color shade based on image obtained with a mobile device
Field
The present invention relates to a method, a system and a computer program product related to determining color shade of a tooth of a subject. More particularly, the invention relates to a method, a system and a computer program product related to determining color shade of a tooth in order to enable manufacturing of artificial teeth, an artificial tooth or a dental crown (corona artificialis) in correct color and/or in order to enable determining change in tooth color and/or in order to provide indication that the tooth may suffer from an abnormality.
Background
Traditionally, definition of color shade and selection of the color of an artificial tooth or teeth on basis of the defined color shade is based on the dentist or dental technician manually comparing the tooth or the teeth of the subject to a matrix of color shade models. This has been found to be an error-prone method, often leading to a need to re-produce the expensive tooth crowns, artificial tooth or artificial teeth in a different color.
Using photography as such, even with a camera of a mobile phone, for taking images on basis of which tooth color shade is defined has been a subject of interest.
Natural teeth are formed by layers of materials having different optical characteristics. In particular the enamel of the outer part of the tooth greatly affects the color shade of the tooth, but also the dentine under the enamel may affect the color shade, especially if the enamel is particularly thin and/or translucent. Thickness of the enamel varies. This complexity of the structure of the teeth makes the color shade definition a challenging task.
Description of the related art
Various methods have been introduced, which aim to increase reliability of automatic color shade recognition of a tooth. A color can easily be detected from an image as such, but the color appearing in the image may not correspond to the actual color of the subject, because the process of photography includes parameters that significantly affect the result. Aspects related to lighting are among the main factors that make colors in an image appear different from those of the actual subject, including both specific lighting arranged to fall on the subject for imaging and ambient lighting. Further affecting factors relate for example to the type, resolution and settings of the camera, the distance between the camera and the subject, and so on.
Imaging-based solutions for tooth color shade detection recognize the problem caused by the effect of lighting conditions, which may significantly affect the colors appearing in the image. Two main types of solutions are known in the art for calibrating the color shades: lighting conditions are standardized, for example using a lighting device that blocks away any ambient light that would affect the acquired image, and/or one or more standard reference colors or color shades are included in the same image with the subject tooth or teeth. These calibration methods may be used either separately or combined. International patent application W017166326 A1 discloses a method and device for realizing color comparison of an artificial tooth using an image. A standardizing color comparison environment is provided having a standard light source. The image as such is sent to an artificial tooth production center, and a technician compares the image shown on a color-corrected monitor to a selection of colors that are under a similar standard light source.
International patent application W018080413 A2 discloses an automatic dental color determination system integrated to mobile phones or tablets. Lighting conditions during acquisition of the image are standardized by using a mechanical light isolation apparatus that surrounds the mouth of the subject. A digital platform then performs dental color determination on basis of the image.
International patent application W09956658 A1 discloses an automated tooth shade analysis. An image of the patient's teeth is acquired with black and white normalization references shown in the same image. Software then normalizes the image using the normalization references, and the color of the tooth is obtained from the normalized image.
However, mechanical lighting calibration devices tend to be cumbersome to use, and/or the standard reference color or colors always have to be available on the spot when the image is taken. Further, over longer periods of time, the reference colors may even change, for example due to dirt or fading, which may lead to misinterpretation of the tooth color shade.
Thus, a more robust and versatile solution is needed that enables easy and reliable determining of tooth or teeth color shade.
Summary
An object is to provide a method and apparatus so as to solve the problem of determining tooth color shade. The objects of the present invention are achieved with a method performed in a server according to the claim 1 and with a method performed in a mobile communication device according to the claim 11. The objects of the present invention are further achieved with a computer program product according to claim 15, a data-processing system comprising means for carrying out the method steps, with a data-processing apparatus according to the claim 17 and a mobile communication device according to claim 18.
The preferred embodiments of the invention are disclosed in the dependent claims.
According to a first method aspect, a computer-implemented method of defining color shade of a tooth using a camera of a mobile communication device is provided. The method comprises receiving an image of a tooth of a subject, wherein the received image comprises a part of an image acquired with a plain camera of the mobile communication device and obtaining indication of lighting conditions of the received image. The method also comprises selecting applicable training images from a training database, wherein each training image comprises an image of a model tooth with known color and applying K-means clustering to the received image to obtain color code embeddings of the received image. The method further comprises adjusting the color code embeddings of the received image based on the indication of lighting conditions, comparing the obtained color code embeddings to all color code embeddings of the selected training images to find the training image with color code embeddings having lowest distance to received image's color code embeddings and defining the color shade of the tooth in the received image to be equal to the color shade of a model tooth shown in the training image having the lowest distance of color code embeddings to those of the received image. The method also comprises communicating tooth color information indicative of the defined tooth color shade back to the mobile communication device from which the image was received.
According to a second aspect, area of the received image is divided into a matrix with a plurality of cells, each cell representing one part of the tooth, and wherein color code embeddings are defined for each cell of the matrix.
According to a third aspect, the training database is one of a private training database and a global training database.
According to a fourth aspect, the received image is associated with a label indicating lighting conditions in which the received image was acquired.
According to a fifth aspect, the adjusting the color code embeddings comprises deducting from the obtained color embeddings of the received image a difference between color code embeddings of a global reference image and a calibration image acquired with the mobile communication device in lighting conditions that approximately correspond to the lighting conditions in the received image.
According to a sixth aspect, the applicable training images comprise a plurality of training images each associated with a label that indicates that the respective training image has been acquired in approximately similar lighting conditions with the received image and the calibration image and a label indicating actual color shade of a model tooth shown in the respective training image.
According to a seventh aspect, the adjusting the color code embeddings comprises calculating a magnitude difference between color code embeddings of the received image and color code embeddings of a calibration image acquired with the mobile communication device in approximately similar lighting conditions.
According to an eighth aspect, applicable training images comprise a plurality of training images associated with magnitude difference information, and the training image having the lowest distance is defined by comparing the magnitude difference of the received image and magnitude differences associated with the applicable training images.
According to a ninth aspect, the method further comprises exporting at least part of the tooth color information to another application or data processing system.
According to a tenth aspect, the method further comprises at least one of: providing the defined tooth color information to be used as basis for manufacturing an artificial tooth or artificial teeth that have the defined color shade, comparing the defined color shade to a color shade of the same tooth of the same subject obtained previously for determining change of color shade of the tooth, and providing an indication that the defined tooth color information indicates an abnormality in the tooth.
According to another aspect, a data-processing apparatus is provided comprising means for carrying out the method according to any of the above aspects.
According to an eleventh method aspect, a computer-implemented method of defining color shade of a tooth using a camera of a mobile communication device is provided. The method comprises acquiring an image of teeth of a subject with a plain camera of the mobile communication device, receiving, via the user interface of the mobile communication device, determination of an area in the acquired image that comprises one tooth, pre-processing the acquired image to produce an image of the one tooth for uploading, and associating a label with the image that indicates lighting conditions in which the image was acquired. The method further comprises uploading the image of the tooth to a server for obtaining indication of lighting conditions of the received image on basis of the associated label, for selecting applicable training images from a training database, wherein each training image comprises an image of a model tooth with known color, for applying K-means clustering to the received image to obtain color code embeddings of the uploaded image, for comparing the obtained color code embeddings to all color code embeddings of the selected training images to find the training image with lowest distance to uploaded image's color code embeddings, and for defining the color shade of tooth shown in the uploaded image to be equal to the color shade of a model tooth shown in the training image having the lowest distance of color code embeddings to those of the uploaded image. The method also comprises receiving by the mobile communication device tooth color information indicative of the defined color shade of the tooth of the subject comprised in the uploaded image.
According to a twelfth method aspect, area of the uploaded image is divided into a matrix with a plurality of cells, each cell representing one part of the tooth, and wherein color code embeddings are defined for each cell of the matrix.
According to a thirteenth method aspect, the training database is one of a private training database and a global training database.
According to a fourteenth method aspect, the method further comprises associating, before uploading, the uploaded image with a label indicating lighting conditions in which the image was acquired.
According to a fifteenth method aspect, the method further comprises at least one of: providing the defined tooth color information to be used as basis for manufacturing an artificial tooth or artificial teeth that have the defined color shade, indicating a result of comparison of the defined color shade to a color shade of the same tooth of the same subject obtained previously for determining change of color shade of the tooth, and providing an indication that the defined tooth color information indicates an abnormality in the tooth.
According to another aspect, a mobile communication device is provided comprising means for carrying out the method according to any one of the eleventh to fifteenth method aspects.
According to a further aspect, a computer program product is provided having instructions which when executed cause a computing device or system to perform a method according to any one of the above method aspects.
According to a further aspect, a data-processing system is provided comprising means for carrying out the method according to any of the above method aspects.
The present invention is based on the idea of acquiring an image of teeth with a standard camera provided in a mobile communication device and using artificial intelligence, obtainable with use of a combination of machine learning methods, for teaching and enabling the system to recognize the exact color shade of the tooth of previously unknown color shown in the image. The determined color shade may then be utilized for manufacturing artificial teeth or a tooth crown. The present invention has the advantage that the dentist can use a simple, plain mobile communication device camera, for example a mobile phone or tablet computer camera, to acquire a digital image of the teeth of a patient, while the system enables accurately determining the color shade of the tooth on basis of the acquired digital image with high reliability, without requiring exact calibration of parameters that have an effect on the colors appearing in the acquired digital image. By 'plain' in this connection it is meant that there are no special accessories or devices attached to or included in the mobile phone or its camera for acquiring the dental image(s), but the generic mobile device with an integrated camera is used as such. Thus, the success rate of selecting the correct color shade matching that of the patient's actual tooth color shade for manufacture of an artificial tooth or teeth is high, and the risk of costly re-manufacturing of the artificial tooth or teeth due to erroneous color selection is reduced. By utilizing artificial intelligence with machine learning methodologies, use of various cumbersome apparatuses, devices or methods known in the art, by which the digital image could be calibrated into exactly repeatable parameters, can be avoided. Further, the system facilitates protection of the identity of the patient and the user.
Brief description of the drawings
In the following the invention will be described in greater detail, in connection with preferred embodiments, with reference to the attached drawings, in which
Figure 1 is a schematic representation of a system.
Figures 2a to 2d illustrate pre-processing of the images.
Figure 3 illustrates an exemplary process of determining a tooth shade as seen by a user.
Figure 4 illustrates an exemplary process of determining tooth shade as seen by the system.
Figure 5 illustrates a process of handling a training image.
Figure 6 illustrates a process of defining tooth color shade.
Detailed description
The figure 1 illustrates a system according to the invention. This exemplary system serves three different users, for example dentists, each having their own mobile devices (100a, 100b, 100c), each equipped with a camera and with mobile data connectivity. Any number of users may use the actual system. Each mobile device (100a, 100b, 100c) is capable of connecting to a server (110) over a data connection. The server (110) carries the responsibility for the intelligence in the system. The server comprises or is associated with at least one image storage means (120, 125a, 125b, 125c, 130, 140).
The system utilizes at least three types of images, namely training images, calibration images and production images. The term 'training image' refers to an image that is used for training an artificial intelligence mapping function to map an image representing a tooth or teeth of a patient to a particular tooth shade. The term 'calibration image' refers to an image that is used for calibrating lighting conditions. Lighting conditions refer to normal ambient lighting conditions at the venue, without any specially designed apparatus attached or associated with the mobile phone or its camera for standardizing the lighting conditions. The term 'production image' refers to an image representing teeth/tooth of an actual patient, the color of which is to be determined. Preferably, each of the training image, calibration image and production image comprise an image of a single tooth that covers at least majority of the visible area of a single tooth.
Training images comprise one or more collections of images used for training the tooth color shade determining system. The term training database refers to a collection of labeled training images and the training data associated with the respective training images. Training data may be associated with the training images as labels. Training databases may be logically divided into private training databases (125a, 125b, 125c) and global training databases (120).
In a private training database (125a, 125b, 125c), the labels associated with each training image preferably comprise an identifier of the user, one or more identifiers of the training environment and attributes of the tooth color shade. The identifier of the user may be for example a username. Identifiers of the training environment may comprise the time at which the image was taken, the name of the venue as given by the user, GPS coordinates, light source, model of the camera, phone model and calibration image name. Light source may for example define the name and/or model of the light source in the venue as given by the user. Attributes of the tooth color shade include the color shade to which the image is/was compared (for example A1) and color code embeddings. If the image was taken from a standard tooth shade guide, the attributes of the tooth color shade include the known standard shade. If the training image is taken from a real tooth, there is a non-zero likelihood that there is an error in the training data.
The global training database preferably comprises the same data as a private training database, except for the user identification data and the user-entered name of the venue. Training images in a private training database may comprise identification of the user, but training images in a global training database (120) should not comprise any user identification, for ensuring user privacy.
Images, including training images, calibration images and production images are preferably labeled differently according to their intended use, origin and other information associated with the images for example by the user, and processing, storage and use of the images by the server (110) is based on the labeling. Labeling enables flexible use of images. For example, images from a private training database may be included in the global training database by relabeling. However, to maintain full control of privacy of users as well as quality and content of the global training database (120), such inclusion is preferably only performed by operators or supervisors of the system on consent of the respective user. The term label refers to any type of additional data associated with images. Labels may also be referred to as metadata.
A training database (120, 125a, 125b, 125c) may be used for color shade determination of production images acquired using the camera of any one of the mobile devices (100a, 100b, 100c). As an alternative to the global training database (120), each user can build their own private training database (125a, 125b, 125c), accessible only by the respective user using his/her mobile device (100a, 100b, 100c) or by another internet-capable device that is capable of authenticating the user. The private or global training databases comprise data associated with each of a plurality of training images that are used to teach the system to correctly recognize color shade of a tooth.
Although various training databases (120, 125a, 125b, 125c) are illustrated in the figure 1 as separate elements, this is only intended to illustrate a logical separation of training databases (120, 125a, 125b, 125c). Depending on system setup, training databases (120, 125a, 125b, 125c) as well as storage means (130) for storing calibration images and storage means (140) for storing production images may reside in same or different physical storage devices as known in the art. Likewise, the server (110) may be a single physical server or a plurality of physical servers, or the server (110) may be implemented in a virtual server or in a cluster of virtual servers each providing service to one or more users (100a, 100b, 100c) as known in the art.
The training images stored in the private or global training databases are used for training an artificial intelligence mapping function to map an image representing a tooth or teeth of a patient to a particular tooth shade. For training purposes, a plurality of images (200) of teeth of the patient is acquired by the user with the normal, plain camera of the mobile device. Each acquired image is pre-processed at the user device before the training image is sent to the server. The pre-processing will be described in more detail later on. Preferably, images used for a particular private training database are always acquired using the same mobile device of the user, which device comprises a camera and a tooth color shade application for processing and labeling the images. In the following, we refer to the tooth color shade application running at the user device simply as 'the user application'. Likewise, the tooth color shade application running at the server may be referred to as 'the server application'. The user may choose to build several private training databases for different lighting conditions. This may be the case if the user works and takes pictures in different venues, or if the amount of ambient light in a particular venue varies greatly depending on for example time of day or time of year.

Figures 2a to 2d illustrate examples of pre-processing of acquired images. The same process is in principle applicable to all types of images used in this system, in other words for training images, calibration images and production images. In this example, the acquired image shows the mouth and teeth of a subject. Alternatively, in particular when a calibration or a training image is acquired, the image may show a model tooth on an arbitrary background. After acquiring the image with the camera of the mobile device, the user preferably indicates an area showing a single tooth (210) in the acquired image (200). Preferably, the user indicates the selection of the single tooth (210) by cropping the image (200), as illustrated with the grey shading in the figure 2b, so that only the single, selected tooth (210) is shown in the cropped image (200'). Alternatively, the user may be enabled to select a part (200") of the image as illustrated in the figures 2c and 2d, and the user application may automatically crop the image by removing image data outside the area selected by the user. In a yet further alternative, information on the selected area may be included in the acquired image, which is uploaded in its entirety to the server, wherein only the selected area will be processed for tooth color determining. The term uploaded image refers to the cropped image (200') or the selected part of the image (200") that is uploaded to the server for processing. Preferably, only the part (200', 200") of the original image (200) that represents the selected tooth (210) is uploaded to the server for processing, since this reduces the amount of data to be uploaded and the bandwidth needed for swift image data uploading. Preferably, the majority of the area of the selected tooth (210) is included in the uploaded image. Since the shape of the cropped or selected area may not fully match the shape of the tooth appearing in the original image, part of the peripheral area of the selected tooth (210) may be outside the uploaded image. However, this is acceptable, since it is more important that the uploaded image (200', 200") does not include any foreign objects such as lips, gum or neighboring teeth, as these would add noise in the color determining algorithms. The user application is preferably operable for performing the cropping of the image or selecting the area in the image for automatic cropping. Alternatively, the user could use other image processing software in the user device for cropping the image and only then associate the cropped image with the user application.
The uploaded image (200', 200") showing the selected tooth (210) may be divided into a matrix (220) having a plurality of cells, and a color shade is defined for each of these cells. Preferably, the cells of the matrix (220) have approximately equal areas. Defining the matrix (220) is preferably performed at the server by the server application, but the matrix (220) may also be defined by the user application. Dividing the area of the uploaded image (200', 200") into a plurality of cells enables taking into account variation of the tooth color shade between different parts of the selected tooth (210). According to a first exemplary embodiment, a 3*3 matrix may be used as illustrated in the figure 2b, and according to another exemplary embodiment, a 3*4 matrix as illustrated in the figures 2c and 2d. Alternatively, a smaller or larger matrix (220) may be used, for example a 2*2, 2*3, 4*4, 4*5, 4*6, 5*5, or 5*6 matrix. Decreasing the size of the matrix reduces the accuracy of color shade determining in different parts of the tooth, but tests have indicated that sufficiently accurate color shade determining of different parts of the tooth can be achieved for example with a 3*3 matrix.
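As a minimal illustration of this cell division, the following sketch splits a cropped tooth image into a grid of approximately equal-area cells; the array shape and the helper name split_into_cells are assumptions made for the example, not part of the claimed method.

```python
import numpy as np

def split_into_cells(image, rows=3, cols=3):
    """Split a cropped tooth image (H, W, 3) into a rows x cols grid of
    roughly equal-area cells; color code embeddings can then be computed
    separately for each cell."""
    h, w = image.shape[:2]
    row_edges = np.linspace(0, h, rows + 1, dtype=int)
    col_edges = np.linspace(0, w, cols + 1, dtype=int)
    return [[image[row_edges[r]:row_edges[r + 1], col_edges[c]:col_edges[c + 1]]
             for c in range(cols)]
            for r in range(rows)]

# Example: a 3*3 matrix of cells from a 500x500-pixel uploaded image.
cells = split_into_cells(np.zeros((500, 500, 3), dtype=np.uint8))
```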
The cropped image or the selected area of the image is preferably rectangular, as seen commonly in the art, but it can also be of free form, as illustrated in the figure 2d, to facilitate inclusion of the majority of the area of the selected tooth (210) in the uploaded image (200', 200") for tooth color shade determining processing, without including any noise caused by unwanted objects in the part of the acquired image to be analyzed, such as neighboring teeth, gum, tongue or lips. If a free form selection is used, the cells of the matrix may have mutually different shapes and sizes, especially in the outer edge cells of the matrix.

Figure 3 illustrates an exemplary high-level process of determining a tooth shade as seen by a user using the user application. A user interface towards the system is preferably provided by the user application running on a processor of a mobile device used by the user.
After installing the user application at the mobile device of the user and signing up as a user of the user application, the user first acquires a calibration image using the camera of the mobile device in the step 301. At registration, a unique identification is generated for the user. In this example, the user is identified with identification 'U1'. The calibration image is to be acquired in the normal working lighting conditions at the user's premises, for example at a dentist's reception. Normal working lighting conditions thus refer to normal ambient lighting conditions at the venue, without any specially designed apparatus attached or associated with the mobile phone or its camera for standardizing the lighting conditions. If the user works in more than one venue at different times, a separate calibration image is preferably acquired for each of the venues, since the lighting conditions likely vary between these significantly. The user application allows the user to tag each calibration image according to the venue it was taken in. Further, the geographical location of the calibration image may be associated with the calibration image and used as identification of the venue. The venues may be named in any manner. For example, a simple numbering or alphabet naming may be used. Preferably the user may name the venue freely, reflecting the actual name of the venue such as "Discovery Bay Office", "Central Office" and so on. This name is shown to the user in his/her user application. This way it becomes easier for the user to select the correct venue each time he/she uses the user application subsequently. The calibration image naming functionality of the user application may be utilized for naming calibration images taken at different times in the same venue. This way the user may acquire and store a separate calibration image for example for lighting conditions in morning light, afternoon light and evening light from the window. In later examples, we will refer to different lighting conditions with references like 'L1', 'L2' for simplicity.

The calibration image thus acquired is used for an initial lighting calibration. The calibration image is preferably an image of a tooth sample with predefined color, for example the A1 color shade as provided in a standard shade guide, such as the Vita Classic Shade Guide used since the 1960's, using the light source in the dentist's office. The calibration image is cropped, or the area of the sample tooth is selected, as explained above in relation to figures 2a to 2d. In the step 301, the calibration image is uploaded to the server (110) and stored as a calibration image at the memory device (130) at or associated with the server. The stored calibration image may be labeled as a calibration image for this particular user (U1) together with the user-given name for the calibration image. The calibration image is a reference that is used for determining approximate lighting conditions (L1). After processing the calibration image, the result, the color code embeddings of the calibration image, is compared to the color code embeddings of a global reference image. While the calibration image and the global reference image represent a sample tooth of known color, preferably the A1 color, a difference e can be calculated between the color code embeddings of the calibration image and the embeddings of the global reference image.
Calculation of the difference e can be expressed with a mathematical representation. Let the lighting conditions in the calibration image be L1, and let the model tooth shown in the calibration image have shade A1. Thus, the color code embeddings of the calibration image are associated with parameters (L1+A1). The color code embeddings of the corresponding global reference image are associated with parameters Global(L+A1). The difference in the color code embeddings is then defined as e = Global(L+A1) − (L1+A1). This difference e is then stored and used for calibration in the training and testing phases while generating and/or testing the training database, as well as in the production phase.
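A minimal numerical sketch of this difference calculation and the subsequent adjustment, assuming 1x12 embedding vectors; random placeholder values stand in for real K-means output, and all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder 1x12 color code embeddings; in the real system these come
# from the K-means clustering step.
global_ref_emb = rng.random(12)   # Global(L+A1): global reference image
calibration_emb = rng.random(12)  # (L1+A1): user's calibration image

# Difference stored once per user/venue: e = Global(L+A1) - (L1+A1)
e = global_ref_emb - calibration_emb

# Embeddings of a later image taken in lighting conditions L1 are then
# adjusted by deducting e before any comparison with training images.
uploaded_emb = rng.random(12)
adjusted_emb = uploaded_emb - e
```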
The calibration image is used to enable compensation of differences in lighting conditions and thereby indirectly cancel differences in factors affecting colors seen in the image, including but not restricted to ambient light, camera clarity and indoor air quality problems. However, it should be noted that the color shade determining system is capable of adapting the color shade determination such that lighting conditions in production images do not have to be exactly the same as in the calibration image.
The user is provided with two alternatives in the step 302.
In the first alternative, the user may choose to generate his own, private training database by performing a training process in the step 303. This training process will be explained in detail later. The term training database refers to a processed training database that is operable to define tooth color shades.
The second alternative is that the user enables use of the global training database, which is provided by the system provider, ready to be used by any user.
Preferably, if the user selects use of global training database, use of the private training database is no longer enabled, and the system will subsequently only use the global training database provided by the system provider and the user device is auto-calibrated using the above explained calibration technique. If the user initially selects use of a private training database, he retains the option to change to use of the global training database as illustrated with the arrow back to the selection step 302. Such re-selection may occur at any step after first selecting use of private training database.
For protecting user privacy, the images recorded in the training databases include no patient identification information. When training images taken by a user are included in his/her private training database, the images include a label indicating the user identity (U1). Recorded images represent a tooth or a part of a tooth from which neither the user nor the patient can be identified. No mixing of private and global training databases is allowed without the user's permission. Only the administrators of the system providing company can access the global training database, using their administration permission login to the system. In one embodiment, illustrated with step 313, inclusion of a private training database into the global training database may be enabled. However, such inclusion can only be performed by administrators of the system in response to acceptance of such inclusion by the respective user, and any user identification (U1) of the included private training database is removed in this process. The global training database should never include any identification of the users from whom such training database originates, in order to protect user privacy.
After the training database has been selected and, in case use of the private training database was selected, the required training has been performed, the system is ready to be used for determining tooth color shade. In the step 304P, a production image, in other words an image acquired by the user using his mobile device and representing a patient's tooth of unknown color, is obtained and uploaded to the server as explained in connection to figures 2a to 2d, and the server application performs color shade determination on basis of intelligent comparison of the production image with the private training database. Correspondingly, in the step 304G, a production image, in other words an image acquired by the user using his mobile device and comprising teeth of unknown color, is obtained, processed at the user device and uploaded to the server, which performs color shade determination on basis of intelligent comparison of the production image with the global training database. The major process steps are thus the same, except that the training database is selected differently on basis of the user's choice. As explained in connection to figures 2a to 2d, the acquired image may be cropped in the user device, or the appropriate area of the image that includes the selected tooth is marked on the image before uploading the image to the server for processing. The matrix dividing the selected tooth into a plurality of areas to be processed may be defined at the user device or at the server. The image to be uploaded may be appropriately labeled by the user device and/or by the server. If user identity is associated with the uploaded image, it can be stored as a label associated with the image. Also, a label indicating lighting conditions may be associated with the uploaded image. If there is more than one calibration image stored for the particular user, the user preferably selects, from among the venue and/or lighting condition names associated with and thus identifying his/her calibration images, the lighting condition that best matches the current lighting conditions, and the respective lighting condition label (L1) is associated with the uploaded training images and production images using the user application.
Likewise, steps 305P and 305G are mutually similar: in response to uploading the production image to the server, the user receives information that indicates the automatically defined tooth color shade.
The user may repeat steps 304G and 305G or steps 304P and 305P for a plurality of production images. The user may also choose, at any point, to obtain another calibration image. This may occur for example when he/she first time uses the user application at a new venue.
Figure 4 illustrates an exemplary high-level process of determining tooth shade as seen by the system.
In the step 401, the system receives and analyzes a calibration image from a registered user (U1).
When the uploaded calibration image is analyzed at the server, a fingerprint of the user is created that comprises a unique identification of the user (U1), his unique calibration lighting condition (L1) and information associated with the color code embeddings of the predefined color (A1) of the standard A1 tooth as it appears in the calibration image. The data structure stored on the server thus comprises information U1 + L1 + A1.
Bitmap format is preferred for all image types, since compression of the image reduces colors in the image, which would subsequently reduce quality of the color shade determining. Image coding preferably utilizes one of the color spaces commonly used in computer graphics, such as RGB (Red-Green-Blue), CMYK (Cyan-Magenta-Yellow-Black) or YCbCr. The lighting condition is defined by analyzing the calibration image.
One possible method of defining a lighting condition (L1) is to first process the calibration image using the K-means clustering algorithm, which is known in the art of image processing. The calibration image, or each cell of the calibration image, may be simplified into a limited-size color code embeddings matrix using the K-means clustering algorithm. For example, the calibration image or each cell of the calibration image may be simplified into a 1x12 color code embeddings matrix. The determined lighting condition (L1) may be stored as a label associated with the image. For example, when a 1x12 matrix is used as the lighting condition, the label comprises the 1x12 matrix. In an exemplary embodiment, the 1x12 matrix may be defined as follows: the first 3 items of the 1x12 matrix may include the proportions of the 3 most often detected colors in the image. The remainder of the 1x12 matrix may then comprise data indicative of the respective colors. For example, RGB coding may be used. 3 items of the 1x12 matrix may be indicative of RGB values, for example the value of R of the first, most often detected color corresponding to the first of these 3 items, then the value of G of the first color and then B of the first color. Similarly, another 3 items of the 1x12 matrix may represent RGB values of the second most often detected color. The remaining 3 items of the 1x12 matrix may represent RGB values of the third most often detected color. In an alternative embodiment, the most often detected colors defined in the label may be arranged in any order; in other words, the colors do not have to be in order of proportions of appearance of the colors.
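As one hedged illustration of this step, the 1x12 embedding described above (three proportions followed by three RGB centroids) could be computed with scikit-learn's KMeans roughly as follows; the helper name embed_1x12 and the flattened RGB pixel-array input are assumptions made for the sketch:

```python
import numpy as np
from sklearn.cluster import KMeans

def embed_1x12(pixels_rgb):
    """Reduce an image (or one matrix cell) to a 1x12 embedding:
    proportions of the three dominant colors followed by their RGB
    centroids, ordered from most to least frequent.

    pixels_rgb: (N, 3) array of RGB values from the cropped image.
    """
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels_rgb)
    counts = np.bincount(km.labels_, minlength=3)
    order = np.argsort(counts)[::-1]            # most frequent first
    proportions = counts[order] / counts.sum()  # first 3 items
    centroids = km.cluster_centers_[order]      # 3 RGB triplets
    return np.concatenate([proportions, centroids.ravel()])  # shape (12,)
```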
Various labels associated with the images may have a predefined format or free text form, and they may include but are not limited to a label identifying at least one of a venue name and a time of day or time of year. The label may also be a combination of a free text defined by the user and one or more system-provided labels having a predefined form, which may or may not be shown to the user. Further, the calibration image metadata stored as labels associated with the image may include information on the location where the calibration image was taken. This enables the user application to subsequently automatically select, or propose to the user to select, the appropriate calibration image for each subsequent training and/or production image on basis of the location. Location data may be the actual geographical location defined for example on basis of a GPS, GNSS, GLONASS or similar positioning system available in the user device, but location information may also be defined in any other way known in the art, for example on basis of available local area network names.
If the user selects to use a private training database, the server next receives a plurality of training images in the step 402 and processes them to generate a fully functional private training database. If the user selects using the global training database, step 402 may be omitted. The private training database preferably comprises a processed training database based on a plurality of training images that are taken by the same user (U1) and that have the same lighting conditions (L1). When acquiring training and production images, the location may be suggested by the user application on basis of the detected current location of the user and on basis of the locations for which calibration images have been provided. Selection can be fully automatic, or the user may manually select and/or accept a suggestion made by the user application. If there are several mutually very close locations to select from, the application preferably gives a list of the most likely locations in some order of preference, from which the user may select the correct one. Such listing and selection may be necessary if the user for example operates in the same building but in different rooms and/or on different floors, so that the geographical location is not significantly different. Details of handling of training images will be described later in connection to the figure 5. The global training database comprises a training database that is based on a plurality of training images acquired and uploaded by either the system provider or by anonymous users, which training database comprises training data for a plurality of different lighting conditions. After processing the calibration image, the system is ready to receive a production image in the step 403, representing a tooth with unknown color.
The production image is then analyzed in the step 404 to determine the color shade of the tooth. Color shade analysis is performed using a process that will be disclosed in more detail in connection to the figure 6.
After analyzing the color shade of the tooth, the result is communicated and/or provided back to the user in the step 405. Communication may be performed for example by showing in the user interface of the user application the determined color shade of the tooth or, when a matrix (220) is used, the determined color shade of each cell in the matrix. The application also enables the user to export tooth color information as an export file in any applicable computer coding format known in the art. For example, a pdf-document may be exported. Such exported file comprising tooth color information can subsequently be used in communication with a dental laboratory that manufactures the artificial tooth. For example, the exported file may be attached to an email, or the exported file can be automatically transferred to other computer systems over any type of data exchange capable interface known in the art of computer networks and/or mobile devices. Exported and/or communicated tooth color information may further comprise color code embeddings of the analyzed production image.
Figure 5 illustrates an exemplary process of handling a training image when training of a training database is performed. This process may be referred to as a light calibration method. The same process may be applied to training both the private training database and the global training database, although the source of the received training image may be different.
When the training, in other words acquisition of a plurality of training images to create a training database, is performed by the user, the user (U1) can start training his own private database by acquiring a plurality of training images using his camera in his own lighting conditions (L1) corresponding to those of a previously acquired calibration image. The user (U1) preferably acquires images of teeth of different colors, for example model teeth with different known colors according to a standard tooth color shade system. Before or after acquiring a training image, and preferably before uploading the training image, the user labels the training image with a known color shade according to the model tooth. For example, when the Vita Classic Shade Guide is used, model teeth may have colors A1, A2, A3, A3.5, A4, B1, B2, B3, B4, C1, C2, C3, C4, D2, D3, D4, and the respective training images are labeled accordingly. Only single-color teeth or standard colored model teeth should be used for training. The acquired training image and the respective tooth color information is uploaded and stored in the private training database associated with the server.
The global training database is trained in a similar manner, but preferably a plurality of training images taken for each of a plurality of different lighting conditions and known tooth colors.
According to another embodiment, an enhanced training target is provided for a user to be used as a training image. Such an enhanced training target, which may be referred to as a training sheet, comprises a single sheet with a variety of known model tooth color shade samples on a black background. In one example, the variety of shade samples comprises the entire range of shades of a standard shade guide, such as the Vita Classic Shade Guide. Preferably, all shade samples disposed on the enhanced training sheet are rectangular. This simplifies the image processing task, as the wanted shade sample areas may be easily recognized and cropped for further image processing.
Significant amount of time and effort can be saved by utilizing the training sheet, since a single training image enables training of all shades shown in the training sheet in the same lighting conditions.
When the training sheet is used, the acquired training image is preferably processed at the server, since a standard mobile phone camera is unlikely to have image processing capabilities to extract a plurality of selected areas from a single acquired image. The server preferably has pre-stored information on the order of the tooth shade samples on the training sheet so that each sample can be automatically labeled. However, the mobile phone may also comprise image processing functionality which enables separately selecting (cropping) each of the shade samples shown in the acquired, single image of the training sheet and providing a plurality of cropped shade sample images for processing at the server. The image processing functionality of the mobile phone may also be capable of labeling the cropped shade sample images. In a yet further alternative, the shade sample labels may be associated with each of the cropped shade sample images based on order of arrival at the server. Using the training sheet simplifies the manual steps of the training process and thus significantly speeds up the training process at the user's premises, since all color shades may be trained based on a single training image, and a plurality of training images for different tooth shades in given lighting conditions can be stored in the training database on basis of this single training image.

In the step 501, the training image is received at the server. The received training image is labeled with information regarding the known standard color of the model tooth shown in the training image. If the training image is used for training a private training database, the training image will also be tagged with information regarding the user and his initial lighting conditions. In the step 502, the lighting information associated with this training image is obtained by the server. In this method, the lighting information preferably comprises the lighting label L1 associated with a calibration image and the difference e that was defined on basis of the respective calibration image.
In the step 503, an unsupervised learning algorithm is used to analyze the training image. The same unsupervised learning algorithm may be used for defining colors in all image types. For example, the K-means clustering algorithm known in the art may be used, which finds groups in the data. K refers to the number of groups. The algorithm works iteratively to assign each data point of the image, in other words each pixel of the image, to one of K groups based on the features that are provided. Data points are clustered based on feature similarity. When processing training images, the results of the K-means clustering algorithm are:
1. Centroids of the K clusters, which can be used to label new data
2. Labels for the training database (each data point is assigned to a single cluster)
In the current invention, unsupervised learning is applied to convert a large matrix, for example a bitmap acquired with the camera, into a smaller matrix, which in our case includes the predominant color codes in the acquired image.
In the step 504, as a result of the K-means clustering algorithm, color code embeddings of the training image are first defined. The defined training image color code embeddings are then adjusted by deducting the difference e, defined on basis of the calibration image, from the defined color code embeddings. Thus, for an A1 tooth training image a color code embedding EMBS = (L1+A1) − e is stored, for an A2 color tooth training image a color code embedding EMBS = (L1+A2) − e is stored, for an A3 color tooth training image a color code embedding EMBS = (L1+A3) − e is stored, and so on. For example, when the K-means clustering algorithm is used, the input to the K-means clustering algorithm is preferably a cropped tooth image that only comprises a single tooth or the majority of a single tooth. For example, the uploaded image may have a size of 500x500 pixels. As indicated above, instead of the action known as cropping, the user may, using the user interface, determine in any other way an area that shows the one tooth, and the system may automatically crop before the image is uploaded. For accurate tooth color shade determining, lips, gum and other teeth must be cropped away or limited outside the selected area that is to be processed by the K-means clustering algorithm. Further, the area shown in the received image is preferably handled as a matrix of a plurality of cells. The output of the K-means algorithm may be for example a 1x12 matrix called color code embeddings, which only contains K=12 different color codes. This color code embeddings matrix then represents the 12 most likely color code embeddings of the selected tooth. When the selected tooth is handled as a matrix of a plurality of cells, the output of the K-means algorithm is color code embeddings for each of the plurality of cells. These embeddings are then adjusted by deducting the difference defined on basis of the corresponding calibration image.
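The per-shade storage of adjusted training embeddings described above can be summarized in a brief sketch; the dictionary layout, the function name and the shade labels are illustrative assumptions, with the embedding vector assumed to come from the K-means step:

```python
import numpy as np

# shade label -> list of adjusted 1x12 embeddings for lighting L1
training_db = {}

def add_training_embedding(shade, embedding, e):
    """Store EMBS = (L1 + shade) - e for one training image (or cell)."""
    training_db.setdefault(shade, []).append(np.asarray(embedding) - e)

# Example usage with placeholder vectors:
rng = np.random.default_rng(1)
e = rng.random(12)
add_training_embedding('A1', rng.random(12), e)
add_training_embedding('A2', rng.random(12), e)
```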
After storing a plurality of adjusted color code embeddings for a plurality of training images representing sample teeth of known color in approximately the same lighting conditions L1, the training database is ready to be utilized by a supervised learning algorithm to determine the color shade of a tooth with unknown color in approximately the same lighting conditions.
In the step 505, a supervised learning algorithm is applied for determining color shade of a tooth. This step can be used for testing quality of the training data and adding new training images in the training data as well as for determining color shade of a tooth in a production image. Testing refers to obtaining images of known color shade tooth, but not indicating this color to the application. Thus, the application will handle the image as it was a normal production image, and the user may compare the result to the actual known color of the sample tooth.
Preferably, the Support Vector Machine (SVM) algorithm is used as the supervised learning algorithm. The objective of the SVM algorithm is to find a hyperplane in an N-dimensional space (N = number of features) that distinctly classifies the data points, namely the color code embeddings received from the K-means clustering algorithm. SVM has been found to be particularly useful for identifying matching color code embeddings with the lowest distance to the training image color code embeddings in order to predict the shade of the teeth. For example, if the user has trained the model with one image of an A1 color model tooth, the comparison is done during the testing stage to that one training image only. If the user has trained the model using a plurality of images of the A1 color model tooth, for example 50 images, the color code embeddings will be compared with the color code embeddings of all those A1 tooth color training images as well as with the color code embeddings of training images representing any other standard color model teeth, such as A2, A3 and so on.
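A minimal sketch of this supervised step using scikit-learn's SVC; the random placeholder embeddings, the shade labels and their counts are illustrative only and stand in for real adjusted training data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder training data: adjusted 1x12 embeddings with shade labels.
X_train = rng.random((60, 12))
y_train = np.repeat(['A1', 'A2', 'A3', 'B1', 'B2', 'C1'], 10)

clf = SVC(kernel='linear')  # hyperplane-based classifier, as described
clf.fit(X_train, y_train)

# An adjusted embedding of a test or production image is classified into
# the shade whose training embeddings it lies closest to.
X_test = rng.random((1, 12))
print(clf.predict(X_test))
```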
If the acquired image is a training image, the training process utilizes the combination of the unsupervised training algorithm of the step 503 and the supervised training algorithm, and the analyzed new training image is included in the supervised training model in the step 505. If the acquired image is a production image, it is not included in the training model, but the obtained color shade information is communicated back to the user.
If the acquired image is a testing or production image, the uploaded image has no associated information on the expected color of the tooth shown in it. After defining initial color code embeddings for the uploaded image, the color code embeddings are adjusted by deducting the difference defined on the basis of the calibration image from the defined color code embeddings. The supervised learning algorithm is then applied to the adjusted color code embeddings of the uploaded image, comparing them to all color code embeddings in the respective training database. As a result, the supervised learning algorithm decides which tooth color shade has the closest distance to that of the uploaded image and provides this determined color shade as the result to the user.
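Continuing the sketches above, the testing/production path could look like the following, under the same assumptions (color_code_embeddings and svm as defined earlier; the image array and the difference e are placeholders standing in for real values).

```python
import numpy as np

# Placeholders for a real cropped production photograph and for the
# calibration difference e computed in the calibration phase.
rng = np.random.default_rng(1)
uploaded_tooth_rgb = rng.integers(0, 256, size=(500, 500, 3), dtype=np.uint8)
e = np.zeros(12)

production_emb = color_code_embeddings(uploaded_tooth_rgb)  # initial embeddings
adjusted = production_emb - e            # deduct the calibration difference
predicted_shade = svm.predict(adjusted.reshape(1, -1))[0]
print("Determined tooth color shade:", predicted_shade)
```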
The acquired private training database may be merged with a global training database after a check has been performed on the private training database by administrators of the system provider. This avoids uploading erroneously tagged or otherwise erroneous images into the global training database. When merging a private training database with the global training database, the identity of the user is preferably removed from the training images. In the process of merging, the identity of the user is removed by not saving the identity information (UI) of the record, as it has become unnecessary and could potentially be used for identifying the user afterwards.
After acquiring a sufficient number of training images, for example at least 10 training images, the user can start acquiring and uploading actual production images, each representing a tooth of a patient, for color shade determination using his own private training database. Alternatively, the user can select use of a global training database.
The server application is configured to compare the calibration information of the user and the private training database with the global training database. If the server application detects, on the basis of such a comparison, that the global training database includes enough images taken in lighting conditions L1, the application may propose to the user that he could use the global training database instead of the private training database. A sufficient number of images may be, for example, at least 400 images, and preferably at least 500 images.
Tooth color shade may also be an indication of an abnormality in the tooth. Training images can also be taken from a real tooth with an abnormality, such as an abnormal color or shape of the tooth that indicates, for example, that the tooth is dead, has caries or has some other tooth disease. Instead of selecting a standard model color, the user may label the image as representing a tooth with an abnormality. The label given by the user preferably names the type of the abnormality. Preferably, the uploaded image is in this case cropped to represent the area with the abnormality. The training image is then stored in the training database similarly to any other training image. Thus, the same training database may provide means for determining tooth color shade and/or an indication of a possible abnormality. Alternatively, different training databases may be defined for different purposes/uses.
In an alternative embodiment, the steps of figures 4 and 5 are performed in a similar manner, but light calibration and training are done differently.
In the alternative embodiment, no global reference image is used for calibration. Instead of calculating a difference e between the color code embeddings of the calibration image and the global reference image, the color code embeddings of the calibration image are used as such, and a difference is calculated between the color code embeddings of the calibration image and the training image. The calibration image stored in the database is associated with information (L1+A1) that corresponds to the color code embeddings of the calibration image. In the training phase, the color code embeddings of the obtained training images are adjusted in the step 504 by deducting the color code embeddings of the calibration image as such from the color code embeddings of the training image. Thus, a training image representing a model tooth with shade A1 in lighting conditions L1 will initially receive color code embeddings (L1+A1). Adjusted color code embeddings for a model tooth of color A1 are then obtained by calculating EMBS=(L1+A1)-(L1+A1), which ideally gives a value of 0. In practice, the value may, however, slightly deviate from 0. Adjusted color code embeddings for a model tooth of color A2 are obtained by calculating EMBS=(L1+A2)-(L1+A1)=(L1+A1+x)-(L1+A1)=x. The value "x" thus represents the distance of the color embeddings of an image of an A2 colored sample tooth from the calibration image. Likewise, adjusted color code embeddings for a model tooth of color A3 are obtained by calculating EMBS=(L1+A3)-(L1+A1)=(L1+A1+y)-(L1+A1)=y. The value "y" thus represents the distance of the color embeddings of an image of an A3 colored sample tooth from the calibration image. Similar calculations are performed for all training images representing different model tooth colors. This method may be referred to as "magnitude comparison"; in this variant of the method, the magnitude differences between the A1 shade and the other shades are stored and used for the analysis. The alternative embodiment is particularly useful when a global training database is used, since it does not restrict the selection of applicable training images to any specific lighting conditions. However, either of the embodiments may be used with both global and private training databases.
Since the received and stored values are just relative differences in magnitude in comparison to the A1 model tooth in approximately similar lighting conditions L1, lighting and color components are cancelled out from the adjusted color code embeddings that are stored and used by the supervised learning algorithm. Thus, these color code embeddings representing magnitude differences can be used by any user, and the magnitude differences may thus be used for processing any production images of any user in any lighting conditions. Like any color code embeddings, a magnitude difference may be defined as a vector. The magnitude difference vector is preferably of the same form and size as the color code embeddings.
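A sketch of this magnitude comparison variant, reusing the hypothetical color_code_embeddings function from above; the image arrays are random placeholders standing in for actual photographs taken in approximately the same lighting conditions L1.

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder photographs of the A1 calibration tooth and of A2/A3
# model teeth, all taken in approximately the same lighting L1.
calibration_a1_rgb = rng.integers(0, 256, size=(500, 500, 3), dtype=np.uint8)
training_a2_rgb = rng.integers(0, 256, size=(500, 500, 3), dtype=np.uint8)
training_a3_rgb = rng.integers(0, 256, size=(500, 500, 3), dtype=np.uint8)

calib_emb = color_code_embeddings(calibration_a1_rgb)   # (L1+A1)

def magnitude_difference(image_rgb):
    """Embedding relative to the A1 calibration image; L1 cancels out."""
    return color_code_embeddings(image_rgb) - calib_emb

emb_a2 = magnitude_difference(training_a2_rgb)   # (L1+A2)-(L1+A1) = x
emb_a3 = magnitude_difference(training_a3_rgb)   # (L1+A3)-(L1+A1) = y
```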
In the analysis of further testing and production images, color code embeddings of an image in any lighting conditions can be obtained according to the equation EMBS=(L1+Ax)-(L1+A1)=(L1+A1+x)-(L1+A1)=x. The supervised learning algorithm may then be applied in the step 505 to determine the shade of the tooth shown in the uploaded image by finding the color code embeddings in the training database that have the lowest distance to the obtained color code embeddings "x".
Figure 6 illustrates a process of defining tooth color shade.
In the step 601, an image is received that shows at least one tooth of a subject. The color shade of the tooth is unknown.
In the step 602, the applicable calibration image in the selected training database, private or global, is selected on the basis of at least one of user information and lighting information.
In the step 603, K-means clustering similar to that explained in connection with phase 440 is applied to the received image for obtaining color code embeddings of the received image. In the preferred embodiment, the color code embeddings are all associated with colors of a single tooth. However, the same principle may be used for detecting symptoms of a dental disease based on the color of the tongue or gums, or on an extraordinary color of the tooth, which may be indicative of, for example, dental caries. Further, the same principle can be used for detecting a change of the color of the tooth or teeth over time by detecting the color from the same user on a regular basis. The color of the tooth of the user may change over time for various reasons, including but not limited to recurring user actions to whiten the teeth, or the user's diet, which may affect the color of the tooth. For such a complementary application, the respective part of the image should be selected that represents the tooth of interest, the tongue or the part of the gums.
In the step 604, the obtained color code embeddings are adjusted to remove or reduce the effects of lighting on the obtained color code embeddings. As explained above, depending on the method used, the adjustment may be performed either by deducting the difference e from the obtained color code embeddings or by deducting the color code embeddings of the calibration image from the obtained color code embeddings. The result of the step 604 is adjusted color code embeddings.
In the step 605, the adjusted color code embeddings are compared to trained color code embeddings among the training images in the applicable training database. In the step 606, the color code embeddings of the training image are selected that have the lowest distance to the unknown image's color code embeddings. The tooth color associated with the selected training image with the lowest distance is deemed to represent the color of the tooth in the uploaded image. The color code embeddings are preferably selected in an iterative manner: in a series of iteration steps, the most likely color code embeddings out of a subset of possible color code embeddings are suggested as the selected color code embeddings. When a three-field color coding, such as RGB or YCbCr, is used, a 1x12 matrix may be used for the color code embeddings. The iteration may start with all possible colors, and the number of possible colors is reduced until just 16 possibilities for color code embeddings are left. Out of these 16 possibilities, the three most likely, in other words the three most commonly appearing, color code embedding options are selected. Naturally, instead of the 16 possibilities used in the example, any integer number of color code possibilities may be used. The three most likely color code embeddings are included in the 1x12 color code embeddings matrix. Preferably, three fields of the matrix indicate the relative amounts of the three most common color code embeddings, and the remaining fields are reserved for indicating the color coding for these. For example, three fields of the matrix may be used to include the RGB or YCbCr color code values of each of the three most common colors. If a four-color model such as CMYK is used for color coding, the color code matrix may be, for example, of size 1x15 to allow using four fields in the matrix for each one of the three most common color code embeddings.
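For the lowest-distance selection of steps 605 and 606, a simple sketch could be the following, assuming Euclidean distance between embedding vectors; training_db is a hypothetical mapping from shade label to a stored adjusted embedding.

```python
import numpy as np

def closest_shade(adjusted_emb: np.ndarray, training_db: dict) -> str:
    """Return the shade label of the training embedding nearest to adjusted_emb."""
    best_shade, best_dist = None, float("inf")
    for shade, trained_emb in training_db.items():
        dist = np.linalg.norm(adjusted_emb - trained_emb)
        if dist < best_dist:
            best_shade, best_dist = shade, dist
    return best_shade

# Example usage: closest_shade(adjusted, {"A2": emb_a2, "A3": emb_a3})
```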
In the step 607, tooth color information indicative of the determined tooth color shade is communicated to the user. The tooth color information may comprise the color code associated with the selected training image that corresponds to the most likely color shade of the tooth. The determined tooth color information may also comprise the determined color code embeddings.
In the optional step 608, at least part of the tooth color information is exported from the system, preferably in a digital format, to another data processing system or application of the user. By exporting the tooth color information, it can be made available to other systems or applications, where it may be further processed and/or analyzed by the user, or by a system or application of the user, for any purpose. For example, tooth color information may be exported to another application and/or to archives of the user for later use.
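A possible digital export is sketched below; the JSON field names, the file name and the example values are assumptions only, not a format defined by this disclosure.

```python
import json

# Hypothetical export of the tooth color information for another system.
tooth_color_info = {
    "shade": "A2",                  # determined tooth color shade
    "embeddings": [0.5, 0.3, 0.2],  # determined color code embeddings (shortened)
}
with open("tooth_color_info.json", "w") as f:
    json.dump(tooth_color_info, f)
```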
According to an embodiment, at least part of the tooth color information may be exported into an application facilitating manufacturing of an artificial tooth or teeth. In another embodiment, at least part of the tooth color information may be stored into another user application to be subsequently used as a basis of tooth color shade comparison. A tooth color shade comparison application may be used, for example, to detect changes in tooth color shade due to whitening or due to diet. In yet another embodiment, the tooth color information may be determined to indicate that the tooth may, based on its color, have some abnormality. This is possible if the training database comprises training images of teeth with abnormalities. An indication of the likelihood of an abnormality may also be provided to the user, so that he may examine the tooth in more detail. When tooth color information is exported, the step 607 may be omitted, since the tooth color shade information can be made available to the user via the other application or system that receives the exported tooth color information.
The main intelligence of the system resides at the server, more precisely in the server application running on the server. The user application running in the mobile device needs a data connection to the server. The user application acts as a user interface, allowing the user to acquire and upload images, to tag tooth colors in the training images, and to receive tooth color information.
When the system is tested, every trained color code embedding from the SVM model is compared to the color code embeddings that correspond to the test images captured for testing, and the color code embedding with the lowest distance gives the resulting tooth color shade. Thus, the resulting tooth color shade may be expressed by referring to a particular tooth color as defined in the used standard tooth color shade system. Testing ensures that the system works as intended and that the color shades are detected accurately. It is apparent to a person skilled in the art that as technology advances, the basic idea of the invention can be implemented in various ways. The invention and its embodiments are therefore not restricted to the above examples, but they may vary within the scope of the claims.

Claims
1. A computer-implemented method of defining color shade of a tooth using a camera of a mobile communication device, characterized by
- receiving an image of a tooth of a subject, wherein the received image comprises a part of an image acquired with a plain camera of the mobile communication device;
- obtaining indication of lighting conditions of the received image;
- selecting applicable training images from a training database, wherein each training image comprises an image of a model tooth with known color;
- applying K-means clustering to the received image to obtain color code embeddings of the received image;
- adjusting the color code embeddings of the received image based on the indication of lighting conditions;
- comparing the obtained color code embeddings to all color code embeddings of the selected training images to find the training image with color code embeddings having the lowest distance to the received image's color code embeddings;
- defining the color shade of the tooth in the received image to be equal to the color shade of a model tooth shown in the training image having the lowest distance of color code embeddings to those of the received image; and
- communicating tooth color information indicative of the defined tooth color shade back to the mobile communication device from which the image was received.
2. The method according to claim 1, wherein area of the received image is divided into a matrix with a plurality of cells, each cell representing one part of the tooth, and wherein color code embeddings are defined for each cell of the matrix.
3. The method according to any of claims 1 to 2, wherein the training database is one of a private training database and a global training database.
4. The method according to any of claims 1 to 3, wherein the received image is associated with a label indicating lighting conditions in which the received image was acquired.
5. The method according to any of claims 1 to 4, wherein the adjusting the color code embeddings comprises deducting from the obtained color embeddings of the received image a difference between color code embeddings of a global reference image and a calibration image acquired with the mobile communication device in lighting conditions that approximately correspond to the lighting conditions in the received image.
6. The method according to claim 5, wherein the applicable training images comprise a plurality of training images each associated with a label that indicates that the respective training image has been acquired in approximately similar lighting conditions with the received image and the calibration image and a label indicating actual color shade of a model tooth shown in the respective training image.
7. The method according to any of claims 1 to 4, wherein the adjusting the color code embeddings comprises calculating a magnitude difference between color code embeddings of the received image and color code embeddings of a calibration image acquired with the mobile communication device in approximately similar lighting conditions.
8. The method according to claim 7, wherein applicable training images comprise a plurality of training images associated with magnitude difference information, and the training image having the lowest distance is defined by comparing the magnitude difference of the received image and magnitude differences associated with the applicable training images.
9. The method according to any of the preceding claims, further comprising:
- exporting at least part of the tooth color information to another application or data processing system.
10. The method according to any of the preceding claims, further comprising at least one of:
- providing the defined tooth color information to be used as basis for manufacturing an artificial tooth or artificial teeth that have the defined color shade,
- comparing the defined color shade to a color shade of the same tooth of the same subject obtained previously for determining change of color shade of the tooth, and
- providing an indication that the defined tooth color information indicates an abnormality in the tooth.
11. A computer-implemented method of defining color shade of a tooth using a camera of a mobile communication device, characterized by
- acquiring an image of teeth of a subject with a plain camera of the mobile communication device;
- receiving, via the user interface of the mobile communication device, determination of an area in the acquired image that comprises one tooth;
- pre-processing the acquired image to produce an image of the one tooth for uploading;
- associating a label with the image that indicates lighting conditions in which the image was acquired; and
- uploading the image of the tooth to a server:
- for obtaining indication of lighting conditions of the received image on basis of the associated label;
- for selecting applicable training images from a training database, wherein each training image comprises an image of a model tooth with known color,
- for applying K-means clustering to the received image to obtain color code embeddings of the uploaded image,
- for comparing the obtained color code embeddings to all color code embeddings of the selected training images to find the training image with the lowest distance to the uploaded image's color code embeddings, and
- for defining the color shade of tooth shown in the uploaded image to be equal to the color shade of a model tooth shown in the training image having the lowest distance of color code embeddings to those of the uploaded image; and
- receiving by the mobile communication device tooth color information indicative of the defined color shade of the tooth of the subject comprised in the uploaded image.
12. The method according to claim 11, wherein area of the uploaded image is divided into a matrix with a plurality of cells, each cell representing one part of the tooth, and wherein color code embeddings are defined for each cell of the matrix.
13. The method according to any of claims 11 to 12, wherein the training database is one of a private training database and a global training database.
14. The method according to any of claims 11 to 13, wherein the method further comprises associating, before uploading, the uploaded image with a label indicating lighting conditions in which the image was acquired.
15. The method according to any of claims 11 to 14, further comprising at least one of:
- providing the defined tooth color information to be used as basis for manufacturing an artificial tooth or artificial teeth that have the defined color shade,
- indicating a result of comparison of the defined color shade to a color shade of the same tooth of the same subject obtained previously for determining change of color shade of the tooth, and
- providing an indication that the defined tooth color information indicates an abnormality in the tooth.
16. A computer program product having instructions which when executed cause a computing device or system to perform a method according to any one of claims 1 to 10 or any one of claims 11 to 15.
17. A data-processing system comprising means for carrying out the method according to any one of claims 1 to 15.
18. A data-processing apparatus comprising means for carrying out the method according to any one of claims 1 to 10.
19. A mobile communication device comprising means for carrying out the method according to any one of claims 11 to 15.