EP3948786A1 - Determining tooth color shade based on an image obtained using a mobile device - Google Patents


Info

Publication number
EP3948786A1
EP3948786A1
Authority
EP
European Patent Office
Prior art keywords
image
tooth
color
training
embeddings
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20717242.0A
Other languages
German (de)
English (en)
Inventor
Tomi HOTAKAINEN
Jukka KUOSMANEN
Ville Sarja
Karthik Balu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LUMI DENTAL LTD
Original Assignee
Lumi Dental Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lumi Dental Oy filed Critical Lumi Dental Oy
Publication of EP3948786A1
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C: DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00: Dental auxiliary appliances
    • A61C19/04: Measuring instruments specially adapted for dentistry
    • A61C13/00: Dental prostheses; Making same
    • A61C13/08: Artificial teeth; Making same
    • A61C13/082: Cosmetic aspects, e.g. inlays; Determination of the colour
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01J: MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00: Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/46: Measurement of colour; Colour measuring devices, e.g. colorimeters
    • G01J3/463: Colour matching
    • G01J3/50: Measurement of colour using electric radiation detectors
    • G01J3/508: Measuring the colour of teeth
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10024: Color image
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30004: Biomedical image processing
    • G06T2207/30036: Dental; Teeth

Definitions

  • The present invention relates to a method, a system and a computer program product for determining the color shade of a tooth of a subject. More particularly, the invention relates to determining the color shade of a tooth in order to enable manufacturing of an artificial tooth, artificial teeth or a dental crown (corona artificialis) in the correct color, to enable determining a change in tooth color, and/or to provide an indication that the tooth may suffer from an abnormality.
  • Natural teeth are formed by layers of materials having different optical characteristics.
  • The enamel on the outer part of the tooth greatly affects the color shade of the tooth, but the dentine under the enamel may also affect the color shade, especially if the enamel is particularly thin and/or translucent. The thickness of the enamel varies. This complexity of the structure of the teeth makes color shade determination a challenging task.

Description of the related art
  • Imaging-based solutions for tooth color shade detection recognize the problem that lighting conditions may significantly affect the colors appearing in the image.
  • Two main types of solutions are known in the art for calibrating the color shades: lighting conditions are standardized, for example using a lighting device that shuts out any ambient light that would affect the acquired image, and/or one or more standard reference colors or color shades are included in the same image with the subject tooth or teeth. These calibration methods may be used either separately or combined.
  • International patent application WO17166326 A1 discloses a method and device for color comparison of an artificial tooth using an image.
  • A standardized color comparison environment having a standard light source is provided. The image as such is sent to an artificial tooth production center, and a technician compares the image shown on a color-corrected monitor to a selection of colors placed under a similar standard light source.
  • An object is to provide a method and apparatus that solve the problem of determining tooth color shade.
  • The objects of the present invention are achieved with a method performed in a server according to claim 1 and with a method performed in a mobile communication device according to claim 11.
  • The objects of the present invention are further achieved with a computer program product according to claim 15, a data-processing system comprising means for carrying out the method steps, a data-processing apparatus according to claim 17 and a mobile communication device according to claim 18.
  • A computer-implemented method of defining the color shade of a tooth using a camera of a mobile communication device comprises receiving an image of a tooth of a subject, wherein the received image comprises a part of an image acquired with a plain camera of the mobile communication device, and obtaining an indication of the lighting conditions of the received image.
  • The method also comprises selecting applicable training images from a training database, wherein each training image comprises an image of a model tooth with a known color, and applying K-means clustering to the received image to obtain color code embeddings of the received image.
  • The method further comprises adjusting the color code embeddings of the received image based on the indication of lighting conditions, comparing the obtained color code embeddings to all color code embeddings of the selected training images to find the training image whose color code embeddings have the lowest distance to those of the received image, and defining the color shade of the tooth in the received image to be equal to the color shade of the model tooth shown in that training image.
  • The method also comprises communicating tooth color information indicative of the defined tooth color shade back to the mobile communication device from which the image was received.
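The clustering-and-matching steps described above can be sketched in Python. This is a minimal illustration only, not the claimed implementation: the function names, the choice of k, the brightness-based ordering of cluster centers and the Euclidean distance metric are all assumptions.

```python
import numpy as np

def kmeans_centers(pixels, k=3, iters=20, seed=0):
    """Plain K-means over RGB pixel rows; returns k cluster centers
    sorted by brightness so embeddings are comparable across images."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its pixels (keep it if empty).
        centers = np.stack([pixels[labels == i].mean(axis=0)
                            if np.any(labels == i) else centers[i]
                            for i in range(k)])
    return centers[np.argsort(centers.sum(axis=1))]

def color_embedding(image, k=3):
    """Summarize an HxWx3 image as its k dominant colors, flattened."""
    return kmeans_centers(image.reshape(-1, 3).astype(float), k=k).ravel()

def closest_shade(embedding, training_embeddings):
    """Pick the training shade whose embedding has the lowest distance."""
    return min(training_embeddings,
               key=lambda s: np.linalg.norm(embedding - training_embeddings[s]))
```

In this sketch, the shade defined for a production image is simply the label of the training image whose embedding lies nearest in embedding space.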
  • The area of the received image is divided into a matrix with a plurality of cells, each cell representing one part of the tooth, and color code embeddings are defined for each cell of the matrix.
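One plausible way to realize this cell division, sketched under the assumption that the image is a numpy array; the helper name is hypothetical, and edge cells absorb any remainder pixels, so cell areas are only approximately equal.

```python
import numpy as np

def split_into_cells(image, rows=3, cols=3):
    """Divide an HxWx3 image into a rows x cols grid of sub-images;
    per-cell color code embeddings would then be computed on each one."""
    h, w = image.shape[:2]
    row_edges = np.linspace(0, h, rows + 1, dtype=int)
    col_edges = np.linspace(0, w, cols + 1, dtype=int)
    return [[image[row_edges[r]:row_edges[r + 1], col_edges[c]:col_edges[c + 1]]
             for c in range(cols)]
            for r in range(rows)]
```

Every pixel of the uploaded image lands in exactly one cell, so no part of the selected tooth is lost by the division.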
  • The training database is one of a private training database and a global training database.
  • The received image is associated with a label indicating the lighting conditions in which it was acquired.
  • The adjusting of the color code embeddings comprises deducting, from the obtained color code embeddings of the received image, the difference between the color code embeddings of a global reference image and those of a calibration image acquired with the mobile communication device in lighting conditions that approximately correspond to the lighting conditions of the received image.
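Read literally, this adjustment subtracts the lighting offset between the global reference and the local calibration image. A minimal numpy sketch, with function and parameter names that are assumptions:

```python
import numpy as np

def adjust_embeddings(received, calibration, global_reference):
    """Deduct from the received image's embeddings the difference between
    the global reference embeddings and the calibration embeddings,
    compensating for the local lighting conditions."""
    lighting_difference = np.asarray(global_reference) - np.asarray(calibration)
    return np.asarray(received) - lighting_difference
```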
  • The applicable training images comprise a plurality of training images, each associated with a label indicating that the respective training image was acquired in approximately similar lighting conditions to the received image and the calibration image, and a label indicating the actual color shade of the model tooth shown in the respective training image.
  • The adjusting of the color code embeddings comprises calculating a magnitude difference between the color code embeddings of the received image and the color code embeddings of a calibration image acquired with the mobile communication device in approximately similar lighting conditions.
  • The applicable training images comprise a plurality of training images associated with magnitude difference information, and the training image having the lowest distance is found by comparing the magnitude difference of the received image to the magnitude differences associated with the applicable training images.
  • The method further comprises exporting at least part of the tooth color information to another application or data-processing system.
  • The method further comprises at least one of: providing the defined tooth color information to be used as a basis for manufacturing an artificial tooth or artificial teeth that have the defined color shade, comparing the defined color shade to a color shade of the same tooth of the same subject obtained previously in order to determine a change in the color shade of the tooth, and providing an indication that the defined tooth color information indicates an abnormality in the tooth.
  • A data-processing apparatus comprising means for carrying out the method according to any of the above aspects.
  • A computer-implemented method of defining the color shade of a tooth using a camera of a mobile communication device comprises acquiring an image of the teeth of a subject with a plain camera of the mobile communication device, receiving, via the user interface of the mobile communication device, a determination of an area in the acquired image that comprises one tooth, pre-processing the acquired image to produce an image of the one tooth for uploading, and associating with the image a label that indicates the lighting conditions in which the image was acquired.
  • The method further comprises uploading the image of the tooth to a server for obtaining an indication of the lighting conditions of the uploaded image on the basis of the associated label, for selecting applicable training images from a training database, wherein each training image comprises an image of a model tooth with a known color, for applying K-means clustering to the uploaded image to obtain color code embeddings of the uploaded image, for comparing the obtained color code embeddings to all color code embeddings of the selected training images to find the training image with the lowest distance to the uploaded image's color code embeddings, and for defining the color shade of the tooth shown in the uploaded image to be equal to the color shade of the model tooth shown in that training image.
  • The method also comprises receiving, by the mobile communication device, tooth color information indicative of the defined color shade of the tooth of the subject shown in the uploaded image.
  • The area of the uploaded image is divided into a matrix with a plurality of cells, each cell representing one part of the tooth, and color code embeddings are defined for each cell of the matrix.
  • The training database is one of a private training database and a global training database.
  • The method further comprises associating, before uploading, the uploaded image with a label indicating the lighting conditions in which the image was acquired.
  • The method further comprises at least one of: providing the defined tooth color information to be used as a basis for manufacturing an artificial tooth or artificial teeth that have the defined color shade, indicating a result of a comparison of the defined color shade to a color shade of the same tooth of the same subject obtained previously in order to determine a change in the color shade of the tooth, and providing an indication that the defined tooth color information indicates an abnormality in the tooth.
  • A mobile communication device comprising means for carrying out the method according to any one of the eleventh to fifteenth method aspects.
  • A computer program product having instructions which, when executed, cause a computing device or system to perform a method according to any one of the above method aspects.
  • A data-processing system comprising means for carrying out the method according to any of the above method aspects.
  • The present invention is based on the idea of acquiring an image of teeth with a standard camera provided in a mobile communication device and using artificial intelligence, obtained with a combination of machine learning methods, for teaching and enabling the system to recognize the exact color shade of a tooth of previously unknown color shown in the image.
  • The determined color shade may then be utilized for manufacturing artificial teeth or a tooth crown.
  • The present invention has the advantage that the dentist can use a simple, plain mobile communication device camera, for example a mobile phone or tablet computer camera, to acquire a digital image of the teeth of a patient, while the system accurately determines the color shade of the tooth on the basis of the acquired digital image with high reliability, without requiring exact calibration of the parameters that affect the colors appearing in the acquired digital image.
  • Figure 1 is a schematic representation of a system.
  • Figures 2a to 2d illustrate pre-processing of the images.
  • Figure 3 illustrates an exemplary process of determining a tooth shade as seen by a user.
  • Figure 4 illustrates an exemplary process of determining a tooth shade as seen by the system.
  • Figure 5 illustrates a process of handling a training image.
  • Figure 6 illustrates a process of defining tooth color shade.
  • Figure 1 illustrates a system according to the invention.
  • This exemplary system serves three different users, for example dentists, each having their own mobile device (100a, 100b, 100c) equipped with a camera and mobile data connectivity. Any number of users may use the actual system.
  • Each mobile device (100a, 100b, 100c) is capable of connecting to a server (110) over a data connection.
  • The server (110) carries the responsibility for the intelligence in the system.
  • The server comprises, or is associated with, at least one image storage means (120, 125a, 125b, 125c, 130, 140).
  • The system utilizes at least three types of images, namely training images, calibration images and production images.
  • The term 'training image' refers to an image that is used for training an artificial intelligence mapping function to map an image representing a tooth or teeth of a patient to a particular tooth shade.
  • The term 'calibration image' refers to an image that is used for calibrating lighting conditions. Lighting conditions refer to the normal ambient lighting conditions at the venue, without any specially designed apparatus attached to or associated with the mobile phone or its camera for standardizing the lighting conditions.
  • The term 'production image' refers to an image representing a tooth or teeth of an actual patient, the color of which is to be determined.
  • Each of the training image, calibration image and production image comprises an image of a single tooth that covers at least the majority of the visible area of that tooth.
  • Training images comprise one or more collections of images used for training the tooth color shade determining system.
  • The term 'training database' refers to a collection of labeled training images together with the training data associated with the respective images. Training data may be associated with the training images as labels. Training databases may be logically divided into private training databases (125a, 125b, 125c) and global training databases (120).
  • The labels associated with each training image preferably comprise an identifier of the user, one or more identifiers of the training environment and attributes of the tooth color shade.
  • The identifier of the user may be, for example, a username.
  • Identifiers of the training environment may comprise the time at which the image was taken, the name of the venue as given by the user, GPS coordinates, the light source, the model of the camera, the phone model and the calibration image name.
  • The light source may, for example, define the name and/or model of the light source in the venue as given by the user.
  • Attributes of the tooth color shade include the color shade to which the image was compared (for example A1) and the color code embeddings. If the image was taken from a standard tooth shade guide, the attributes of the tooth color shade include the known standard shade. If the training image is taken from a real tooth, there is a non-zero likelihood that there is an error in the training data.
  • The global training database preferably comprises the same data as a private training database, except for the user identification data and the user-entered name of the venue.
  • Training images in a private training database may comprise an identification of the user, but training images in a global training database (120) should not comprise any user identification, in order to ensure user privacy.
  • Images, including training images, calibration images and production images, are preferably labeled differently according to their intended use, origin and other information associated with the images, for example by the user, and the processing, storage and use of the images by the server (110) is based on this labeling.
  • Labeling enables flexible use of images. For example, images from a private training database may be included in the global training database by relabeling. However, to maintain full control of the privacy of users as well as the quality and content of the global training database (120), such inclusion is preferably only performed by operators or supervisors of the system with the consent of the respective user.
  • The term 'label' refers to any type of additional data associated with images. Labels may also be referred to as metadata.
  • A training database (120, 125a, 125b, 125c) may be used for color shade determination of production images acquired using the camera of any one of the mobile devices (100a, 100b, 100c).
  • Each user can build their own private training database (125a, 125b, 125c), accessible only by the respective user using his/her mobile device (100a, 100b, 100c) or another internet-capable device that is capable of authenticating the user.
  • The private and global training databases comprise data associated with each of a plurality of training images that are used to teach the system to correctly recognize the color shade of a tooth.
  • The training databases (120, 125a, 125b, 125c), the calibration image storage means (130) and the production image storage means (140) may reside in the same or in different physical storage devices, as known in the art.
  • The server (110) may be a single physical server or a plurality of physical servers, or the server (110) may be implemented in a virtual server or in a cluster of virtual servers, each providing service to one or more users (100a, 100b, 100c), as known in the art.
  • The training images stored in the private or global training databases are used for training an artificial intelligence mapping function to map an image representing a tooth or teeth of a patient to a particular tooth shade.
  • A plurality of images (200) of the teeth of the patient is acquired by the user with the normal, plain camera of the mobile device.
  • Each acquired image is pre-processed at the user device before the training image is sent to the server.
  • The pre-processing will be described in more detail later on.
  • Images used for a particular private training database are always acquired using the same mobile device of the user, which comprises a camera and a tooth color shade application for processing and labeling the images.
  • The tooth color shade application running at the server may be referred to as 'the server application'.
  • The user may choose to build several private training databases for different lighting conditions. This may be the case if the user works and takes pictures in different venues, or if the amount of ambient light in a particular venue varies greatly depending on, for example, the time of day or time of year.
  • Figures 2a to 2d illustrate examples of pre-processing of acquired images. The same process is in principle applicable to all types of images used in this system, in other words to training images, calibration images and production images.
  • The acquired image shows the mouth and teeth of a subject.
  • Alternatively, the image may show a model tooth on an arbitrary background.
  • After acquiring the image with the camera of the mobile device, the user preferably indicates an area showing a single tooth (210) in the acquired image (200).
  • The user indicates the selection of the single tooth (210) by cropping the image (200), as illustrated with the grey shading in figure 2b, so that only the single, selected tooth (210) is shown in the cropped image (200').
  • Alternatively, the user may be enabled to select a part (200") of the image, as illustrated in figures 2c and 2d, and the user application may automatically crop the image by removing image data outside the area selected by the user.
  • Alternatively, information on the selected area may be included in the acquired image, which is uploaded in its entirety to the server, wherein only the selected area is processed for tooth color determination.
  • The term 'uploaded image' refers to the cropped image (200') or the selected part of the image (200") that is uploaded to the server for processing.
  • The user application is preferably operable for performing the cropping of the image or for selecting the area in the image for automatic cropping.
  • Alternatively, the user could use other image processing software in the user device for cropping the image and only then associate the cropped image with the user application.
  • The uploaded image (200', 200") showing the selected tooth (210) may be divided into a matrix (220) having a plurality of cells, and a color shade is defined for each of these cells.
  • The cells of the matrix (220) have approximately equal areas.
  • Defining the matrix (220) is preferably performed at the server by the server application, but the matrix (220) may also be defined by the user application. Dividing the area of the uploaded image (200', 200") into a plurality of cells enables taking into account the variation of the tooth color shade between different parts of the selected tooth (210).
  • According to one exemplary embodiment, a 3*3 matrix may be used, as illustrated in figure 2b, and according to another exemplary embodiment a 3*4 matrix, as illustrated in figures 2c and 2d.
  • Alternatively, a smaller or larger matrix (220) may be used, for example a 2*2, 2*3, 4*4, 4*5, 4*6, 5*5 or 5*6 matrix. Decreasing the size of the matrix reduces the accuracy of color shade determination in different parts of the tooth, but tests have indicated that sufficiently accurate color shade determination for different parts of the tooth can be achieved, for example, with a 3*3 matrix.
  • The cropped image or the selected area of the image is preferably rectangular, as is common in the art, but it can also be free-form, as illustrated in figure 2d, to facilitate inclusion of the majority of the area of the selected tooth (210) in the uploaded image (200', 200") for tooth color shade determination, without including any noise caused by unwanted objects, such as neighboring teeth, gum, tongue or lips, in the part of the acquired image to be analyzed. If a free-form selection is used, the cells of the matrix may have mutually different shapes and sizes, especially at the outer edges of the matrix.
  • Figure 3 illustrates an exemplary high-level process of determining a tooth shade as seen by a user using the user application.
  • A user interface towards the system is preferably provided by the user application running on a processor of the mobile device used by the user.
  • After installing the user application on the mobile device and signing up as a user of the user application, the user first acquires a calibration image using the camera of the mobile device in step 301.
  • A unique identification is generated for the user.
  • In this example, the user is identified with the identification 'U1'.
  • The calibration image is to be acquired in the normal working lighting conditions at the user's premises, for example at a dentist's reception. Normal working lighting conditions thus refer to the normal ambient lighting conditions at the venue, without any specially designed apparatus attached to or associated with the mobile phone or its camera for standardizing the lighting conditions. If the user works in more than one venue at different times, a separate calibration image is preferably acquired for each of the venues, since the lighting conditions likely vary significantly between them.
  • The user application allows the user to tag each calibration image according to the venue in which it was taken. Further, the geographical location of the calibration image may be associated with the calibration image and used as identification of the venue.
  • The venues may be named in any manner. For example, simple numbering or alphabetic naming may be used. Preferably, the user may name the venue freely, reflecting the actual name of the venue, such as "Discovery Bay Office", "Central Office" and so on. This name is shown to the user in his/her user application. This way it becomes easier for the user to select the correct venue each time he/she uses the user application subsequently.
  • The calibration image naming functionality of the user application may be utilized for naming calibration images taken at different times in the same venue.
  • The calibration image thus acquired is used for an initial lighting calibration.
  • The calibration image is preferably an image of a tooth sample with a predefined color, for example the A1 color shade as provided in a standard shade guide, such as the Vita Classic Shade Guide used since the 1960s, imaged using the light source in the dentist's office.
  • The calibration image is cropped, or the area of the sample tooth is selected, as explained above in relation to figures 2a to 2d.
  • The calibration image is uploaded to the server (110) and stored as a calibration image in the memory device (130) at or associated with the server.
  • The stored calibration image may be labeled as a calibration image for this particular user (U1), together with the user-given name for the calibration image.
  • The calibration image is a reference that is used for determining approximate lighting conditions (L1).
  • The resulting color code embeddings of the calibration image are compared to the color code embeddings of a global reference image. Since both the calibration image and the global reference image represent a sample tooth of known color, preferably the A1 color, a difference 'e' can be calculated between the color code embeddings of the calibration image and the embeddings of the global reference image.
  • Calculation of the difference 'e' can be expressed with a mathematical representation.
  • Let the lighting conditions in the calibration image be L1, and let the model tooth shown in the calibration image have shade A1.
  • The color code embeddings of the calibration image are then associated with the parameters (L1+A1).
  • The color code embeddings of the corresponding global reference image are associated with the parameters Global(L+A1).
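Using this notation, the calibration-time record could be sketched as follows. Only the difference 'e' between the Global(L+A1) and (L1+A1) embeddings is taken from the text; the record structure, the function name and the example vectors are assumptions.

```python
import numpy as np

def calibration_fingerprint(user_id, lighting_id, calib_embedding, global_embedding):
    """Build the per-user calibration record: identity (e.g. U1), lighting
    condition (e.g. L1), and the difference e between the global reference
    embeddings Global(L+A1) and the calibration embeddings (L1+A1)."""
    e = (np.asarray(global_embedding, dtype=float)
         - np.asarray(calib_embedding, dtype=float))
    return {"user": user_id, "lighting": lighting_id, "e": e}
```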
  • The calibration image is used to enable compensation of differences in lighting conditions, thereby indirectly cancelling differences in factors affecting the colors seen in the image, including but not restricted to ambient light, camera clarity and indoor air quality.
  • With this, the color shade determining system is capable of adapting the color shade determination such that the lighting conditions in production images do not have to be exactly the same as in the calibration image.
  • The user is provided with two alternatives in step 302.
  • The first alternative is that the user builds a private training database; here, 'training database' refers to a processed training database that is operable to define tooth color shades.
  • The second alternative is that the user enables use of the global training database, which is provided by the system provider, ready to be used by any user.
  • If the user selects use of the global training database, use of the private training database is no longer enabled, the system will subsequently only use the global training database provided by the system provider, and the user device is auto-calibrated using the above-explained calibration technique.
  • If the user initially selects use of a private training database, he retains the option to change to use of the global training database, as illustrated with the arrow back to the selection step 302. Such re-selection may occur at any step after first selecting use of the private training database.
  • The images recorded in the training databases include no patient identification information.
  • In a private training database, the images include a label indicating the user identity (U1). Recorded images represent a tooth or a part of a tooth from which neither the user nor the patient can be identified.
  • No mixing of private and global training databases is allowed without the user's permission. Only the administrators of the system-providing company can access the global training database, using their administration permission login to the system.
  • Inclusion of a private training database into the global training database may be enabled. However, such inclusion can only be performed by administrators of the system in response to acceptance of such inclusion by the respective user, and any user identification (U1) of the included private training database is removed in this process.
  • The global training database should never include any identification of the users from whom the training data originates, in order to protect user privacy.
  • When the private training database is used, a production image, in other words an image acquired by the user using his mobile device and representing a patient's tooth of unknown color, is obtained, and the server application performs color shade determination on the basis of an intelligent comparison of the production image with the private training database.
  • When the global training database is used, a production image, in other words an image acquired by the user using his mobile device and comprising teeth of unknown color, is obtained, processed at the user device and uploaded to the server, which performs color shade determination on the basis of an intelligent comparison of the production image with the global training database.
  • The acquired image may be cropped in the user device, or the appropriate area of the image that includes the selected tooth may be marked on the image before uploading the image to the server for processing.
  • The matrix dividing the selected tooth into a plurality of areas to be processed may be defined at the user device or at the server.
  • The image to be uploaded may be appropriately labeled by the user device and/or by the server. If a user identity is associated with the uploaded image, it can be stored as a label associated with the image. Also, a label indicating lighting conditions may be associated with the uploaded image.
  • the user preferably selects, among the venue and/or lighting condition names associated with and thus identifying his/her calibration images, the one lighting condition that best matches the current lighting conditions, and the respective lighting condition label (LI) is associated with the uploaded training images and production images using the user application.
  • steps 305P and 305G are mutually similar: in response to uploading the production image to the server, the user receives information that indicates the automatically determined tooth color shade.
  • the user may repeat steps 304G and 305G or steps 304P and 305P for a plurality of production images.
  • the user may also choose, at any point, to obtain another calibration image. This may occur for example when he/she first time uses the user application at a new venue.
  • Figure 4 illustrates an exemplary high-level process of determining tooth shade as seen by the system.
  • the system receives and analyzes a calibration image from a registered user (Ul).
  • a fingerprint of the user is created that comprises a unique identification of the user (Ul), his unique calibration lighting condition (LI) and information on the color code embeddings associated with the predefined color (A1) of the standard A1 tooth as it appears in the calibration image.
  • the data structure stored on the server thus comprises the information Ul + LI + A1.
  • Bitmap format is preferred for all image types, since compression of the image reduces the number of colors in the image, which would subsequently reduce the quality of the color shade determination.
  • Image coding preferably utilizes one of the color spaces commonly used in computer graphics, such as RGB (Red-Green-Blue), CMYK (Cyan-Magenta-Yellow-Black) or YCbCr. The lighting condition is defined by analyzing the calibration image.
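The relationship between the RGB and YCbCr color spaces mentioned above can be illustrated with a short sketch. This uses the ITU-R BT.601 full-range conversion formulas; that particular variant is an assumption, since the text does not specify which YCbCr definition is meant.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel (0-255 per channel) to YCbCr.

    Uses the ITU-R BT.601 full-range formulas; the patent does not
    name a particular YCbCr variant, so this is only illustrative.
    """
    y  =         0.299    * r + 0.587    * g + 0.114    * b  # luma
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5      * b  # blue-difference chroma
    cr = 128.0 + 0.5      * r - 0.418688 * g - 0.081312 * b  # red-difference chroma
    return y, cb, cr
```

For a neutral white pixel, the chroma components land exactly at the midpoint 128, which is a quick sanity check for the coefficients.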
  • One possible method of defining a lighting condition is to first process the calibration image using the K-means clustering algorithm, which is known in the art of image processing.
  • the calibration image or each cell of the calibration image may be simplified into a limited-size color code embeddings matrix using the K-means clustering algorithm.
  • the calibration image or each cell of the calibration image may be simplified into a 1x12 color code embeddings matrix.
  • the determined lighting condition (LI) may be stored as a label associated with the image.
  • the label comprises the 1x12 matrix.
  • the 1x12 matrix may be defined as follows: first 3 items of the 1x12 matrix may include the proportions of the 3 most often detected colors in the image.
  • the remainder of the 1x12 matrix may then comprise data indicative of the respective colors.
  • RGB coding may be used.
  • the next 3 items of the 1x12 matrix may be indicative of the RGB values of the first, most often detected color: first the R value of that color, then its G value and then its B value.
  • another 3 items of the 1x12 matrix may represent the RGB values of the second most often detected color.
  • the remaining 3 values of the 1x12 matrix may represent the RGB values of the third most often detected color.
  • the most often detected colors defined in the label may be arranged in any order; in other words, the colors do not have to be in order of proportions of appearance of the colors.
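The 1x12 embedding layout described in the bullets above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the plain NumPy K-means loop and its deterministic farthest-point initialization are assumptions, since the text only names the K-means algorithm.

```python
import numpy as np

def kmeans(pixels, k=3, iters=20):
    # Minimal Lloyd's K-means over an (N, 3) array of RGB pixels, with a
    # deterministic farthest-point init (the patent gives no init scheme).
    pixels = np.asarray(pixels, dtype=float)
    centers = [pixels[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[dists.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

def embed_1x12(pixels):
    # Layout per the text: [p1, p2, p3, R1, G1, B1, R2, G2, B2, R3, G3, B3],
    # where p1..p3 are the proportions of the 3 most often detected colors.
    centers, labels = kmeans(pixels, k=3)
    counts = np.bincount(labels, minlength=3)
    order = np.argsort(counts)[::-1]          # most common cluster first
    props = counts[order] / counts.sum()
    return np.concatenate([props, centers[order].ravel()])
```

On a synthetic image whose pixels fall into three well-separated colors, the first three fields give the color proportions and the remaining nine give the cluster-center RGB triplets.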
  • Various labels associated with the images may have a predefined format or free text form, and they may include but are not limited to a label identifying at least one of a venue name and a time of day or time of year.
  • the label may also be a combination of a free text defined by the user and one or more system provided labels having predefined form, which may or may not be shown to the user.
  • the calibration image metadata stored as labels associated with the image may include information on location where the calibration image was taken. This enables the user application to subsequently automatically select or propose to the user to select the appropriate calibration image for each subsequent training and/or production image on basis of the location.
  • Location data may be actual geographical location defined for example on basis of a GPS, GNSS, GLONASS or alike positioning system available in the user device, but location information may also be defined in any other way known in the art, for example on basis of available local area network names.
  • the server next receives a plurality of training images in the step 402 and processes them to generate a fully functional private training database. If the user selects using global training database, step 402 may be omitted.
  • the private training database preferably comprises processed training database based on a plurality of training images that are taken by the same user (Ul) and that have the same lighting conditions (LI).
  • the location may be suggested by the user application on basis of detected current location of the user and on basis of locations for which calibration images have been provided. Selection can be fully automatic, or the user may manually select and/or accept a suggestion made by the user application.
  • the global training database comprises a training database that is based on a plurality of training images acquired and uploaded by either the system provider or by anonymous users, and it comprises training data for a plurality of different lighting conditions. After processing the calibration image, the system is ready to receive a production image in step 403, representing a tooth of unknown color.
  • the production image is then analyzed in the step 404 to determine the color shade of the tooth.
  • Color shade analysis is performed using a process that will be disclosed in more detail in connection to the figure 6.
  • the result is communicated and/or provided back to the user in the step 405.
  • Communication may be performed for example by showing in the user interface of the user application the determined color shade of the tooth or, when a matrix (220) is used, the determined color shade of each cell in the matrix.
  • the application also enables the user to export tooth color information as an export file in any applicable computer coding format known in the art. For example, a PDF document may be exported.
  • Such exported file comprising tooth color information can subsequently be used in communication with a dental laboratory that manufactures the artificial tooth.
  • the exported file may be attached to an email, or the exported file can be automatically transferred to other computer systems over any type of data exchange capable interface known in the art of computer networks and/or mobile devices.
  • Exported and/or communicated tooth color information may further comprise color code embeddings of the analyzed production image.
  • Figure 5 illustrates an exemplary process of handling a training image when training of a training database is performed. This process may be referred to as a light calibration method. The same process may be applied to training both the private training database and the global training database, although the source of the received training image may be different.
  • the user can start training his own private database by acquiring a plurality of training images using his camera in his own lighting conditions (LI) corresponding to those of a previously acquired calibration image.
  • the user (Ul) preferably acquires images of teeth of different colors, for example model teeth with different known colors according to a standard tooth color shade system.
  • the user labels the training image with a known color shade according to the model tooth.
  • model teeth may have the colors A1, A2, A3, A3.5, A4, B1, B2, B3, B4, C1, C2, C3, C4, D2, D3, D4, and the respective training images are labeled accordingly. Only single-color teeth or standard-colored model teeth should be used for training. The acquired training image and the respective tooth color information are uploaded and stored in the private training database associated with the server.
  • the global training database is trained in a similar manner, but preferably with a plurality of training images taken for each of a plurality of different lighting conditions and known tooth colors.
  • an enhanced training target may be provided for a user to be used as a training image.
  • Such an enhanced training target, which may be referred to as a training sheet, comprises on a single sheet a variety of known model tooth color shade samples on a black background.
  • the variety of shade samples comprise the entire range of shades of a standard shade guide, such as the Vita Classic Shade Guide.
  • all shade samples disposed on the enhanced training sheet are rectangular. This simplifies the image processing task, as the wanted shade sample areas may be easily recognized and cropped for further image processing.
  • the acquired training image is preferably processed at the server, since a standard mobile phone camera is unlikely to have image processing capabilities to extract a plurality of selected areas from a single acquired image.
  • the server preferably has pre-stored information on order of the tooth shade samples on the training sheet so that each sample can be automatically labeled.
  • the mobile phone may also comprise image processing functionality which enables separately selecting (cropping) each of the shade samples shown in the acquired, single image of the training sheet and providing a plurality of cropped shade sample images for processing at the server.
  • the image processing functionality of the mobile phone may also be capable of labeling the cropped shade sample images.
  • the shade sample labels may be associated with each of the cropped shade sample images based on order of arrival at the server.
  • the training image is received at the server.
  • the received training image is labeled with information regarding the known standard color of the model tooth shown in the training image. If the training image is used for training private training database, the training image will also be tagged with information regarding the user and his initial lighting conditions.
  • the lighting information associated to this training image is obtained by the server.
  • lighting information preferably comprises the lighting label LI associated with a calibration image and the difference e that was defined on basis of the respective calibration image.
  • an unsupervised learning algorithm is used to analyze the training image.
  • the same unsupervised learning algorithm may be used for defining colors in all image types.
  • the K-means clustering algorithm known in the art may be used, which finds groups in the data; K refers to the number of groups.
  • the algorithm works iteratively to assign each data point of the image, in other words each pixel of the image, to one of K groups based on the features that are provided. Data points are clustered based on feature similarity.
  • the results of the K-means clustering algorithm are the centroids of the K clusters and a label assigning each data point to one of the clusters.
  • unsupervised learning is applied to convert a large matrix, for example a bitmap acquired with the camera, into a smaller matrix, which in our case includes the predominant color codes of the acquired image.
  • color code embeddings of the training image are first defined.
  • the defined training image color code embeddings are then adjusted by deducting the difference e defined on basis of the calibration image from the defined color code embeddings.
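The adjustment step above can be sketched in a couple of lines. Here the difference e is modeled simply as the elementwise difference between the user's calibration embeddings and reference embeddings of the same A1 shade; the exact form of e is not spelled out in this excerpt, so treat this as an assumption.

```python
import numpy as np

def calibration_difference(calib_emb, reference_emb):
    # e: how the user's lighting shifts the embeddings of the known A1 shade
    return np.asarray(calib_emb, dtype=float) - np.asarray(reference_emb, dtype=float)

def adjust_embeddings(emb, e):
    # Deduct the calibration difference to reduce the lighting effect
    return np.asarray(emb, dtype=float) - e
```

With identical lighting at calibration and capture time, e is the zero vector and the embeddings pass through unchanged.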
  • input to the K-means clustering algorithm is preferably a cropped tooth image that only comprises a single tooth or majority of a single tooth.
  • the uploaded image may have size of 500x500 pixels.
  • the user may determine in any other way an area that represents the one tooth, and the system may automatically crop the image before it is uploaded. For accurate tooth color shade determination, lips, gums and other teeth must be cropped away or left outside the selected area that is to be processed by the K-means clustering algorithm.
  • the area shown in the received image is preferably handled as a matrix of a plurality of cells.
  • the output of the K-means algorithm is color code embeddings for each of the plurality of cells. These embeddings are then adjusted by deducting the difference defined on basis of the corresponding calibration image.
  • After storing a plurality of adjusted color code embeddings for a plurality of training images representing sample teeth of known color in approximately the same lighting conditions LI, the training database is ready to be utilized by a supervised learning algorithm to determine the color shade of a tooth of unknown color in approximately the same lighting conditions.
  • a supervised learning algorithm is applied for determining color shade of a tooth.
  • This step can be used for testing the quality of the training data and adding new training images to the training data, as well as for determining the color shade of a tooth in a production image. Testing refers to obtaining images of a tooth of known color shade without indicating this color to the application. Thus, the application handles the image as if it were a normal production image, and the user may compare the result to the actual known color of the sample tooth.
  • Support Vector Machine (SVM) algorithm is used as the supervised learning algorithm.
  • the objective of the SVM algorithm is to find a hyperplane in an N-dimensional space (N being the number of features) that distinctly classifies the data points, namely the color code embeddings received from the K-means clustering algorithm.
  • SVM is found to be particularly useful for identifying the matching color code embeddings with the lowest distance to the training image color code embeddings in order to predict the shade of the teeth. For example, if the user has trained the model with one image of an A1 color model tooth, the comparison during the testing stage is done to that one training image only.
  • if the model has been trained with several images, the color code embeddings will be compared with the color code embeddings of all those A1 tooth color training images as well as with the color code embeddings of training images representing any other standard color model teeth, such as A2, A3 and so on.
  • the training process utilizes the combination of the unsupervised training algorithm in step 503 and the supervised training algorithm, and the analyzed new training image is included in the supervised training model in step 505. If the acquired image is a production image, it is not included in the training model, but the obtained color shade information is communicated back to the user.
  • the uploaded image does not have any associated information on the expected color of the tooth shown in it.
  • the color code embeddings are then adjusted by deducting the difference defined on basis of the calibration image from the defined color code embeddings.
  • The supervised training algorithm is then applied to the adjusted color code embeddings of the uploaded image, comparing them to all color code embeddings in the respective training database. As a result, the supervised training algorithm decides which tooth color shade has the closest distance to that of the uploaded image and provides this determined color shade as the result to the user.
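The decision rule just described, picking the training embedding with the closest distance, can be sketched as a plain nearest-neighbour search. Note this is a simplification: the text names an SVM, but the comparison it describes reduces to a lowest-distance match, which is what this sketch implements.

```python
import numpy as np

def predict_shade(query_emb, training_db):
    # training_db: iterable of (shade_label, adjusted embedding) pairs
    query = np.asarray(query_emb, dtype=float)
    best_label, best_dist = None, float("inf")
    for label, emb in training_db:
        dist = np.linalg.norm(query - np.asarray(emb, dtype=float))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

The returned label is the standard shade (A1, A2, ...) of the training image whose embeddings lie closest to those of the uploaded image.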
  • the acquired private training database may be merged with a global training database after a check of the private training database has been performed by administrators of the system provider. This avoids uploading erroneously tagged or otherwise erroneous images into the global training database.
  • the identity of the user is preferably removed from the training images. In the process of merging, the identity of the user is removed by not saving the identity information (Ul) of the record, as it has become unnecessary and could potentially be used for user identification afterwards.
  • the user can start acquiring and uploading actual production images, each representing a tooth of a patient for color shade determination using his own private training database.
  • the user can select use of a global training database.
  • the server application is configured to compare calibration information of the user and the private training database with the global training database. If the server application detects on basis of such comparison that the global training database includes enough images taken in lighting conditions LI, the application may propose to the user that he could use the global training database instead of the private training database.
  • A sufficient number of images may be for example at least 400 images, preferably at least 500 images.
  • Tooth color shade may also be an indication of an abnormality in the tooth.
  • Training images can also be taken of a real tooth with an abnormality, such as an abnormal color or shape of the tooth indicating for example that the tooth is dead, that it has caries or that it has some other tooth disease.
  • the user may label the image as representing a tooth with abnormality.
  • the label given by the user preferably names the type of abnormality.
  • the uploaded image is in this case cropped to represent the area with the abnormality.
  • the training image is then stored in the training database similarly to any other training image.
  • the same training database may provide means for determining tooth color shade and/or an indication of possible abnormality.
  • different training databases may be defined for different purposes/uses.
  • no global reference image is used for calibration.
  • the color code embeddings of the calibration image are used as such, and a difference is calculated between the color code embeddings of the calibration image and the training image.
  • the calibration image stored in the database is associated with information (LI+A1) that corresponds to the color code embeddings of the calibration image.
  • color code embeddings for the obtained training images are adjusted in the step 504 by deducting the color code embeddings of the calibration image as such from the color code embeddings of the training image.
  • the value "y" thus represents the distance of the color embeddings of an image of an A3 colored sample tooth from the calibration image. Similar calculations are performed for all training images representing different model tooth colors. This method may be referred to as "magnitude comparison"; in this variant of the method, the magnitude differences between A1 and the other shades are stored and used for the analysis.
  • this alternative embodiment is particularly useful when the global training database is used, since it does not restrict the selection of applicable training images to any specific lighting conditions. However, either of the embodiments may be used with both global and private training databases.
  • magnitude difference may be defined as a vector.
  • the magnitude difference vector is preferably of same form and size as the color code embeddings.
  • the supervised learning algorithm may then be applied in phase 505 to determine the shade of the tooth shown in the uploaded image by finding the color code embeddings in the training database that have the lowest distance to the obtained color code embeddings "x".
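The magnitude-comparison variant above can be sketched as follows: at training time the difference between each shade's embeddings and the calibration image's embeddings is stored, and at prediction time the production image's difference "x" is matched against the stored differences. The dictionary layout is an illustrative assumption.

```python
import numpy as np

def magnitude_difference(emb, calib_emb):
    # Difference vector of the same form and size as the embeddings
    return np.asarray(emb, dtype=float) - np.asarray(calib_emb, dtype=float)

def predict_by_magnitude(prod_emb, calib_emb, stored_diffs):
    # stored_diffs: {shade label: difference vector stored at training time}
    x = magnitude_difference(prod_emb, calib_emb)
    return min(stored_diffs,
               key=lambda shade: np.linalg.norm(x - stored_diffs[shade]))
```

Because only differences relative to the same calibration image are compared, the method does not restrict the training images to any one lighting condition.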
  • Figure 6 illustrates a process of defining tooth color shade.
  • an image is received that shows at least one tooth or teeth of a subject.
  • the color shade of the tooth is unknown.
  • in step 602, an applicable calibration image in the selected training database, private or global, is selected on the basis of at least one of user and lighting information.
  • K-means clustering similar to that explained in connection with phase 440 is applied to the received image to obtain the color code embeddings of the received image.
  • the color code embeddings are all associated with colors of a single tooth.
  • the same principle may be used for detecting symptoms of a dental disease based on color of tongue or gums, or extraordinary color of the tooth, which may be indicative of for example dental caries.
  • the same principle can be used for detecting the change of color in the tooth or teeth over time by detecting the color from the same user on a regular basis.
  • the color of the tooth of the user may change over time for various reasons, including but not limited to recurring user actions to whiten the teeth, or the impact of the user's diet on the color of the tooth.
  • a respective part of the image should be selected that represents the respective tooth of interest, the tongue or a part of the gums.
  • the obtained color code embeddings are adjusted to remove or reduce effects of lighting in the obtained color code embeddings.
  • the adjustment may be performed either by deducting the difference e from the obtained color code embeddings or by deducting color code embeddings of the calibration image from the obtained color code embeddings.
  • the result of the step 604 is adjusted color code embeddings.
  • the adjusted color code embeddings are compared to trained color code embeddings among the training images in the applicable training database.
  • the color code embeddings of the training image that have the lowest distance to the unknown image's color code embeddings are selected.
  • the tooth color associated with the selected training image with the lowest distance is deemed to represent the color of the tooth in the uploaded image.
  • the color code embeddings are preferably selected in an iterative manner: in a series of iteration steps, the most likely color code embeddings out of a subset of possible color code embeddings are suggested as the selected color code embeddings.
  • a 1x12 matrix may be used for the color code embeddings. Iteration may start with all possible colors, and the number of possible colors is reduced until just 16 possibilities for color code embeddings are left. Out of these 16 possibilities, the three most likely, in other words the three most commonly appearing, color code embedding options are selected. Naturally, instead of the 16 possibilities used in the example, any integer number of color code possibilities may be used.
  • the three most likely color code embeddings are included in the 1x12 color code embeddings matrix.
  • three fields of the matrix indicate the relative amounts of the three most common color code embeddings, and the remaining fields are reserved for indicating the color coding for these.
  • three fields of the matrix may be used to include the RGB values or YCbCr values of each of the three most common colors. If a four-color model such as CMYK is used for color coding, the color code matrix may for example be of size 1x15, to allow using four fields in the matrix for each of the three most common color code embeddings.
  • tooth color information indicative of the determined tooth color shade is communicated to the user.
  • the tooth color information may comprise the color code associated with the selected training image that corresponds to the most likely color shade of the tooth.
  • the determined tooth color information may also comprise the determined color code embeddings.
  • At least part of the tooth color information is exported from the system, preferably in a digital format, to another data processing system or application of the user.
  • this information can be made available for other systems or applications for further analysis.
  • at least part of the tooth color information is exported, it may be further processed and/or analyzed by the user or by a system or application of the user for any purpose.
  • tooth color information may be exported to another application and/or to archives of the user for later use.
  • At least part of the tooth color information may be exported into an application facilitating manufacturing of artificial tooth or teeth.
  • at least part of the tooth color information may be stored into another user application to be subsequently used as basis of tooth color shade comparison.
  • a tooth color shade comparison application may be used for example to detect changes in tooth color shade for example due to whitening or color changes due to diet.
  • the tooth color information may be determined to indicate that the tooth may, based on its color, have some abnormality. This is possible, if the training database comprises training images of teeth with abnormality. Indication of likelihood of an abnormality may also be provided to the user, so that he may for example examine the tooth in more detail.
  • the step 607 may be omitted, since the tooth color shade information can be made available to the user via the other application or system that receives the exported tooth color information.
  • The main intelligence of the system resides at the server, more precisely in the server application running on the server.
  • the user application running in the mobile device needs a data connection to the server.
  • the user application acts as a user interface, allowing the user to obtain and upload images as well as tag tooth colors in the training images, and to receive tooth color information.
  • every trained color code embedding from the SVM model is compared to the color code embeddings that correspond to the test images captured for testing, and the color code embedding with the lowest distance gives the resulting tooth color shade.
  • the resulting tooth color shade may be expressed by referring to a particular tooth color as defined in the used standard tooth color shade system. Testing ensures that the system works as intended and that the color shades are detected accurately. It is apparent to a person skilled in the art that as technology advances, the basic idea of the invention can be implemented in various ways. The invention and its embodiments are therefore not restricted to the above examples, but may vary within the scope of the claims.


Abstract

The present invention relates to a computer-implemented method, a data processing system and data processing apparatuses, as well as computer program products, for determining a shade of a tooth using a camera of a mobile communication device. On the basis of a received image of a tooth of a subject, comprising a part of an image acquired using an ordinary camera of the mobile communication device, lighting conditions are obtained and applicable training images are selected from a training database. The training images comprise images of model teeth with known colors. K-means clustering is applied to the received image to obtain color code embeddings of the received image, and the color code embeddings of the received image are adjusted on the basis of the indication of the lighting conditions. The obtained color code embeddings are compared to the color code embeddings of all the selected training images to find the training image whose color code embeddings have the lowest distance to the color code embeddings of the received image, and the shade of the tooth in the received image is determined to be equal to the shade of a model tooth shown in the training image with the lowest color code embedding distance to those of the received image. Tooth color information indicating the determined tooth shade is communicated back to the mobile communication device from which the image was received.
EP20717242.0A 2019-03-29 2020-03-27 Détermination de teinte de dent en fonction d'une image obtenue à l'aide d'un dispositif mobile Pending EP3948786A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201921012676 2019-03-29
PCT/FI2020/050203 WO2020201623A1 (fr) 2019-03-29 2020-03-27 Détermination de teinte de dent en fonction d'une image obtenue à l'aide d'un dispositif mobile

Publications (1)

Publication Number Publication Date
EP3948786A1 true EP3948786A1 (fr) 2022-02-09

Family

ID=70189985

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20717242.0A Pending EP3948786A1 (fr) 2019-03-29 2020-03-27 Détermination de teinte de dent en fonction d'une image obtenue à l'aide d'un dispositif mobile

Country Status (3)

Country Link
EP (1) EP3948786A1 (fr)
FI (1) FI130746B1 (fr)
WO (1) WO2020201623A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11373059B2 (en) * 2021-07-23 2022-06-28 MIME, Inc. Color image analysis for makeup color prediction model
WO2023072743A1 (fr) * 2021-10-28 2023-05-04 Unilever Ip Holdings B.V. Méthodes et appareils de détermination d'une valeur de couleur dentaire

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6190170B1 (en) 1998-05-05 2001-02-20 Dentech, Llc Automated tooth shade analysis and matching system
US7064830B2 (en) * 2003-06-12 2006-06-20 Eastman Kodak Company Dental color imaging system
US8848991B2 (en) * 2012-03-16 2014-09-30 Soek Gam Tjioe Dental shade matching device
EP3105558A1 (fr) * 2013-12-05 2016-12-21 Style Idea Factory Sociedad Limitada Dispositif à usage dentaire pour la distinction de la couleur des dents
CN105662624A (zh) 2016-03-31 2016-06-15 姚科 一种义齿比色的实现方法及装置
WO2018080413A2 (fr) 2016-10-31 2018-05-03 Cil Koray Système de détermination de couleur dentaire intégré à des téléphones mobiles et des tablettes

Also Published As

Publication number Publication date
WO2020201623A1 (fr) 2020-10-08
FI130746B1 (en) 2024-02-26
FI20195916A1 (en) 2020-09-30

Similar Documents

Publication Publication Date Title
US10115191B2 (en) Information processing apparatus, information processing system, information processing method, program, and recording medium
KR101140533B1 (ko) 이미지로부터 추정된 피부색에 기초해서 제품을 추천하는 컴퓨터 구현된 방법
US7751606B2 (en) Tooth locating within dental images
US8819015B2 (en) Object identification apparatus and method for identifying object
US7415165B2 (en) Red-eye detection device, red-eye detection method, and red-eye detection program
JP4549352B2 Image processing apparatus and method, and image processing program
US20150304525A1 (en) Color correction based on multiple images
US9270866B2 (en) Apparatus and method for automated self-training of white balance by electronic cameras
US20150186755A1 (en) Systems and Methods for Object Identification
FI130746B1 (en) Determination of tooth color based on an image taken with a mobile device
WO2013179581A1 (fr) Dispositif de mesure d'image, procédé de mesure d'image et système de mesure d'image
CN110189329B System and method for locating the color patch regions of a color chart
EP4058986A1 Method and device for identifying effect pigments in a target coating
CN113111806A Method and system for target recognition
CN113298753A Sensitive skin detection method, image processing method, apparatus and device
Montenegro et al. A comparative study of color spaces in skin-based face segmentation
Lee et al. A taxonomy of color constancy and invariance algorithms
US20230162354A1 (en) Artificial intelligence-based hyperspectrally resolved detection of anomalous cells
US20220172453A1 (en) Information processing system for determining inspection settings for object based on identification information thereof
Kibria et al. Smartphone-based point-of-care urinalysis assessment
CN114972547A Method for determining tooth color
KR102504318B1 Smart-edge-device-based growth analysis method, and apparatus and system therefor
CN113406042A Method, analysis device and in vitro diagnostic system for determining characteristics of a sample container in an in vitro diagnostic system
JPH07262379A Article identification system
Phillips et al. Hyperspectral Imaging honey dataset

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210921

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: LUMI DENTAL LTD