EP4580478A1 - Method for acquiring a set of images of an object in the mouth - Google Patents
Method for acquiring a set of images of an object in the mouth
- Publication number
- EP4580478A1 (application EP23762446.5A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- acquisition
- user
- symbol
- acquisition device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C9/00—Impression cups, i.e. impression trays; Impression methods
- A61C9/004—Means or methods for taking digitized impressions
- A61C9/0046—Data acquisition means or methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/24—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the mouth, i.e. stomatoscopes, e.g. with tongue depressors; Instruments for opening or keeping open the mouth
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient; User input means
- A61B5/742—Details of notification to user or communication with user or patient; User input means using visual displays
- A61B5/743—Displaying an image simultaneously with additional graphical information, e.g. symbols, charts, function plots
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
- A61B2034/104—Modelling the effect of the tool, e.g. the effect of an implanted prosthesis or for predicting the effect of ablation or burring
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/107—Visualisation of planned trajectories or target regions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/108—Computer aided selection or customisation of medical implants or cutting guides
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30036—Dental; Teeth
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
Definitions
- the present invention relates to a method for acquiring a set of images of an oral object, and in particular a dental object, in particular a dental arch of a user.
- the invention also relates to a device for implementing such a method.
- An image “covers” a target when it at least partially represents that target.
- the set of images “covers” the target when it contains images covering the target in different observation directions, in particular to have precise three-dimensional information on the target.
- the shape of a multidimensional symbol in the space of the real scene observed by the acquisition device makes it possible, when the user looks at the screen, to inform the user and guide him towards one or more acquisition conditions associated with said symbol and suited to the acquisition of a desired image.
- a symbol presented in augmented reality thus provides particularly effective guidance information.
- at least one symbol defines a symbol axis, preferably an axis of revolution, an acquisition condition being an angular difference between the optical axis of the acquisition device and said symbol axis of less than 20°, preferably less than 10°, preferably less than 5°, preferably substantially zero, that is to say a substantially perfect alignment of the optical axis of the acquisition device with said symbol axis, and/or
- the method aims to quickly cover the target with a coverage rate greater than or equal to a coverage threshold, and the method comprises the following steps: a) acquisition of at least one image, preferably by the user, by means of an image acquisition device, preferably a mobile telephone; b) updating of the coverage level of the target based on said at least one image acquired in step a); c) if the coverage level is below the coverage threshold, determination and presentation of guidance information, modification, by the user, of the position and/or orientation of the acquisition device depending on the guidance information, and resumption of step a), preferably in real time, the acquired images preferably being extracted from a film that the user views on a screen of the acquisition device.
- the determination of guidance information as a function of a coverage rate updated in real time advantageously makes it possible to facilitate the acquisition of images of the mouth of a user, and in particular the acquisition of dental images.
- the user receives guidance information in real time, which makes the acquisition more efficient, in particular when the guidance information is chosen to guide towards optimal acquisition conditions allowing the acquisition of an additional image maximizing the increase in the coverage level.
- Images can advantageously be acquired under precise acquisition conditions, without special training. In particular, they can be acquired by the user himself or by one of his relatives. In particular, the process facilitates the acquisition of images of a child's arches by one of his parents.
- the presentation of information on the level of coverage is also particularly advantageous because it effectively discourages the user from interrupting the acquisition before it is complete. It makes acquisition particularly pleasant, with the user knowing at all times the path still to be taken before having acquired all of the images. The acquisition can even be fun.
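- A minimal sketch, in Python, of the cycle of steps a) to c) just described; all helper names (acquire_image, update_coverage, compute_guidance, present_guidance) are hypothetical placeholders, not functions named by the patent:

```python
# Minimal sketch of the cycle of steps a) to c). All helpers are hypothetical.

def acquisition_loop(coverage_threshold: float = 0.95):
    coverage_level = 0.0    # zero before the first image is acquired
    covered = set()         # accumulated contribution (e.g. tooth numbers already covered)
    while coverage_level < coverage_threshold:
        image = acquire_image()                                    # step a): e.g. a frame filmed by the phone
        coverage_level, covered = update_coverage(image, covered)  # step b): update the coverage level
        if coverage_level < coverage_threshold:                    # step c): guide towards the next image
            guidance = compute_guidance(covered)
            present_guidance(guidance)   # e.g. augmented-reality symbols on the preview image
    return covered
```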
- a method according to the second main aspect may also include, in particular, one or more of the following optional characteristics:
- a representation of the oral object when the oral object includes parts of the user's mouth other than the target; the guidance information and/or the information on the coverage level and/or on the difference between the coverage threshold and the coverage level is/are presented in augmented reality in a preview image displayed on a display screen of the acquisition device and representing the real scene observed by the acquisition device, and/or in an image equivalent to the preview image representing a theoretical scene depicting, in a symbolic or realistic manner, all or part of the elements of said real scene, in the same arrangement as in said real scene;
- the equivalent image comprises, preferably is, a view of a model of at least part of said real scene, a model being a digital three-dimensional model;
- the target comprises a set of teeth and/or soft tissues
- the preview image or equivalent image is updated in real time, preferably on the screen of the user's mobile phone, the covered teeth being marked specifically, by a symbol or by application of an appearance allowing them to be distinguished from uncovered teeth, a tooth being “covered” when its initial surface to be covered is covered;
- a plurality of symbols are anchored, in augmented reality, in the real scene observed by the acquisition device, so as to appear on the preview image or on an equivalent image, each symbol being able, for example, to be anchored on a respective tooth of the user or anchored so as to indicate to the user acquisition conditions suitable for the acquisition of the additional image;
- the symbols are three-dimensional symbols according to the first main aspect of the invention.
- the user aims, with the acquisition device, at a said symbol, preferably two-dimensional or three-dimensional, and when the aimed-at symbol is reached, preferably when it is at least partly superimposed on a fixed sighting mark (reticle) displayed on the screen of the acquisition device, at least one image is acquired, preferably automatically, that is to say without specific intervention from the user;
- the symbols are anchored, in augmented reality, on teeth that may or may not be adjacent, on the preview image or on an equivalent image; the symbols can for example be anchored regularly along the dental arch, for example every 2 or 3 teeth;
- the symbols are anchored and/or shaped so as to define, optionally in cooperation with a said target, predetermined acquisition conditions, preferably a distance of the acquisition device relative to the target and/or an orientation of the acquisition device around its optical axis and/or an angulation of said optical axis relative to the target;
- in step b), said at least one image acquired in step a) is analyzed in order to identify a representation of the target on said image, preferably by means of a neural network, then the corresponding area is marked in the preview image, preferably by coloring it with a color associated with said target;
- in step a), before or after the acquisition of the image, the quality of the acquired or previewed image (that is to say, respectively, the image displayed on the screen of the acquisition device) is evaluated, preferably at least the sharpness and/or the contrast and/or the color balance of the image, and/or the distance of the acquisition device relative to the target and/or the orientation of the acquisition device around its optical axis and/or the angulation of said optical axis relative to the target, and the image is acquired and/or the coverage rate is updated only if the quality exceeds a predetermined quality threshold;
- said representation of the target is projected onto a model of the target or of the oral object, or "reference model", along a projection direction oriented, relative to said model, like the optical axis relative to the target, said representation of the target being virtually (that is to say theoretically) positioned relative to the reference model as the acquisition device is positioned relative to the target, then
- the coverage rate is determined as a function of the surface of the reference model covered by said projection and by said projections carried out during possible previous step(s) b);
- each acquired image is submitted to a first neural network trained to detect the representation of the target, or remarkable points of said target, on the image, and to a second neural network trained to recognize, from the representation of the target or said remarkable points on the image, a corresponding area of a reference image, then
- a mobile phone with a screen, preferably a mobile phone of the user, or
- a support equipped with a camera and held against the user during the acquisition of the set of images, preferably allowing opening and closing of the mouth, preferably partially introduced into the mouth of the user, preferably resting on the gums and/or teeth, and
- in step a), the user views, in real time, on a screen of said mobile phone, the real scene observed by the mobile phone or a corresponding theoretical scene, an image preferably being acquired automatically when the mobile phone observes the target under predetermined acquisition conditions, preferably under acquisition conditions determined at step c) of the preceding cycle of steps a) to c);
- in step c), the user is presented, preferably on the screen of the acquisition device, with a counter or gauge, preferably in the form of a progress bar, providing information on the coverage level and/or the difference between the coverage threshold and the coverage level, and/or
- the user is presented, preferably on the screen of the acquisition device, with a score calculated according to the duration to reach the coverage threshold and/or the quality of the acquired images, and/or the usefulness of the acquired images, and/or the user is presented, preferably on the screen of the acquisition device, with a classification determined as a function of said score.
- the guidance information comprises a set of symbols, positioned, in augmented reality, according to the respective images to be acquired.
- the set of images to be acquired includes one image for each symbol.
- the user must aim at the symbols with their mobile phone and reach them, as in a video game.
- each symbol can be anchored, in augmented reality, on a respective tooth, the target being constituted by said teeth and/or soft tissues.
- the symbols can be anchored according to desired acquisition conditions, for example on non-adjacent teeth, for example every two or three teeth.
- a sighting mark, or reticle, can be represented on the screen of the user's mobile phone.
- when the reticle is superimposed on the symbol, the latter is reached: an image is then acquired, preferably automatically, and the symbol is marked, or disappears.
- the marking or disappearance of symbols provides information on the coverage rate. It also provides guidance information, the user being easily able to spot unreached symbols on the preview image displayed on the mobile phone screen. The user can position the mobile phone accordingly.
- the image set covers all targeted teeth and/or soft tissues.
- the coverage level can for example be the ratio of the number of symbols reached to the initial number of symbols, that is to say before the start of acquisition.
- the reticle and a symbol can have compatible dimensions, so that, when the user exactly superimposes the reticle and the symbol, the acquisition device and the target are at said distance.
- the computer program is executed by the image acquisition device; the computer program can be integrated into specialized software, in particular specialized software for a mobile phone or tablet.
- by "model", we mean a digital three-dimensional model.
- a model is made up of a set of voxels.
- Figure 9 schematically represents the steps of a cycle of a process according to the first main aspect of the invention.
- Device 1 includes
- the computer 14 may be separate from the acquisition device or, preferably, be integrated into the acquisition device.
- the computer 14 may also include digital communication means allowing the exchange of data 20, in particular with the image acquisition device 10, or even with a database 22.
- the database 22 can also be integrated, partially or totally, in the acquisition device or in the computer. It may in particular contain the acquired images, the reference model or the reference image, the definition of the target and the oral object, or even a final model generated from the acquired images. It can also contain the information relating to the predetermined acquisition conditions associated with each symbol.
- the image acquisition device 10 is a mobile phone or a tablet.
- the screen 12 of the acquisition device is configured so as to present the guidance information, and preferably information on the coverage level and/or on the difference between the coverage threshold and the coverage level. Alternatively, or in addition, this information can be presented on the screen 18 of the computer.
- the image acquisition device can also be a mirror equipped with a camera.
- Device 1 is used for implementing a method according to the invention:
- the target has a predetermined “initial surface to cover”, that is to say a surface that the implementation of the method aims to cover.
- the “covered surface” is the part of the initial surface to be covered which, at a moment during the process, has already been covered, that is to say represented on at least one acquired image.
- the “area still to be covered” is the part of the initial surface to be covered which, at any time during the process, has not been represented on any acquired image.
- the oral object can also be an orthodontic appliance, for example a multi-attachment appliance, vestibular or lingual, an orthodontic splint, preferably invisible, an auxiliary, for example a cleat, a button or a screw, or a functional education appliance, for example to modify the positioning of the tongue or to treat sleep apnea.
- the target is identified before the process is implemented and the computer is informed. For example, we inform the computer that it is necessary to acquire a set of images covering teeth 10 to 14, or we provide it with an image or a model of a dental arch on which the representation of the target has been identified.
- the image or model used to identify the target can be generic, that is to say usable by several users. It can be selected from a database, the database being accessible via digital communication means.
- a generic model may be a typodont.
- the generic model or the generic image are chosen so that they represent a target having a shape close to the user's target, which improves the precision of the method. If the target belongs to a dental arch, the model can be generated by the implementation of a method arranging tooth models, for example as described in European application No. 18 184486.
- the model or image used to identify the target is preferably a model or image representing the user's target, acquired prior to implementation of the method.
- the set of images is considered sufficient to cover a specific target when the level of coverage by the acquired images reaches a coverage threshold.
- the coverage threshold thus defines, directly or indirectly, a percentage of the initial surface to be covered which is considered sufficient for the acquisition to be terminated, that is to say so that we can consider that the acquisition is complete.
- a coverage threshold of 100% requires, for example, that the entire surface of the target is represented on at least one acquired image.
- the coverage threshold can be determined so that the set of images is sufficient to view the target at predetermined angles, in particular at any angle.
- the coverage threshold is preferably predetermined, before the first step a).
- the coverage threshold can be greater than 50%, 70%, 80%, 90%, 95% of the target area. Preferably, the coverage threshold is greater than 95%.
- the coverage level is a measure of the progress of the acquisition relative to the coverage threshold. Before the acquisition of a first image, the coverage level is therefore zero. The level of progress gradually increases throughout the cycles of steps a) to c).
- the coverage level can also be, for example, the ratio of the surface covered to the surface still to be covered, constituted by all the areas of the target which are not represented on any acquired image. If 30% of the initial surface to be covered is covered, the coverage level is thus 30%/70%, i.e. about 43%.
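- As a sketch of these two coverage-level conventions (fraction of the initial surface, and ratio of covered to still-to-cover surface), assuming surface areas expressed in any consistent unit:

```python
# Two coverage-level conventions mentioned above; areas in any consistent unit.

def coverage_fraction(covered_area: float, initial_area: float) -> float:
    """Covered surface as a fraction of the initial surface to be covered."""
    return covered_area / initial_area

def coverage_covered_to_remaining(covered_area: float, initial_area: float) -> float:
    """Ratio of the surface covered to the surface still to be covered."""
    return covered_area / (initial_area - covered_area)

print(coverage_fraction(30.0, 100.0))              # 0.30
print(coverage_covered_to_remaining(30.0, 100.0))  # 0.30/0.70 ≈ 0.4286
```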
- An objective of the method is to guide the user during the acquisition of images so that the set of acquired images includes as few images as possible, that is to say so that the acquisition is efficient, but enough images for the coverage threshold to be reached.
- a multidimensional symbol, preferably each multidimensional symbol, comprises, when the optical axis of the acquisition device coincides with said main direction, a dimension which, on the representation of the symbol on the screen, is variable as a function of the position of the acquisition device along the optical axis, that is to say as a function of the distance between the acquisition device and said symbol.
- Said dimension can be evaluated by observation of the screen by the user, and the user knows a value of said dimension defining the position of the acquisition device at the predetermined distance associated with said symbol.
- the multidimensional symbol has the shape of a superposition of rings, preferably of different diameters,
- the predetermined orientation being obtained when the centers of the rings are aligned along the optical axis of the acquisition device, that is to say that said centers appear on the screen as merged, and/or
- the predetermined distance being obtained when the centers of the rings are aligned along the optical axis and the spacing between said rings, as they appear on the screen, preferably on the representation of the symbol in the preview image or the equivalent image, has a predetermined value, for example when a first ring appears adjacent to a second ring, that is to say when the interior contour of the first ring is in contact with the exterior contour of the second ring.
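- A possible reading of this ring cue, sketched under a pinhole-camera assumption with the rings placed at different depths along the symbol axis (the geometry values are illustrative, not taken from the patent): two rings of different diameters appear adjacent on screen at exactly one camera-to-symbol distance.

```python
# Sketch of the distance at which two superimposed rings appear adjacent on
# screen, under a pinhole model. The inner contour of the front ring and the
# outer contour of the rear ring have the same apparent radius when
#     (d1_inner / 2) / (d + z1) == (d2_outer / 2) / (d + z2).
# All dimensions are illustrative assumptions.

def adjacency_distance(d1_inner: float, z1: float, d2_outer: float, z2: float) -> float:
    """Camera-to-anchor distance d solving the apparent-size equality above."""
    return (d2_outer * z1 - d1_inner * z2) / (d1_inner - d2_outer)

# Front ring: inner diameter 10 mm at the anchor (z1 = 0).
# Rear ring: outer diameter 20 mm, 50 mm behind the anchor (z2 = 50).
print(adjacency_distance(10.0, 0.0, 20.0, 50.0))  # 50.0 mm
```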
- in step 2), an image is acquired when the predetermined acquisition condition(s) associated with a symbol is/are met.
- the acquisition can in particular be carried out following step a) as described below.
- the image may be of the type described below for step a).
- the image is then analyzed to determine the acquisition conditions.
- each acquired image is submitted to a first neural network trained to detect the representation of the target, and/or remarkable points of said target, on the image, and to a second neural network trained to recognize, from the representation of the target or of said remarkable points on the image, said acquisition conditions, and in particular the angulation and/or the distance of the acquisition device relative to the target.
- the first neural network can be chosen in particular from the Object Detection Networks, and in particular from the neural networks listed below, in the passage relating to step b2). For example, the neural network is trained by presenting it, for example for more than 1000 historical images: as input, a historical image representing a historical target and/or the remarkable points, and as output, the representation of said historical target and/or of the remarkable points on the historical image.
- the neural network thus learns to recognize, in a new image, the representation of the target and/or remarkable points.
- the second neural network can be chosen in particular from networks specialized in image classification, called "CNN" ("Convolutional Neural Network"), for example AlexNet (2012), ZF Net (2013), VGG Net (2014), GoogLeNet (2015), Microsoft ResNet (2015), Caffe: BAIR Reference CaffeNet, BAIR AlexNet, Torch: VGG_CNN_S, VGG_CNN_M, VGG_CNN_M_2048, VGG_CNN_M_1024, VGG_CNN_M_128, VGG_CNN_F, VGG ILSVRC-2014 16-layer, VGG ILSVRC-2014 19-layer, Network-in-Network (ImageNet & CIFAR-10), Google: Inception (V3, V4).
- the neural network thus learns to define, for a new image, the conditions for its acquisition.
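- A hedged PyTorch sketch of such a two-network pipeline; the architectures, the class count and the (angles, distance) output parameterization are illustrative assumptions, not values fixed by the patent:

```python
import torch.nn as nn
import torchvision

# First network: an off-the-shelf object detector (Faster R-CNN, one of the
# detector families cited elsewhere in the text) locating the target or its
# remarkable points. 33 classes = 32 teeth + background is an assumed scheme.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=33)

# Second network: a CNN regressing the acquisition conditions from the image
# (or from a crop around the detected target): three angles + one distance.
class AcquisitionConditionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = torchvision.models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 4)

    def forward(self, image):
        return self.backbone(image)  # (yaw, pitch, roll, distance), assumed units

# Training, as described above: for each of >1000 historical images, input the
# image and supervise with the acquisition conditions recorded for it.
```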
- the determination of the conditions for acquiring an image can also be carried out by searching for a view of a model of the user's arch which corresponds to the image, for example with an optimization operation, preferably a metaheuristic method, preferably evolutionary, preferably simulated annealing.
- An example of such a search is for example described in PCT/EP2015/074859, in European patent application No. 18 184477.0 or in WO2016/066651.
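- A minimal simulated-annealing sketch of such a view search, assuming hypothetical helpers render_view(model, pose) and image_distance(a, b); the pose parameterization (three angles plus a distance) and the cooling schedule are illustrative choices, not the method of the cited applications:

```python
import math
import random

def anneal_pose(image, arch_model, steps=2000, t0=1.0, t_end=1e-3):
    """Search the virtual camera pose whose rendered view best matches the image."""
    pose = [0.0, 0.0, 0.0, 300.0]   # yaw, pitch, roll (rad), distance (mm): assumed start
    cost = image_distance(image, render_view(arch_model, pose))
    for k in range(steps):
        t = t0 * (t_end / t0) ** (k / steps)       # geometric cooling schedule
        cand = [pose[0] + random.gauss(0, 0.05),
                pose[1] + random.gauss(0, 0.05),
                pose[2] + random.gauss(0, 0.05),
                pose[3] + random.gauss(0, 5.0)]
        c = image_distance(image, render_view(arch_model, cand))
        if c < cost or random.random() < math.exp((cost - c) / t):  # Metropolis acceptance
            pose, cost = cand, c
    return pose, cost
```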
- an image is acquired, preferably automatically, that is to say without specific user intervention.
- a symbol changes appearance, for example color, or disappears when an image has been acquired under the acquisition conditions associated with said symbol.
- the process according to the second main aspect of the invention comprises several cycles of steps a) to c).
- in step a), an image, preferably a photo, representing the user's oral object is acquired by means of an image acquisition device.
- a film is acquired with the image acquisition apparatus, and the acquired image is extracted from the film.
- an "original mask", preferably a cloud of dots, is projected onto the scene observed by the acquisition device during step a), preferably by means of a projector integrated into the acquisition device.
- the distorted mask resulting from the projection of the original mask then appears on the preview image or the equivalent image.
- the projection is in infrared light so that the distorted mask is not visible to the naked eye.
- the acquired image is the image representing the distorted mask.
- the acquisition device then preferably uses an infrared camera. The nature of the distorted mask is not limited, however.
- the image is preferably acquired by the user himself. The user can acquire the image using a mobile phone.
- the acquired image is preferably “extraoral”, that is to say without the optical lens of the acquisition device being introduced into the user’s mouth.
- the image acquisition device may in particular be a mobile phone, a tablet, a camera or a computer, the image acquisition device preferably being a mobile phone or a tablet, in particular so that the user can acquire images anywhere, and in particular outside the office of a dental care professional, for example more than 1 km from the office of a dental care professional.
- the user uses a mobile phone and a support on which the mobile phone is removably fixed, the support being held against the user during the acquisition of at least some, preferably all, of the images.
- the support may be of the type described in PCT/EP2021/068702, EP17306361, PCT/EP2019/079565 or PCT/EP2022/053847.
- the user uses a free mobile telephone, that is to say one whose position and orientation the user can freely set, and in particular one not fixed to a support.
- the method according to the invention makes it possible to guide the user in taking pictures, so that guidance by means of a support is not essential.
- the image acquisition device is not in contact with the user's mouth, either directly or via a support for the image acquisition device.
- the computer or the acquisition device can ask the user to put in the service position or, on the contrary, to remove an orthodontic appliance, for example an orthodontic splint, a cleat or an appliance with an archwire and ligatures. It can also ask the user to separate the lips from the dental arches, preferably using a retractor, so as to better expose the target to the image acquisition device, for example to fully expose at least one tooth, in particular the upper surface of an incisor and/or at least partially the upper surface of a molar. It can also ask the user to open their mouth wide in order to acquire images in occlusal views representing lingual and occlusal surfaces of the teeth as well as the palate.
- the number of images acquired during a step a) is preferably less than 100, preferably less than 50, preferably less than 10, so that the guidance information is updated quickly.
- in step b), the coverage level is updated to take into account the image(s) acquired in the immediately preceding step a), or "new images".
- in step b), the computer therefore analyzes each new image, preferably following the following steps: b1) determination of the potential contribution of the new image; b2) determination of the intersection between the potential contribution and the area not yet covered by the previous contribution made by the previously analyzed images; b3) if the intersection is not empty, addition of said intersection to the previous contribution.
- in step b1), the computer determines the potential contribution of the new image. In particular, it determines whether the new image at least partially represents the target. If not, this new image cannot make a contribution and the computer moves on to analyzing the next new image. If yes, the computer determines the potential contribution of the new image, for example determines the contour of the representation of the target on the new image, or the number of the tooth or teeth of the target represented, at least partially, on the new image.
- in step b2), the computer then compares the potential contribution to the union of all the contributions resulting from the analysis of the previously analyzed images, or "previous contribution".
- the potential contribution of the new image is the representation of the target on the new image.
- the computer evaluates the intersection of this potential contribution with the part of the target not belonging to the previous contribution, the previous contribution consisting of the union of all the representations of the target on the previously analyzed images. If this intersection is empty, the new image cannot make a new contribution and the computer moves on to analyzing the next new image. If this intersection is not empty, that is to say if the new image represents an area of the target which did not appear on any of the previously analyzed images, the computer adds to the previous contribution a new contribution consisting of said intersection.
- the potential contribution of the new image is the number of one or more teeth of the target identified on the new image.
- the computer evaluates the intersection of this potential contribution with the set of the target's tooth numbers not yet identified on the previously analyzed images. If this intersection is empty, that is to say if all the teeth visible on the new image have already been identified, the new image cannot make any additional contribution and the computer moves on to analyzing the next new image. If this intersection is not empty, the computer adds this intersection to the previous contribution.
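- A sketch of this tooth-number variant of steps b1) to b3), with detect_teeth() as a hypothetical stand-in for the detection neural network described below:

```python
# Coverage update for the tooth-number variant of steps b1) to b3).
# detect_teeth() is a hypothetical stand-in for the detection neural network.

def update_coverage(new_image, covered: set, target_teeth: set):
    potential = set(detect_teeth(new_image)) & target_teeth  # b1) target teeth on the new image
    new_contribution = potential - covered                   # b2) numbers not yet covered
    if new_contribution:                                     # b3) add only if non-empty
        covered |= new_contribution
    coverage_level = len(covered) / len(target_teeth)        # e.g. fraction of target teeth covered
    return coverage_level, covered
```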
- the analysis includes the segmentation of the new image so as to identify any total or partial representation of the target.
- the analysis can be done using classic segmentation methods.
- the new image can be submitted to a neural network trained to detect the representation of the target on the new image, for example to determine the numbers of the teeth represented on the image, and/or the contours of said teeth, and/or the mouth and/or the lips and/or the tongue, as described for example in European patent application No. 18 184477.0.
- the neural network can be chosen in particular from Object Detection Networks, for example R-CNN (2013), SSD (Single Shot MultiBox Detector, 2015), Faster R-CNN (Faster Region-based Convolutional Network method, 2015), RCF (Richer Convolutional Features for Edge Detection, 2017), SPP-Net (2014), OverFeat (Sermanet et al., 2013), GoogLeNet (Szegedy et al.).
- for example, a neural network is trained by presenting it, for example for more than 1000 historical images: as input, a historical image representing a historical target, and as output, the representation of said historical target on the historical image.
- the neural network thus learns to recognize, in a new image, the representation of the target.
- these representations are preferably projected onto a common reference model in order to take into account different acquisition conditions, and in particular an orientation of the acquisition device that varies depending on the image considered.
- the reference model preferably represents a reference oral object, preferably similar or even identical to the user's oral object.
- the computer analyzes the image to determine the conditions of its acquisition, that is to say the actual acquisition conditions. In particular, it evaluates the distance between the acquisition device and the user's oral object and the orientation of the acquisition device in space, relative to the user's oral object, at the time of image acquisition.
- the determination of the actual acquisition conditions can be carried out as described for example in European patent application No. 18 184477.0 or in WO2016/066651, or by submitting the image to a neural network trained to determine the acquisition conditions of the image submitted to it.
- the actual acquisition conditions are then virtually reproduced relative to the reference model, and the representation of the user's target on the image is projected onto the reference model.
- the set of projected surfaces obtained from previous images can constitute the previous contribution.
- the projected area obtained from the new image is the potential contribution.
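- A NumPy sketch of this projection-based coverage measure, simplified by ignoring self-occlusion and working per vertex of the reference model; the pinhole intrinsics K, the reproduced pose (R, t) and the binary target mask are assumed inputs:

```python
import numpy as np

def add_projected_contribution(vertices, normals, K, R, t, target_mask, covered):
    """Mark reference-model vertices whose projection falls inside the detected
    target representation, and return the updated coverage rate."""
    cam = vertices @ R.T + t                 # model -> camera coordinates
    facing = (normals @ R.T)[:, 2] < 0       # normal towards the camera (assumed convention)
    pix = cam @ K.T
    uv = (pix[:, :2] / pix[:, 2:3]).round().astype(int)
    h, w = target_mask.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    valid = np.where(inside & facing & (cam[:, 2] > 0))[0]
    hit = np.zeros(len(vertices), dtype=bool)
    hit[valid] = target_mask[uv[valid, 1], uv[valid, 0]]
    covered |= hit                           # union with projections of previous images
    return covered.mean()                    # coverage rate over the model surface
```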
- each image is subjected to a first detection neural network, identifying oral objects represented on the image, in particular the tongue and/or tooth numbers and/or the gums and/or the mouth and/or the lips and/or remarkable points of these organs; then each image is submitted to a second neural network trained to determine the acquisition conditions of the image submitted to it.
- the second neural network takes as input the oral objects detected by the first neural network, which improves the determination of the acquisition conditions.
- the potential contribution of a new image is determined by comparing the image to a "reference" image, for example a photo or a panoramic shot, preferably of at least one dental arch similar or identical to the user's dental arch.
- the target is then generally not represented in the same way on the reference image and on the new image.
- a neural network is trained so that it learns to establish a concordance between the objects represented on the two images.
- the neural network thus learns to recognize, in a reference image, the representation of the target which corresponds to the representation of the target in a new image submitted to it.
- the neural network thus learns to identify the potential contribution of the new image.
- the method can be implemented several times, each time with a new reference image.
- in step b3), the computer adds the new contribution, that is to say said intersection, to the previous contribution if the intersection is not empty, and calculates the coverage level resulting from the increment of the previous contribution by the new contribution.
- the computer evaluates the quality of the image or images acquired and only adds the potential contribution in step b3) if the quality is greater than a predefined quality threshold.
- the quality may in particular be an evaluation of the sharpness and/or the contrast, and/or the color balance of the image, and/or the distance between the acquisition device and the user's mouth.
- thus, only images which present satisfactory quality are taken into account.
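- A minimal OpenCV sketch of such a quality gate, using the variance of the Laplacian for sharpness and the grey-level standard deviation for contrast; the thresholds are illustrative assumptions, not values from the patent:

```python
import cv2

def image_quality_ok(image_bgr, sharpness_min=100.0, contrast_min=30.0) -> bool:
    """Return True when the image passes the (assumed) quality thresholds."""
    grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(grey, cv2.CV_64F).var()  # low variance = blurry image
    contrast = float(grey.std())                       # low deviation = flat image
    return sharpness >= sharpness_min and contrast >= contrast_min
```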
- the image acquisition device, preferably in the form of a mobile telephone, is fixed on a support which is kept in contact with the user during acquisition, as described previously, for example a support of the type described in PCT/EP2021/068702, EP17306361 or PCT/EP2019/079565.
- the quality of the images (in particular the brightness, the distance of the acquisition device relative to the target, the angulation of the acquisition device relative to the target and the orientation of the acquisition device around its optical axis) is advantageously well controlled so that the evaluation of the quality of the image or images acquired is optional.
- the support is not bitten by the user, and, more preferably, allows opening and closing of the arches (up to a position in which the arches are in contact with each other in the occlusal plane).
- the computer can decide to take several images, for example varying the focal length in order to acquire a first sharp image of the incisor group, then a second sharp image of the posterior group of teeth.
- the acquisition of several images with different calibration conditions can be based on the quality assessment, but can also be programmed to be systematic at each acquisition step.
- in step c), the computer compares the coverage level to the coverage threshold. If the coverage level is greater than or equal to the coverage threshold, step c) is completed. Otherwise, the computer determines guidance information to guide the user so that he positions the image acquisition device towards "future" acquisition conditions suitable for the acquisition, during the following step a), of an additional image increasing said coverage level.
- the guidance information is thus determined to inform the user about the areas of the target for which he must still acquire one or more images. It is presented to the user and thus guides him to orient and/or position the image acquisition device according to the future acquisition conditions to be adopted for the next step a).
- the computer determines, for example by a random search or with an optimization algorithm, the future acquisition conditions for the next cycle, so that the acquisition of images under these future acquisition conditions maximizes the increase in the coverage level.
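- A sketch of such a random search over candidate future acquisition conditions, with predict_covered(pose) as a hypothetical helper returning the set of surface elements an image taken under those conditions would cover:

```python
import random

def best_future_conditions(covered: set, candidate_poses, n_samples: int = 50):
    """Pick the candidate acquisition conditions maximizing the coverage gain."""
    sample = random.sample(list(candidate_poses), min(n_samples, len(candidate_poses)))
    best_pose, best_gain = None, -1
    for pose in sample:
        gain = len(predict_covered(pose) - covered)  # surface added by this pose
        if gain > best_gain:
            best_pose, best_gain = pose, gain
    return best_pose
```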
- a tactile transmission of the guidance information can be a vibration indicating, for example, that the user should stop a movement.
- a haptic transmission of the guidance information can be a vibration, for example to indicate the successful passage of a reticle over a surface of the target or over a symbol.
- the presentation of the guidance information can be adapted to the user.
- the presentation of the guidance information includes several different types of transmissions stimulating several different senses, thus facilitating communication to the user.
- the guidance information is presented on a screen, preferably on a screen of the image acquisition device.
- the representation of the guidance information on a screen may include
- preferably a frame of reference allowing the user to find their way when moving the acquisition device in space, that is to say to evaluate how the acquisition device is arranged in relation to the oral object, and in particular in relation to the target, and
- the reference frame preferably represents, at least partially, in a symbolic or realistic manner (that is to say adapted so that the user recognizes the object represented), at least one oral object as observed by the acquisition device. It is determined according to the actual acquisition conditions of the acquisition device.
- the reference frame can be, for example, a preview image, that is to say the image observed by the acquisition device, preferably the mobile phone, in real time and displayed on the screen of the acquisition device, and/or an equivalent image representing a part of the user, for example the head of the user or part of the head or the mouth or the dental arches of the user (by "equivalent", we mean that the image corresponds to an observation of the part of the patient superimposable with the image observed by the acquisition device, and in particular observed along the optical axis of the acquisition device).
- the equivalent image may be a line drawing, for example representing the outline of part of the user.
- the reference frame may represent a view of a user-specific model or a generic model, said model thus representing, precisely or more roughly, a part of the user, preferably at least the target, preferably at least the oral object.
- a generic model is common to several individuals.
- a generic reference frame can be determined in particular by statistical analysis of historical data representative of these individuals.
- a generic model can be for example a model of a typodont.
- a user-specific model may be a model of all or part of the user's oral object, in particular the user's target. It may in particular be a scan of the user's dental arches. It may also include or be a 3D model of an arch of the user in a configuration specific to a stage of the treatment. In particular, it may comprise or be a 3D model of an arch of the user in a configuration specific to a stage of a treatment with orthodontic aligners, and in particular a 3D model used for the design and manufacture of an orthodontic splint. Such a 3D model can be generated at the start of orthodontic treatment, or during orthodontic treatment.
- the equivalent image is preferably a view of a generic or specific model.
- a texture is applied to the model in order to make it more realistic and allow the user to identify more easily with the model.
- the texture can be extracted from an image, for example from an image acquired in step a), then applied to a model, preferably chosen from a database before the first step a).
- the model of which a view is used as a reference can in particular be the reference model used as a projection support for the acquired images, described above.
- the equivalent image can be at least partly symbolic. It may comprise, for example, a set of geometric shapes representing the oral object, for example a set of discs, each disc representing a tooth of a part of a dental arch, the oral object being the dental arch.
- Figure 10 represents two examples of equivalent images, representing the view of a 3D model of an arch of the user, with and without gum respectively.
- the view could also be, for example, a wireframe representation.
- the image acquisition device can display the preview image, preferably in a mini window, or "thumbnail", which facilitates spatial identification for the user.
- Displaying the reference frame on the screen is optional if the indicator provides an indication of the desired movement.
- the indicator may be an arrow or a message recommending a particular move.
- displaying the reference frame on the screen is preferred because it considerably facilitates precise positioning of the acquisition device.
- the indicator is displayed, preferably together with the reference frame, on the screen, preferably on the screen of the image acquisition device, preferably on a mobile phone or tablet screen.
- the reference frame includes a representation of the oral object and the indicator is a mark indicating an area of this representation not yet covered.
- the indicator may for example be a particular contour surrounding this area or, preferably, a particular color applied to this area, or a symbol superimposed on this area.
- by particular contour or color, we mean a contour or a color allowing the user to distinguish said area from the rest of the representation of the oral object.
- This display helps guide the user quickly and efficiently. This guidance, which leaves great freedom to the user, is intuitive, so that the user does not need to have been trained beforehand to be guided.
- the indicator can in particular be displayed transparently or highlighted on the representation of the oral object.
- the indicator is preferably displayed in augmented reality when the repository is a preview image or an equivalent image.
- in step c), the user is informed, preferably in real time, of the level of coverage achieved, that is to say the progress of the acquisition, and preferably of the coverage threshold.
- the coverage level information and/or the coverage threshold information is/are presented on a screen, preferably on the user's mobile phone screen.
- the information can for example take the form of a counter or a gauge, for example in the form of a progress bar.
- the coverage level and/or the coverage threshold is/are represented "graphically" on the screen, in particular in the form of line(s) and/or surface(s) and/or symbols.
- a representation of elements of the context of the target, for example parts of the oral object different from the target, that is to say parts of the oral object that the method does not aim to cover, for example teeth adjacent to the teeth for which images are to be acquired.
- the initial surface to be covered can be presented on the screen in a manner identifiable by the user in order to inform him of the coverage threshold.
- it can be colored with a specific color, or more generally represented with a specific appearance, or delimited by a specific outline. The area thus represented with a specific appearance or surrounded by this contour represents the coverage threshold.
- the surface covered, that is to say the surface for which at least one image has already been acquired, can be presented on the screen in a manner identifiable by the user in order to inform him of the level of coverage.
- it can be colored with a specific color, or more generally represented with a specific appearance, or delimited by a specific outline.
- the zone(s) thus represented with a specific appearance or surrounded by this contour represent(s) the level of coverage.
- a surface of the target can be displayed in green or red depending on whether the acquisition of this surface has been made or whether the acquisition of this surface remains to be done.
- the covered area can be displayed transparently or highlighted.
- the coverage threshold is represented graphically as a set of symbols displayed close to, preferably superimposed on, representations of a set of respective teeth.
- the representations of these teeth belong to, or even constitute, a frame of reference.
- symbols can be presented in augmented reality on the mobile phone preview image or on a view of a dental arch model, preferably over a view of a model of the user's dental arches.
- the appearance of symbols for teeth for which the desired images have already been acquired (“teeth covered”) may be different from that of symbols for teeth for which all of the desired images have not yet been acquired (“teeth still to be covered”), which makes it possible to graphically visualize the coverage rate.
- the symbol relating to a tooth disappears as soon as the tooth is covered. The user then sees the difference between the coverage threshold (all symbols initially displayed) and the coverage level (symbols having disappeared).
- the “graphical” display of the coverage threshold and coverage level is particularly effective for the user to acquire all the required images.
- the graphical representations of the coverage threshold and the coverage level make it possible to visualize the areas of the target remaining to be covered, that is to say for which desired images must still be acquired.
- These graphic representations can thus be used as an indicator to guide the user. For example, coloring the covered surface with a color different from the surface yet to be covered makes it possible to highlight the surface yet to be covered, and thus guides the user.
- the graphic, or "visual" marking of the initial surface to be covered, the surface covered or the surface yet to be covered is not limited to the application of a color or texture or outline or the representation of particular symbols.
- the graphical representations of the coverage threshold and the coverage level are displayed in augmented reality, preferably on the screen of the mobile phone.
- a stopwatch is activated to measure the duration of image acquisition since the first step a).
- the display of this duration and the coverage level and/or the difference between the coverage threshold and the coverage level are a motivating factor for the user.
- a score is calculated as a function of the duration to reach the coverage threshold and/or the quality of the images acquired, and/or the usefulness of the images acquired, or more generally as a function of an objective set for the user.
- the initial surface to be covered includes zones assigned a utility coefficient, and the score is determined based on the utility coefficients of the zones in the covered surface.
- the initial surface to be covered consists of a part that is essential to cover and a part that is optional to cover.
- the utility coefficient assigned to a pixel in the “essential” part can be, for example, 100 and the utility coefficient assigned to a pixel in the “optional” part can be, for example, 10.
- the score can, for example, be a function of, or even be, the sum of the utility coefficients for all the pixels in the covered surface.
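- A sketch of this utility-weighted score using the coefficients quoted above (100 for the essential part, 10 for the optional part); the toy masks are purely illustrative:

```python
import numpy as np

def acquisition_score(utility_map: np.ndarray, covered_mask: np.ndarray) -> float:
    """Sum of the per-pixel utility coefficients over the covered surface."""
    return float(utility_map[covered_mask].sum())

essential = np.array([[True, False],
                      [True, True]])             # toy "essential" part
utility = np.where(essential, 100, 10)           # coefficients from the text
covered = np.array([[True, True],
                    [False, True]])              # toy covered surface
print(acquisition_score(utility, covered))       # 100 + 10 + 100 = 210.0
```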
- the score can be compared to scores previously achieved by the user or by other users, so as to obtain a ranking of the operation of acquiring the set of images.
- a classification can be established for several patients, for example for all the patients of the same practitioner.
- An information message and/or a gift, for example a reward, may be sent to a patient depending on their ranking order.
- the timer and/or the score and/or the ranking may be displayed on the screen of the acquisition device.
- Acquisition thus becomes fun.
- acquisition can be presented as a video game, the objective being to reach the coverage threshold as quickly as possible.
- when the coverage level is presented to the user, he can advantageously immediately visualize the effect of moving the acquisition device, in particular when the target colors or the symbols associated with teeth change appearance or disappear as coverage progresses through image acquisition.
- the guidance is advantageously intuitive.
- the screen displays at all times a realistic representation of the oral object and the surface covered at that time.
- the surface covered is completed as the cycles progress, which allows the user to easily identify the surface that remains to be acquired, and to position and orient the image acquisition device accordingly.
- the time interval between two successive cycles of steps a) to c) is preferably less than 5 minutes, 1 minute, 30 seconds, or 1 second.
- the user acquires the images in real time, preferably by filming the oral object, steps b) and c) being carried out immediately for each acquired image.
- the acquired images can be transmitted to the user and/or, preferably, to a dental professional.
- the acquired images can be stored, for example in a database, preferably accessible to a dental professional and/or to the user.
- the acquired images can be stored in a user's medical file.
- the set of images acquired typically includes more than 2, more than 5, more than 10, more than 50, more than 100 and/or less than 10,000 images.
- Said set of acquired images can be used, in particular for:
- Figures 3, 5, 6 and 7 illustrate examples of implementation of the first main aspect of the invention, in which multidimensional symbols 24 are virtually arranged in space in order to guide the user towards the associated acquisition conditions.
- This presentation allows the user to quickly and simply identify the areas of teeth that remain to be covered and to easily orient the acquisition device accordingly. It also informs the user about the level of coverage.
- Symbols can symbolically represent the teeth to be covered (target).
- When, for example, at least 90% of the surface of a tooth is acquired, better when at least 95% of the surface of a tooth is covered, even better when the entire surface of a tooth is covered, the symbol symbolically representing this tooth is no longer presented on the screen. Alternatively, the symbol is displayed in color or highlighted.
- a device and a method according to the invention advantageously make it possible to increase the autonomy of the user and to improve the quality and content of the images acquired by a user having no particular knowledge in the dental field. They also make it possible to possibly produce a 3D model of a target belonging to or constituting an oral object of the user, remotely. Finally, they greatly facilitate the remote determination of orthodontic treatment, as well as monitoring of any orthodontic treatment, without the user needing to make an appointment with a dental professional.
- the mobile phone can be replaced by a device comprising a support equipped with a camera and held against the user during the acquisition of the set of images, and a screen displaying the scene observed by the camera, said screen being integrated into the support or located at a distance from the support.
- the shape of the symbols is not restrictive. 1, 2 or 3-dimensional symbols can be simultaneously presented in augmented reality.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Surgery (AREA)
- Public Health (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- General Health & Medical Sciences (AREA)
- Heart & Thoracic Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Biomedical Technology (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Dentistry (AREA)
- Pathology (AREA)
- Biophysics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Radiology & Medical Imaging (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Robotics (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Epidemiology (AREA)
- Processing Or Creating Images (AREA)
- Studio Devices (AREA)
- User Interface Of Digital Computer (AREA)
- Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
- Endoscopes (AREA)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR2208706A FR3139002A1 (fr) | 2022-08-31 | 2022-08-31 | Procédé d’acquisition d’un ensemble d’images d’un objet buccal |
| PCT/EP2023/073681 WO2024047046A1 (fr) | 2022-08-31 | 2023-08-29 | Procédé d'acquisition d'un ensemble d'images d'un objet buccal |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP4580478A1 (de) | 2025-07-09 |
Family
ID=84053420
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP23762446.5A Pending EP4580478A1 (de) | 2022-08-31 | 2023-08-29 | Verfahren zur erfassung eines satzes von bildern eines objekts im mund |
Country Status (6)
| Country | Link |
|---|---|
| EP (1) | EP4580478A1 (de) |
| JP (1) | JP2025529055A (de) |
| CN (1) | CN119968149A (de) |
| AU (1) | AU2023331962A1 (de) |
| FR (1) | FR3139002A1 (de) |
| WO (1) | WO2024047046A1 (de) |
Family Cites Families (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| IT944715B (it) | 1970-11-06 | 1973-04-20 | North American Rockwell | Perfezionamento nelle composizioni di cristalli e liquidi nematici |
| IT972194B (it) | 1972-10-13 | 1974-05-20 | Mobert Di Trezzi E Monguzzi | Impilatore trasferitore in partico lare per macchine confezionatrici di sacchetti in plastica |
| FR2206745A5 (de) | 1972-11-13 | 1974-06-07 | Kleber Colombes | |
| USD496995S1 (en) | 2002-12-06 | 2004-10-05 | Discus Dental Impressions, Inc. | Combined dental lip and tongue retractor |
| ATE313302T1 (de) | 2003-03-17 | 2006-01-15 | Kerrhawe Sa | Wangen-und lippenabhalter für die dentalmedizin |
| FR3027507B1 (fr) | 2014-10-27 | 2016-12-23 | H 42 | Procede de controle de la dentition |
| US10467498B2 (en) * | 2015-03-06 | 2019-11-05 | Matthew Lee | Method and device for capturing images using image templates |
| US20170310886A1 (en) * | 2016-04-26 | 2017-10-26 | J.A.K. Investments Group LLC | Method and system for performing a virtual consultation |
| KR102236360B1 (ko) * | 2019-03-29 | 2021-04-05 | 오스템임플란트 주식회사 | 스캔 가이드 제공 방법 이를 위한 영상 처리장치 |
| US20230404709A1 (en) * | 2020-11-03 | 2023-12-21 | Alta Smiles Llc | System and method for orthodontic treatment times while minimizing in-office visits for orthodontics |
- 2022
- 2022-08-31 FR FR2208706A patent/FR3139002A1/fr active Pending
- 2023
- 2023-08-29 JP JP2025510353A patent/JP2025529055A/ja active Pending
- 2023-08-29 CN CN202380062978.8A patent/CN119968149A/zh active Pending
- 2023-08-29 AU AU2023331962A patent/AU2023331962A1/en active Pending
- 2023-08-29 EP EP23762446.5A patent/EP4580478A1/de active Pending
- 2023-08-29 WO PCT/EP2023/073681 patent/WO2024047046A1/fr not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| FR3139002A1 (fr) | 2024-03-01 |
| AU2023331962A1 (en) | 2025-03-20 |
| WO2024047046A1 (fr) | 2024-03-07 |
| JP2025529055A (ja) | 2025-09-04 |
| CN119968149A (zh) | 2025-05-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP4101418B1 (de) | System zur herstellung einer orthodontischen vorrichtung | |
| EP3659545B1 (de) | System zur überwachung der positionierung der zähne eines patienten | |
| EP3212115B1 (de) | Verfahren zur überwachung einer orthodontischen behandlung | |
| FR3069359B1 (fr) | Procede d'analyse d'une image d'une arcade dentaire | |
| FR3069361B1 (fr) | Procede d'analyse d'une image d'une arcade dentaire | |
| EP3213293B1 (de) | Überprüfung des gebisses | |
| EP3796865B1 (de) | Vorrichtung zur analyse einer zahnsituation | |
| EP3796866B1 (de) | Computerprogramm und vorrichtung zur analyse einer zahnsituation | |
| FR3111268A1 (fr) | Procédé de segmentation automatique d’une arcade dentaire | |
| EP3552575B1 (de) | Verfahren zur erzeugung eines 3d-modells einer zahnreihe | |
| WO2020011863A1 (fr) | Procede de transformation d'une vue d'un modele 3d d'une arcade dentaire en une vue photorealiste | |
| EP4161434A1 (de) | Verfahren zur verfolgung einer zahnbewegung | |
| WO2024047046A1 (fr) | Procédé d'acquisition d'un ensemble d'images d'un objet buccal | |
| WO2024047054A1 (fr) | Procédé d'acquisition d'un ensemble d'images d'un objet buccal | |
| FR3139004A1 (fr) | Procédé de scannage d’une arcade dentaire d’un utilisateur | |
| WO2022248513A1 (fr) | Procede d'acquisition d'un modele d'une arcade dentaire | |
| FR3135891A1 (fr) | Procede d’acquisition d’un modele d’une arcade dentaire | |
| EP4348572A1 (de) | Verfahren zur erfassung eines modells eines zahnbogens | |
| WO2023227613A1 (fr) | Procede d'acquisition d'un modele d'une arcade dentaire |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
| 17P | Request for examination filed |
Effective date: 20250225 |
|
| AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
| DAV | Request for validation of the european patent (deleted) | ||
| DAX | Request for extension of the european patent (deleted) |