CN109793482B - Oral cavity scanning device and control method thereof

Oral cavity scanning device and control method thereof

Info

Publication number
CN109793482B
CN109793482B (application CN201910011675.9A)
Authority
CN
China
Prior art keywords
image data
reference image
scanning
tooth
data
Prior art date
Legal status
Active
Application number
CN201910011675.9A
Other languages
Chinese (zh)
Other versions
CN109793482A (en)
Inventor
李宗熹
吴壮为
Current Assignee
Qisda Optronics Suzhou Co Ltd
Qisda Corp
Original Assignee
Qisda Optronics Suzhou Co Ltd
Qisda Corp
Priority date
Filing date
Publication date
Application filed by Qisda Optronics Suzhou Co Ltd, Qisda Corp filed Critical Qisda Optronics Suzhou Co Ltd
Priority to CN201910011675.9A
Publication of CN109793482A
Application granted
Publication of CN109793482B

Landscapes

  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The invention discloses an oral cavity scanning device and a control method thereof. The control method comprises the following steps: establishing a plurality of reference image data corresponding to each tooth according to the tooth position; setting a target scanning range and displaying the corresponding reference image data; capturing a plurality of scanned images of each tooth within the target scanning range; establishing the corresponding three-dimensional model image according to the plurality of scanned images, and simultaneously displaying the plurality of scanned images and the three-dimensional model image; analyzing and comparing the feature points of each scanned image with the reference image data, judging whether any reference image data is missing, and displaying the completion state of the reference image data accordingly; and letting the user judge whether to finish the scanning operation according to the display state of the reference image data. The oral cavity scanning device can display the scanning state of every tooth surface in real time during scanning, so that the user can perceive scanning flaws immediately without being limited by the current viewing angle.

Description

Oral cavity scanning device and control method thereof
Technical Field
The invention relates to the field of oral cavity scanning, in particular to an oral cavity scanning device and a control method thereof.
Background
Generally, to create a three-dimensional dental model image, a user (e.g., medical staff) holds an intraoral scanner (IOS) and optically scans the upper and lower jaw areas in the patient's oral cavity, for example by projecting structured light with a specific pattern using Digital Light Processing (DLP) projection technology, or by projecting a linear laser beam from a laser light source. After image identification and synthesis by a back-end host, the corresponding three-dimensional dental model image can be displayed on a screen for reference in subsequent dental implantation or denture fabrication.
However, for a handheld oral cavity scanner, the image capturing mechanism at the front end of the device must fit within the limited space of the oral cavity, so the Field of View (FOV) of each capture is limited: a single capture covers only a small area, and the user must keep moving the scanner and stitch the images one by one to obtain a rough image of the target area. Moreover, during operation the screen can only display the three-dimensional tooth model image from one specific viewing angle; the user must first stop scanning, switch the viewing angle with the mouse or keyboard to find small holes, and then perform supplementary scanning or rescanning. The user therefore has to switch continuously between the scanning mode and the observation mode to obtain a complete three-dimensional scan model.
Disclosure of Invention
Therefore, an object of the present invention is to provide an oral cavity scanning device and a control method thereof that can display the image scanning status of every tooth surface in real time during scanning, so that the user of the oral cavity scanning device can detect scanning defects immediately without being limited by the current viewing angle.
To achieve the above object, the present invention provides a method for controlling an oral cavity scanning device, comprising the steps of:
step S1, establishing a plurality of reference image data corresponding to each tooth according to the position of the tooth;
step S2, setting a target scanning range and displaying reference image data corresponding to the target scanning range;
step S3, capturing a plurality of scanned images of each tooth within the target scanning range, and sensing corresponding position information;
step S4, creating a three-dimensional model image corresponding to the target scanning range according to the plurality of scanned images, and displaying the plurality of scanned images and the three-dimensional model image at the same time;
step S5, analyzing and comparing the feature points of the reference image data with the scanned images, and determining whether the reference image data is missing; if so, displaying the missing part of the reference image data as an unfinished state; if not, displaying the reference image data as a finished state; and
in step S6, the user determines whether to end the scanning operation according to the display status of the reference image data.
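For illustration only (this sketch is not part of the disclosed method), the control flow of steps S1 to S6 can be organized roughly as follows in Python; all helper names such as build_reference_data, build_3d_model and find_missing_subregions are hypothetical placeholders standing in for the operations described above.

```python
# Illustrative sketch only: the helpers referenced here (build_reference_data,
# build_3d_model, find_missing_subregions) and the scanner/display objects are
# hypothetical placeholders for the operations described in steps S1-S6.

def run_scan_session(scanner, display, tooth_positions, target_range):
    # Step S1: establish reference image data for each tooth position.
    reference = {pos: build_reference_data(pos) for pos in tooth_positions}

    # Step S2: set the target scanning range and display its reference data.
    target_reference = {pos: reference[pos] for pos in target_range}
    display.show_reference(target_reference)

    scans = []
    while True:
        # Step S3: capture a scanned image and its position information.
        image, pose = scanner.capture()
        scans.append((image, pose))

        # Step S4: rebuild the three-dimensional model and display both views.
        model = build_3d_model(scans)
        display.show(scans, model)

        # Step S5: compare feature points with the reference data and mark
        # every sub-region as finished or unfinished.
        for pos, ref in target_reference.items():
            missing = find_missing_subregions(ref, scans)
            display.mark_completion(pos, missing)

        # Step S6: the user decides whether to end the scan from the display.
        if display.user_wants_to_stop():
            return model
```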
As an optional technical solution, in step S2, the displaying the reference image data corresponding to the target scanning range includes:
calculating reference image data corresponding to the target scanning range according to the target scanning range and the plurality of reference image data; and
when entering the scanning program, searching for reference image data corresponding to the target scanning range; if found, reading and displaying the reference image data; if not found, entering a user self-operation mode.
As an optional technical solution, the step of calculating the reference image data corresponding to the target scanning range includes:
step S21, reading first historical medical record data corresponding to the target scanning range, wherein the first historical medical record data comprises tooth position information;
step S22, establishing a first basic image frame according to the first historical medical record data;
step S23, searching for second historical medical record data of the same patient; if such data is found, proceeding to step S24; if not, forming a modified image frame according to the first basic image frame, wherein the second historical medical record data comprises at least one of historical scanned image data and historical reference image data;
step S24, determining whether a second basic image frame of the same patient exists; if so, forming a modified image frame according to the first basic image frame and the second basic image frame; if not, forming the modified image frame according to the first basic image frame, wherein the second basic image frame may correspond to the same tooth position or a different tooth position of the same patient; and
step S25, forming the corresponding reference image data according to the modified image frame.
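Purely as a sketch of how steps S21 to S25 might be organized (the record layout, dictionary keys and function name below are assumptions made for illustration, not part of the disclosure), the reference image data could be derived from the historical medical record data as follows:

```python
# Hypothetical sketch of steps S21-S25; the data layout is an assumption.

def build_reference_for_range(patient_records, patient_id, target_range):
    # Step S21: read the first historical medical record data (tooth position
    # information) that corresponds to the target scanning range.
    first_record = patient_records[patient_id]["by_range"][tuple(target_range)]

    # Step S22: establish a first basic image frame from that record.
    first_frame = {"teeth": first_record["tooth_positions"], "source": "record"}

    # Step S23: look for second historical medical record data of the same
    # patient (historical scanned images or historical reference image data).
    second_record = patient_records[patient_id].get("second_record")

    # Step S24: if a second basic image frame of the same patient exists
    # (same or different tooth positions), merge it into the modified frame.
    if second_record and second_record.get("basic_frame"):
        modified_frame = {**first_frame, **second_record["basic_frame"]}
    else:
        modified_frame = first_frame

    # Step S25: form the corresponding reference image data from the modified
    # image frame (to be adjusted per patient: missing, decayed or denture teeth).
    return {"frame": modified_frame, "sub_regions": {}}
```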
As an alternative solution, the reference image data is divided into a plurality of sub-regions according to the tooth position and tooth shape, wherein,
the incisor is divided into a plurality of sub-regions including a lingual surface, a buccal (labial) surface, the junction between the lingual tooth surface and the gum, the junction between the buccal tooth surface and the gum, a lingual gum and a buccal gum;
the canine is divided into a plurality of sub-regions including a small lingual surface, a small buccal surface, a small occlusal surface, the junction between the lingual tooth surface and the gum, the junction between the buccal tooth surface and the gum, a lingual gum and a buccal gum; and
the molar is divided into a plurality of sub-regions including a large lingual surface, a large buccal surface, a large occlusal surface, the junction between the lingual tooth surface and the gum, the junction between the buccal tooth surface and the gum, a lingual gum and a buccal gum.
As an optional technical solution, the reference image data further includes a sub-region for the junction between adjacent tooth positions.
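For illustration, one possible way to encode this sub-region division is a simple lookup table keyed by tooth type; the exact naming and data structure below are assumptions, not the patented data format.

```python
# Hypothetical encoding of the sub-region division described above.
# The sub-region names follow the text; the structure itself is an assumption.

SUB_REGIONS = {
    "incisor": [
        "lingual_surface", "buccal_surface",
        "lingual_tooth_gum_junction", "buccal_tooth_gum_junction",
        "lingual_gum", "buccal_gum",
    ],
    "canine": [
        "small_lingual_surface", "small_buccal_surface", "small_occlusal_surface",
        "lingual_tooth_gum_junction", "buccal_tooth_gum_junction",
        "lingual_gum", "buccal_gum",
    ],
    "molar": [
        "large_lingual_surface", "large_buccal_surface", "large_occlusal_surface",
        "lingual_tooth_gum_junction", "buccal_tooth_gum_junction",
        "lingual_gum", "buccal_gum",
    ],
    # The reference image data additionally records the junction between
    # adjacent tooth positions.
    "inter_tooth": ["tooth_to_tooth_junction"],
}
```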
As an optional technical solution, in step S4, the step of establishing the three-dimensional model image corresponding to the target scanning range according to the plurality of scanning images includes:
calculating a plurality of three-dimensional point cloud data from the plurality of scanned images; and
comparing feature points among the plurality of three-dimensional point cloud data, and splicing them to form the three-dimensional model image.
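As a simplified illustration of the "compare feature points and splice" step (a generic rigid-alignment sketch under the assumption that matched feature points are already available, not the patented algorithm), two point clouds can be aligned with the Kabsch algorithm and then merged:

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch algorithm: best-fit rotation R and translation t such that
    R @ src + t approximates dst, for matched feature points (N x 3 arrays)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def stitch(cloud_a, cloud_b, matches_a, matches_b):
    """Align cloud_b onto cloud_a using matched feature points and merge them."""
    R, t = rigid_transform(matches_b, matches_a)
    aligned_b = cloud_b @ R.T + t
    return np.vstack([cloud_a, aligned_b])
```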
As an optional technical solution, the step S5 includes:
step S51, comparing each scanned image with the reference image data according to the position information and image features of each scanned image, and adding labels for group classification;
step S52, performing further analysis on each classified group of scanned images and determining the integrity of the scanned images, including whether there is a step difference or a hole between images and whether the color is continuous, so as to determine whether any sub-region data on the reference image data is missing; and
step S53, marking each sub-region on the reference image data with a different color or material according to the integrity of the sub-region data analyzed in step S52.
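As an illustrative aid for step S53 (the thresholds and color choices below are assumptions made only for this sketch, not values taken from the disclosure), the completion state of each sub-region could be mapped to a marking color as follows:

```python
# Hypothetical sketch of step S53: mark each sub-region of the reference
# image data with a color reflecting how complete its scan data is.

def mark_completion(subregion_integrity):
    """subregion_integrity maps sub-region name -> coverage ratio in [0, 1],
    e.g. the fraction of the sub-region covered without holes or steps."""
    marks = {}
    for name, coverage in subregion_integrity.items():
        if coverage >= 0.95:
            marks[name] = "green"      # finished
        elif coverage > 0.0:
            marks[name] = "yellow"     # partially scanned, supplement needed
        else:
            marks[name] = "red"        # not scanned yet
    return marks

# Example: an incisor whose buccal gum region has not been reached yet.
print(mark_completion({"lingual_surface": 1.0,
                       "buccal_surface": 0.6,
                       "buccal_gum": 0.0}))
```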
As an optional technical solution, the step S5 includes:
step S54, comparing the recorded trajectory changes and direction data of each scanned image with the reference image data, and adding labels for group classification;
step S52, performing further analysis on each classified group of scanned images and determining the integrity of the scanned images, including whether there is a step difference or a hole between images and whether the color is continuous, so as to determine whether any sub-region data on the reference image data is missing; and
step S53, marking each sub-region on the reference image data with a different color or material according to the integrity of the sub-region data analyzed in step S52.
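To illustrate how trajectory and direction data from the inertial measurement unit could drive step S54 (a sketch only; the threshold value and function name are assumptions), the scanned-image sequence can be split into groups wherever a gravity reading jumps abruptly, for example when the scanner body flips from the occlusal surface toward the lingual or buccal side:

```python
import numpy as np

# Hypothetical sketch of step S54: group scanned images by abrupt changes
# reported by the inertial measurement unit. The threshold is an assumption.

def group_by_orientation(gravity_samples, threshold=4.0):
    """gravity_samples: (N, 3) array of per-frame gravity readings (m/s^2).
    Returns a group label per frame; a new group starts whenever any axis
    jumps by more than `threshold` between consecutive frames."""
    g = np.asarray(gravity_samples, dtype=float)
    jumps = np.abs(np.diff(g, axis=0)).max(axis=1) > threshold
    labels = np.concatenate([[0], np.cumsum(jumps)])
    return labels

# Example: frames 0-2 face the occlusal surface, frames 3-4 the buccal side.
g = [[0, 0, -9.8], [0, 0.1, -9.8], [0, 0, -9.7], [9.8, 0, -0.2], [9.7, 0.1, 0]]
print(group_by_orientation(g))   # -> [0 0 0 1 1]
```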
As an optional technical solution, the reference image data is a two-dimensional image or a three-dimensional image; the reference image data is displayed in the main display picture but independently of the three-dimensional model image, or the reference image data is displayed in the main display picture and superimposed on the three-dimensional model image.
As an optional technical solution, if the display area of the reference image data overlaps with the display area of the three-dimensional model image, the reference image data is presented as a semi-transparent material or as frame lines only.
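A minimal sketch of such a semi-transparent presentation, assuming the reference image and the rendered model view are already available as same-sized images (the alpha value is an assumption for illustration):

```python
import numpy as np

# Hypothetical sketch: when the reference image overlaps the rendered
# three-dimensional model view, blend it in with a low alpha value so it
# does not obscure the model.

def overlay_reference(model_view, reference_view, alpha=0.3):
    """model_view, reference_view: (H, W, 3) uint8 images of the same size."""
    model = model_view.astype(np.float32)
    ref = reference_view.astype(np.float32)
    blended = (1.0 - alpha) * model + alpha * ref
    return blended.astype(np.uint8)
```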
The present invention also provides an oral cavity scanning device, comprising:
the oral cavity scanner comprises a projection unit, an image sensing unit and an inertia measurement unit, and is used for capturing a plurality of scanning images of each tooth in a target scanning range and sensing corresponding position information; and
the image processing device is operatively connected with the oral cavity scanner and is used for establishing a plurality of reference image data corresponding to each tooth according to the tooth position; for setting the target scanning range and displaying the reference image data corresponding to the target scanning range; for establishing the three-dimensional model image of the target scanning range according to the plurality of scanned images and simultaneously displaying the plurality of scanned images and the three-dimensional model image; and for analyzing and comparing the feature points of each scanned image with the reference image data and judging whether the reference image data is missing; if so, the missing part of the reference image data is displayed as an unfinished state; if not, the reference image data is displayed as a finished state.
As an optional technical solution, a three-dimensional modeling system is installed in the image processing device, the three-dimensional modeling system interacts with the oral scanner through a transmission device, and the three-dimensional modeling system includes:
the image detection and decoding module is used for calculating a plurality of three-dimensional point cloud data according to the plurality of scanning images;
the modeling module is used for comparing the characteristic points of the plurality of three-dimensional point cloud data so as to splice and form a three-dimensional model image; and
the processing module is used for comparing feature points of the plurality of three-dimensional point cloud data with the reference image data, confirming whether any sub-region data on the reference image data is missing, and marking the scanning completion state of each sub-region on the reference image data with a different color or material according to the integrity of that sub-region's data.
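For illustration only, the three modules might be organized along the following lines; the class and method names are assumptions, and the feature-matching and meshing internals are deliberately elided.

```python
# Hypothetical skeleton of the three-dimensional modeling system.
# Class and method names are placeholders; the internals are elided.

class ImageDetectionDecodingModule:
    def to_point_clouds(self, scanned_images):
        """Compute a three-dimensional point cloud from each scanned image."""
        raise NotImplementedError

class ModelingModule:
    def stitch(self, point_clouds):
        """Compare feature points between point clouds and splice them into
        a single three-dimensional model image."""
        raise NotImplementedError

class ProcessingModule:
    def update_reference(self, point_clouds, reference_data):
        """Compare the point clouds with the reference image data, decide
        which sub-regions are missing, and mark each sub-region's scan
        completion state with a distinct color or material."""
        raise NotImplementedError
```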
Compared with the prior art, the oral cavity scanning device and the control method thereof display the reference image data as a guided graphical user interface that synchronously prompts the current scanning progress during scanning, automatically analyze the scanned (stitched) image data to determine whether any viewing angle contains a break or hole that requires supplementary scanning, and provide a suggested scanning route. The user is therefore not limited by the current viewing angle, the frequency of switching between the scanning mode and the observation mode is reduced, and an optimal scanning result is obtained.
The advantages and spirit of the present invention can be further understood by the following detailed description of the invention and the accompanying drawings.
Drawings
FIG. 1 is a schematic view of an oral scanning device according to the present invention;
fig. 2 is a flow chart illustrating a control method of the oral cavity scanning device according to the present invention.
Detailed Description
Referring to fig. 1, fig. 1 is a schematic view of an oral cavity scanning device according to the present invention. The invention provides an oral cavity scanning device 10, which comprises an oral cavity scanner 1 and an image processing device 2. The oral cavity scanner 1 can employ a conventional dental model image scanning technology (e.g., projecting structured light with a specific pattern by digital light processing projection technology, or projecting a linear laser beam from a laser light source) to scan the object to be scanned; the principle of dental image scanning is described in the prior art and will not be repeated herein. The oral scanner 1 can capture two-dimensional images, or collect one-dimensional images and combine them into two-dimensional images, and provide them to the image processing device 2 for data processing. The image processing device 2 is electrically connected to the oral scanner 1 and is preferably a host with a screen and image processing and display functions, such as a desktop computer, a notebook computer, or a tablet computer. The image processing device 2 has a display unit for displaying a three-dimensional model image 4 created from the scanned images transmitted from the oral scanner 1.
The oral scanner 1 includes a projection unit, an image sensing unit, and an inertia measurement unit, and the oral scanner 1 is configured to capture a plurality of scanning images of each tooth within a target scanning range and sense corresponding position information. In this embodiment, the plurality of scanned images are two-dimensional scanned images, and the plurality of scanned images may correspond to an occlusal surface, a lingual surface, a buccal surface, a lingual tooth surface and a gingival joint, a buccal tooth surface and a gingival joint, a gingival area, and the like. The image sensing unit may be, for example, a Charge-coupled Device (CCD) sensor, a Complementary Metal-Oxide Semiconductor (CMOS) sensor, or other sensors. The projection unit may be, for example, a Digital Light Processing (DLP) module, a 3LCD projection module, a Liquid Crystal on Silicon (LCos) projection module, a laser projection module, or other projection modules. The inertial measurement unit may be a gravity sensor (G sensor), a gyroscope (gyro), an electronic compass, or a combination thereof, and may sense motion changes corresponding to three axial directions according to a motion state of the oral scanner 1.
The image processing device 2 is used for establishing a plurality of reference image data corresponding to each tooth according to the tooth position; the reference image data functions like a guiding graphical user interface and may include, for example, a guiding graphical interface and a scanning route. The image processing device 2 is further configured to set a target scanning range and display the reference image data 3 corresponding to the target scanning range on the display unit 21 of the image processing device 2. The image processing device 2 is further configured to establish a three-dimensional model image 4 of the target scanning range according to the plurality of scanned images and simultaneously display the plurality of scanned images and the three-dimensional model image on the display unit 21; it is further configured to analyze and compare the feature points of each scanned image with the reference image data and determine whether the reference image data is missing. If the reference image data is missing, the missing part of the reference image data is displayed as an unfinished state, for example in a specific color, or a frame of a specific color is marked on the corresponding scanned image; if nothing is missing, the reference image data is displayed as a complete state, for example as 100% on a percentage indicator or progress bar. The user of the oral cavity scanning device can determine whether to end the scanning operation according to the display state of the reference image data; that is, the display state of the reference image data shows the scanning condition of the teeth within the target scanning range, and the user can decide to end the scan or continue scanning along the suggested scanning route accordingly.
In the present embodiment, the image processing device 2 is provided with a three-dimensional modeling system that interacts with the oral scanner via a transmission device (for example, wireless or wired transmission), and the three-dimensional modeling system includes an image detection and decoding module, a modeling module, and a processing module. The image detection and decoding module is used for calculating a plurality of three-dimensional point cloud data according to the plurality of scanned images. The modeling module is used for comparing feature points of the three-dimensional point cloud data so as to splice them into a three-dimensional model image. The processing module is used for comparing feature points of the plurality of three-dimensional point cloud data with the reference image data, confirming whether any sub-region data on the reference image data is missing, and marking the scanning completion state of each sub-region on the reference image data with a different color or material according to the integrity of that sub-region's data.
The reference image data is, for example, a two-dimensional image or a three-dimensional image. The reference image data 3 is displayed in the main display screen 5 of the display unit 21 but independently of the three-dimensional model image; alternatively, the reference image data 3 is displayed on the main display screen 5 superimposed on the three-dimensional model image. Preferably, if the display of the reference image data overlaps with the display area of the three-dimensional model image, the reference image data is presented as a semi-transparent material or as frame lines only, so as to avoid visual interference.
Referring to fig. 2, fig. 2 is a flow chart illustrating a control method of the oral cavity scanning device according to the present invention. The present invention also provides a control method of an oral cavity scanning device, which can be used for controlling the oral cavity scanning device 10, and the control method of the oral cavity scanning device comprises the following steps:
step S1, establishing a plurality of reference image data corresponding to each tooth according to the position of the tooth; this step belongs to the default stage of the database;
step S2, setting a target scanning range, and displaying reference image data corresponding to the target scanning range by a display unit;
step S3, capturing a plurality of scanned images of each tooth within the target scanning range, and sensing corresponding position information;
step S4, building a three-dimensional model image corresponding to the target scanning range according to the plurality of scanned images, and displaying the plurality of scanned images and the three-dimensional model image on a display unit at the same time;
step S5, analyzing and comparing the feature points of the scanned images with the reference image data, and determining whether the reference image data is missing; if so, displaying the missing part of the reference image data as an unfinished state; if not, displaying the reference image data as a finished state; and
in step S6, the user determines whether to end the scanning operation according to the display status of the reference image data.
In step S2, the displaying the reference image data corresponding to the target scanning range includes:
calculating reference image data corresponding to the target scanning range according to the target scanning range and the plurality of reference image data; and
when entering the three-dimensional scanning program, searching for reference image data corresponding to the target scanning range; if found, reading and displaying the reference image data; if not found, entering a user self-operation mode.
The step of calculating the reference image data corresponding to the target scanning range includes, for example:
step S21, reading first historical medical record data corresponding to the target scanning range, wherein the first historical medical record data comprises tooth position information;
step S22, establishing a first basic image frame according to the first historical medical record data;
step S23, searching for second historical medical record data of the same patient; if such data is found, proceeding to step S24; if not, forming a modified image frame according to the first basic image frame, wherein the second historical medical record data comprises at least one of historical scanned image data and historical reference image data;
step S24, determining whether a second basic image frame of the same patient exists; if so, forming a modified image frame according to the first basic image frame and the second basic image frame; if not, forming the modified image frame according to the first basic image frame; and
step S25, forming the corresponding reference image data according to the modified image frame. Since the tooth shape and condition (such as a missing tooth, a decayed tooth, or a denture) differ from person to person even for the same tooth position, the reference image data needs to be adjusted accordingly.
In addition, the reference image data is divided into a plurality of sub-regions according to the tooth position and tooth shape. For example, the incisor is divided into a plurality of sub-regions including a lingual surface, a buccal (labial) surface, the junction between the lingual tooth surface and the gum, the junction between the buccal tooth surface and the gum, a lingual gum and a buccal gum. The canine is divided into a plurality of sub-regions including, for example, a small lingual surface, a small buccal surface, a small occlusal surface, the junction between the lingual tooth surface and the gum, the junction between the buccal tooth surface and the gum, a lingual gum and a buccal gum. The molar is divided into a plurality of sub-regions including, for example, a large lingual surface, a large buccal surface, a large occlusal surface, the junction between the lingual tooth surface and the gum, the junction between the buccal tooth surface and the gum, a lingual gum and a buccal gum. Moreover, the reference image data further includes a sub-region for the junction between adjacent tooth positions. In this way, images of every face of each tooth can be displayed simultaneously during scanning, so that the user is not limited by the viewing angle.
In step S4, the step of creating a three-dimensional model image corresponding to the target scanning range from the plurality of scanning images includes:
calculating a plurality of three-dimensional point cloud data from the plurality of scanned images; and
comparing feature points among the plurality of three-dimensional point cloud data, and splicing them to form the three-dimensional model image.
Furthermore, in one embodiment, the step S5 includes:
step S51, identifying and classifying the scanned images: comparing each scanned image with the reference image data according to the position information and image features of each scanned image, and adding labels for group classification. The image features are, for example, the local density, color, and shape of the image; for instance, teeth, gums, and the gaps between teeth differ in color and shape, so they can be distinguished accordingly. In addition, a single scanned image is not limited to one label: if the data characteristics of a scanned image are similar to those of the surrounding scanned images, the group labels of those surrounding images can also be added, and the analysis then proceeds according to the group characteristics;
step S52, checking the groups of scanned image data: performing further analysis on each classified group of scanned images and determining the integrity of the scanned images, including whether there is a step difference or a hole between images and whether the color is continuous, so as to determine whether any sub-region data on the reference image data is missing; and
step S53, updating the reference image data: according to the integrity of the sub-region data analyzed in step S52, marking each sub-region on the reference image data with a different color or material to indicate its scanning completion status.
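A minimal sketch of the grouping in step S51, assuming each scan carries position information and a color image (the similarity thresholds, feature choice and data layout are assumptions for illustration only, not the disclosed identification method):

```python
import numpy as np

# Hypothetical sketch of step S51: attach a group label to each scanned
# image by comparing its position information and a simple image feature
# (here the mean color) against previously labelled images.

def label_scans(scans, pos_tol=2.0, color_tol=20.0):
    """scans: list of dicts with 'position' (x, y, z) and 'image' (H, W, 3).
    Returns one group label per scan; a scan shares the label of the first
    sufficiently similar earlier scan, otherwise it starts a new group."""
    labels, features = [], []
    for scan in scans:
        pos = np.asarray(scan["position"], dtype=float)
        color = np.asarray(scan["image"], dtype=float).reshape(-1, 3).mean(axis=0)
        label = None
        for prev_label, (prev_pos, prev_color) in zip(labels, features):
            if (np.linalg.norm(pos - prev_pos) < pos_tol
                    and np.linalg.norm(color - prev_color) < color_tol):
                label = prev_label
                break
        if label is None:
            label = max(labels, default=-1) + 1
        labels.append(label)
        features.append((pos, color))
    return labels
```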
In another embodiment, the step S5 may include the following steps:
step S54, comparing the recorded trajectory changes and direction data of each scanned image with the reference image data, and adding labels for group classification; for example, when the normal vector of the body of the oral scanner turns from the occlusal surface of a tooth toward the lingual side or the buccal side, the gravity (g) value or the angular velocity in at least one of the three axial directions (X/Y/Z) changes abruptly and markedly, so the scanned images can be classified according to this change;
step S52, performing further analysis on each classified group of scanned images and determining the integrity of the scanned images, including whether there is a step difference or a hole between images and whether the color is continuous, so as to determine whether any sub-region data on the reference image data is missing; and
step S53, marking the scanning completion status of each sub-region on the reference image data with a different color or material according to the integrity of each sub-region's data analyzed in step S52.
The above detailed description of the preferred embodiments is intended to describe the features and spirit of the present invention more clearly, and the scope of the present invention is not limited to the preferred embodiments disclosed above. On the contrary, the intention is to cover various modifications and equivalent arrangements within the scope of the appended claims. The scope of the claims should therefore be accorded the broadest interpretation so as to encompass all such modifications and equivalent arrangements.

Claims (12)

1. A control method of an oral cavity scanning device is characterized by comprising the following steps:
step S1, establishing a plurality of reference image data corresponding to each tooth according to the position of the tooth, and dividing the reference image data into a plurality of sub-regions according to the tooth position and tooth shape, so that images of every face of each tooth can be displayed simultaneously during scanning;
step S2, setting a target scanning range and displaying reference image data corresponding to the target scanning range;
step S3, capturing a plurality of scan images of each tooth within the target scan range, and sensing corresponding position information;
step S4, building a three-dimensional model image corresponding to the target scanning range according to the plurality of scanned images, and displaying the plurality of scanned images and the three-dimensional model image at the same time;
step S5, analyzing and comparing the feature points of the scanned images with the reference image data, and determining whether the reference image data is missing; if so, displaying the missing part of the reference image data as an unfinished state; if not, displaying the reference image data as a finished state; and
step S6, the user determines whether to end the scanning operation according to the display status of the reference image data;
the reference image data are two-dimensional images, are overlapped on the three-dimensional model image in a semitransparent mode, and serve as a guiding user graphical interface to guide a scanning route.
2. The method for controlling an oral cavity scanning device according to claim 1, wherein the step S2 for displaying the reference image data corresponding to the target scanning range includes:
calculating reference image data corresponding to the target scanning range according to the target scanning range and the plurality of reference image data; and
when entering the scanning program, searching for reference image data corresponding to the target scanning range; if found, reading and displaying the reference image data; if not found, entering a user self-operation mode.
3. The method as claimed in claim 2, wherein the step of calculating the reference image data corresponding to the target scanning range comprises:
step S21, reading first historical medical record data corresponding to the target scanning range, wherein the first historical medical record data comprises tooth position information;
step S22, establishing a first basic image frame according to the first historical medical record data;
step S23, searching for second historical medical record data of the same patient; if such data is found, proceeding to step S24; if not, forming a modified image frame according to the first basic image frame, wherein the second historical medical record data comprises at least one of historical scanned image data and historical reference image data;
step S24, determining whether a second basic image frame of the same patient exists; if so, forming a modified image frame according to the first basic image frame and the second basic image frame; if not, forming the modified image frame according to the first basic image frame, wherein the second basic image frame may correspond to the same tooth position or a different tooth position of the same patient; and
step S25, forming the corresponding reference image data according to the modified image frame.
4. The method as claimed in claim 1, wherein the reference image data is divided into a plurality of sub-regions according to the tooth position and tooth shape, wherein
the incisor is divided into a plurality of sub-regions including a lingual surface, a buccal (labial) surface, the junction between the lingual tooth surface and the gum, the junction between the buccal tooth surface and the gum, a lingual gum and a buccal gum;
the canine is divided into a plurality of sub-regions including a small lingual surface, a small buccal surface, a small occlusal surface, the junction between the lingual tooth surface and the gum, the junction between the buccal tooth surface and the gum, a lingual gum and a buccal gum; and
the molar is divided into a plurality of sub-regions including a large lingual surface, a large buccal surface, a large occlusal surface, the junction between the lingual tooth surface and the gum, the junction between the buccal tooth surface and the gum, a lingual gum and a buccal gum.
5. The method as claimed in claim 4, wherein the reference image data further includes a sub-region for the junction between adjacent tooth positions.
6. The method for controlling an oral cavity scanning device according to claim 1, wherein in step S4, the step of creating a three-dimensional model image corresponding to the target scanning range from the plurality of scanning images includes:
calculating a plurality of three-dimensional point cloud data from the plurality of scanned images; and
comparing feature points among the plurality of three-dimensional point cloud data, and splicing them to form the three-dimensional model image.
7. The method for controlling an oral cavity scanning device according to claim 1, wherein the step S5 includes:
step S51, comparing each scanned image with the reference image data according to the position information and image features of each scanned image, and adding labels for group classification;
step S52, performing further analysis on each classified group of scanned images and determining the integrity of the scanned images, including whether there is a step difference or a hole between images and whether the color is continuous, so as to determine whether any sub-region data on the reference image data is missing; and
step S53, marking each sub-region on the reference image data with a different color or material according to the integrity of the sub-region data analyzed in step S52.
8. The method for controlling an oral cavity scanning device according to claim 1, wherein the step S5 includes:
step S54, comparing the recorded trajectory changes and direction data of each scanned image with the reference image data, and adding labels for group classification;
step S52, performing further analysis on each classified group of scanned images and determining the integrity of the scanned images, including whether there is a step difference or a hole between images and whether the color is continuous, so as to determine whether any sub-region data on the reference image data is missing; and
step S53, marking each sub-region on the reference image data with a different color or material according to the integrity of the sub-region data analyzed in step S52.
9. The method as claimed in claim 1, wherein the reference image data is a two-dimensional image or a three-dimensional image, and the reference image data is displayed in the main display picture but independently of the three-dimensional model image; or the reference image data is displayed in the main display picture and superimposed on the three-dimensional model image.
10. The method as claimed in claim 1, wherein if the display area of the reference image data overlaps with the display area of the three-dimensional model image, the reference image data is presented as a semi-transparent material or as frame lines.
11. An oral scanning device, comprising:
the oral cavity scanner comprises a projection unit, an image sensing unit and an inertia measurement unit, and is used for capturing a plurality of scanning images of each tooth in a target scanning range and sensing corresponding position information; and
the image processing device is operatively connected with the oral cavity scanner and is used for establishing a plurality of reference image data corresponding to each tooth according to the tooth position; for setting the target scanning range and displaying the reference image data corresponding to the target scanning range; for establishing the three-dimensional model image of the target scanning range according to the plurality of scanned images and simultaneously displaying the plurality of scanned images and the three-dimensional model image; and for analyzing and comparing the feature points of each scanned image with the reference image data and judging whether the reference image data is missing; if so, the missing part of the reference image data is displayed as an unfinished state; if not, the reference image data is displayed as a finished state.
12. The oral scanning device of claim 11, wherein the image processing device is installed with a three-dimensional modeling system, the three-dimensional modeling system interacts with the oral scanner through a transmission device, the three-dimensional modeling system comprises:
the image detection and decoding module is used for calculating a plurality of three-dimensional point cloud data according to the plurality of scanning images;
the modeling module is used for comparing the characteristic points of the plurality of three-dimensional point cloud data so as to splice and form a three-dimensional model image; and
the processing module is used for comparing feature points of the three-dimensional point cloud data with the reference image data, confirming whether any sub-region data on the reference image data is missing, and marking the scanning completion state of each sub-region on the reference image data with a different color or material according to the integrity of that sub-region's data.
CN201910011675.9A 2019-01-07 2019-01-07 Oral cavity scanning device and control method thereof Active CN109793482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910011675.9A CN109793482B (en) 2019-01-07 2019-01-07 Oral cavity scanning device and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910011675.9A CN109793482B (en) 2019-01-07 2019-01-07 Oral cavity scanning device and control method thereof

Publications (2)

Publication Number Publication Date
CN109793482A CN109793482A (en) 2019-05-24
CN109793482B (en) 2022-06-21

Family

ID=66558476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910011675.9A Active CN109793482B (en) 2019-01-07 2019-01-07 Oral cavity scanning device and control method thereof

Country Status (1)

Country Link
CN (1) CN109793482B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110853106B (en) * 2019-10-29 2022-10-11 苏州佳世达光电有限公司 Oral scanning system and oral scanning image processing method
CN113140031B (en) * 2020-01-20 2024-04-19 苏州佳世达光电有限公司 Three-dimensional image modeling system and method and oral cavity scanning equipment applying same
CN112245046B (en) * 2020-10-09 2021-08-31 广州黑格智造信息科技有限公司 Method for screening undivided gypsum model, method for manufacturing screening jig and screening jig
CN112833813B (en) * 2020-12-30 2022-10-18 北京大学口腔医学院 Uninterrupted scanning method for three-dimensional scanning in multi-tooth-position mouth
CN114463407B (en) * 2022-01-19 2023-02-17 西安交通大学口腔医院 System for realizing oral cavity shaping simulation display by combining 3D image with feature fusion technology

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5684890A (en) * 1994-02-28 1997-11-04 Nec Corporation Three-dimensional reference image segmenting method and apparatus
US8102392B2 (en) * 2003-06-27 2012-01-24 Kabushiki Kaisha Toshiba Image processing/displaying apparatus having free moving control unit and limited moving control unit and method of controlling the same
US7463757B2 (en) * 2003-12-09 2008-12-09 Carestream Health, Inc. Tooth locating within dental images
TWI348120B (en) * 2008-01-21 2011-09-01 Ind Tech Res Inst Method of synthesizing an image with multi-view images
PL2344081T3 (en) * 2008-08-26 2013-08-30 Boiangiu Andy A dental bone implant
CN201492521U (en) * 2009-09-15 2010-06-02 余乃昌 Positioning assisted device for tooth implantation
DE102011014555A1 (en) * 2011-03-21 2012-09-27 Zfx Innovation Gmbh Method for planning a dental implant placement and reference arrangement
US8712137B2 (en) * 2012-05-29 2014-04-29 General Electric Company Methods and system for displaying segmented images
KR101473192B1 (en) * 2013-11-25 2014-12-16 주식회사 디오 method of manufacturing guide stent for dental implant
FR3027505B1 (en) * 2014-10-27 2022-05-06 H 43 METHOD FOR CONTROLLING THE POSITIONING OF TEETH
US10108269B2 (en) * 2015-03-06 2018-10-23 Align Technology, Inc. Intraoral scanner with touch sensitive input
CN104867148B (en) * 2015-04-22 2018-08-10 北京爱普力思健康科技有限公司 Acquisition methods, device and the oral cavity remote diagnosis system of predetermined kind subject image
CN105125162B (en) * 2015-09-17 2017-04-12 苏州佳世达光电有限公司 Oral cavity scanner
CN105631937B (en) * 2015-12-28 2019-06-28 苏州佳世达光电有限公司 Scan method and scanning means
US10136972B2 (en) * 2016-06-30 2018-11-27 Align Technology, Inc. Historical scan reference for intraoral scans
US10380212B2 (en) * 2016-07-27 2019-08-13 Align Technology, Inc. Methods and apparatuses for forming a three-dimensional volumetric model of a subject's teeth
CN108309493B (en) * 2018-02-12 2021-01-05 苏州佳世达光电有限公司 Oral scanner control method and oral scanning equipment
CN108613637B (en) * 2018-04-13 2020-04-07 深度创新科技(深圳)有限公司 Structured light system dephasing method and system based on reference image

Also Published As

Publication number Publication date
CN109793482A (en) 2019-05-24

Similar Documents

Publication Publication Date Title
CN109793482B (en) Oral cavity scanning device and control method thereof
US11246689B2 (en) Intraoral scanning system with registration warnings
JP6605249B2 (en) Tooth analysis device and program thereof
JP6862551B2 (en) How to align the camera or scanning device with respect to the dental auxiliary element
KR20180027492A (en) System and method for scanning anatomical structures and displaying scanning results
KR20100017396A (en) Image processing device and image processing method for performing three dimensional measurements
KR101794561B1 (en) Apparatus for Dental Model Articulator and Articulating Method using the same
CN108700408A (en) Three-dimensional shape data and texture information generate system, shooting control program and three-dimensional shape data and texture information generation method
CN118175973A (en) Intraoral scanner real-time and post-scan visualization
JP2004202069A (en) Image reader and image reading method
JP2003156389A (en) Vibration measuring apparatus and storage medium
CN113645893B (en) Scanning guidance providing method and image processing apparatus therefor
JP2005349176A (en) Jaw movement analyzing method and jaw movement analyzing system
CN111937038B (en) Method for 3D scanning at least a portion of a surface of an object and optical 3D scanner
JP3621215B2 (en) 3D measuring device
US11039905B2 (en) Prosthesis design method and system based on arch line
JP2006017632A (en) Three-dimensional image processor, optical axis adjustment method, and optical axis adjustment support method
JP7469351B2 (en) Identification device, identification method, and identification program
JP6742272B2 (en) Wafer abnormality location detection device and wafer abnormality location identification method
JP2002328008A (en) Method and system for measuring dimensions
JP2007218922A (en) Image measuring device
JP3855254B2 (en) Displacement sensor
TW434495B (en) Image servo positioning and path-tracking control system
JPS61241612A (en) Three-dimensional form measuring system
US20230222652A1 (en) Methods and systems for determining fitness for a dental splint and for capturing digital data for fabricating a dental splint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant