CN118229877A - Data display method, device, equipment and storage medium - Google Patents

Data display method, device, equipment and storage medium

Info

Publication number
CN118229877A
Authority
CN
China
Prior art keywords
image
dimensional reconstruction
caries
data
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410325780.0A
Other languages
Chinese (zh)
Inventor
马超
赵晓波
陈晓军
章惠全
王嘉磊
钱伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shining 3D Technology Co Ltd
Original Assignee
Shining 3D Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shining 3D Technology Co Ltd filed Critical Shining 3D Technology Co Ltd
Priority to CN202410325780.0A priority Critical patent/CN118229877A/en
Publication of CN118229877A publication Critical patent/CN118229877A/en
Pending legal-status Critical Current


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0082 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes
    • A61B5/0088 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes for oral or dental tissue
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0062 Arrangements for scanning
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0073 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by tomography, i.e. reconstruction of 3D images from 2D projections
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0075 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/742 Details of notification to user or communication with user or patient; user input means using visual displays
    • A61B5/7425 Displaying combinations of multiple images regardless of image source, e.g. displaying a reference anatomical image with a live image
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/742 Details of notification to user or communication with user or patient; user input means using visual displays
    • A61B5/7445 Display arrangements, e.g. multiple display units
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00 Dental auxiliary appliances
    • A61C19/04 Measuring instruments specially adapted for dentistry
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • General Physics & Mathematics (AREA)
  • Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • Epidemiology (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a data display method, device, equipment and storage medium. The data display method comprises: acquiring a first acquired image of a target object, the first acquired image being used for three-dimensional reconstruction of the target object; acquiring a second acquired image of the target object, the second acquired image being used for caries identification of the target object; constructing a digitized model of the target object from the first acquired image and the second acquired image, the digitized model comprising three-dimensional reconstruction data; and rendering and displaying the digitized model of the target object. The method provided by the application improves diagnostic efficiency while ensuring caries detection accuracy.

Description

Data display method, device, equipment and storage medium
Technical Field
The present invention relates to the field of data display technologies, and in particular, to a data display method, apparatus, device, and storage medium.
Background
Intraoral caries detection is a common oral examination method for finding and evaluating caries conditions on teeth. At present, common intraoral caries examination methods include visual examination, X-ray examination, fluorescence examination, and examination assisted by imaging equipment. However, visual examination easily misses lesions and cannot detect caries inside the tooth, X-ray examination causes a certain amount of harm to the human body, and imaging equipment cannot automatically identify caries.
With the development of three-dimensional scanning technology, an intraoral scanner is commonly used to acquire three-dimensional data of the teeth and gums in the oral cavity. The intraoral scanner can directly acquire scan data such as three-dimensional morphology data and color texture information, and this scan data intuitively and clearly reflects the intraoral state, providing data for tooth restoration, dental implantation, and the diagnosis and prevention of oral diseases. It is therefore highly desirable to provide a method for intraoral caries detection based on three-dimensional scan data that improves treatment efficiency while ensuring detection accuracy.
Disclosure of Invention
To solve the above technical problems, the embodiments of the present disclosure provide a data display method, apparatus, device, and storage medium that improve diagnostic efficiency while ensuring examination accuracy.
In a first aspect, an embodiment of the present disclosure provides a data display method, including:
Acquiring a first acquisition image of a target object, wherein the first acquisition image is used for three-dimensional reconstruction of the target object;
Acquiring a second acquired image of the target object, the second acquired image being used for caries identification of the target object;
Constructing a digital model of the target object according to the first acquired image and the second acquired image, wherein the digital model comprises three-dimensional reconstruction data;
rendering and displaying the digital model of the target object.
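The four steps above can be sketched as a small pipeline. This is a minimal illustration only: the class and function names, and the stand-in reconstruction and association logic, are assumptions for clarity, not the patent's prescribed data layouts.

```python
from dataclasses import dataclass, field

# Hypothetical holder mirroring the claimed "digitized model": three-dimensional
# reconstruction data plus optional caries identification data.
@dataclass
class DigitizedModel:
    first_reconstruction: list                       # 3-D morphology from the first images
    second_reconstruction: list                      # association with the NIR/UV images
    caries_data: list = field(default_factory=list)  # filled later by a recognition model

def build_digitized_model(first_images, second_images):
    """Sketch of steps S101-S103: reconstruct from the first acquired images,
    then associate the caries-identification images with the reconstruction."""
    first = [("point", i) for i, _ in enumerate(first_images)]  # stand-in reconstruction
    second = [(first, img) for img in second_images]            # stand-in association
    return DigitizedModel(first, second)

model = build_digitized_model(["structured_light_frame", "white_light_frame"], ["nir_frame"])
print(len(model.first_reconstruction), len(model.second_reconstruction))
```

Step S104 (rendering and display) would then consume `model` in the display interface described below.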
Optionally, the second acquired image is captured by the image acquisition device from near-infrared light and/or ultraviolet light projected onto the target object and reflected by it.
Optionally, the constructing a digitized model of the target object according to the first acquired image and the second acquired image includes:
constructing first three-dimensional reconstruction data from the first acquired image;
and constructing second three-dimensional reconstruction data by associating the first three-dimensional reconstruction data with the second acquired image.
Optionally, the digitized model further includes caries identification data, and after constructing the first three-dimensional reconstruction data from the first acquired image, the method further includes:
performing caries identification on the target object through a pre-trained recognition model, based on the first three-dimensional reconstruction data and the second acquired image, and generating the caries identification data.
Optionally, the first acquired image includes a reconstructed image, where the reconstructed image is captured by the image acquisition device from a structured light pattern projected onto the target object and reflected by it.
Optionally, the constructing of the first three-dimensional reconstruction data from the first acquired image includes:
performing three-dimensional reconstruction of the target object based on the reconstructed image to obtain three-dimensional morphology data, where the first three-dimensional reconstruction data includes the three-dimensional morphology data.
Optionally, the first acquired image further includes a texture image, where the texture image is captured by the image acquisition device from white light projected onto the target object and reflected by it;
the constructing of the first three-dimensional reconstruction data from the first acquired image includes:
mapping the texture image onto the three-dimensional morphology data to construct the first three-dimensional reconstruction data.
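The texture-mapping step above can be illustrated with a nearest-pixel lookup. This sketch assumes each vertex already carries a (u, v) coordinate in [0, 1]; the patent does not specify the projection that produces those coordinates, so this is illustrative only.

```python
# Minimal sketch of mapping a texture image onto three-dimensional morphology data.
def apply_texture(vertices_uv, texture):
    """vertices_uv: list of (u, v) pairs in [0, 1]; texture: 2-D list of RGB tuples.
    Returns one colour per vertex by nearest-pixel lookup."""
    h, w = len(texture), len(texture[0])
    colours = []
    for u, v in vertices_uv:
        x = min(int(u * w), w - 1)  # clamp so u == 1.0 stays inside the image
        y = min(int(v * h), h - 1)
        colours.append(texture[y][x])
    return colours

tex = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
print(apply_texture([(0.0, 0.0), (0.9, 0.9)], tex))  # one colour per vertex
```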
Optionally, the display interface for displaying the digitized model of the target object includes a first display area, and the rendering and displaying of the digitized model of the target object includes:
rendering and displaying target three-dimensional reconstruction data in the first display area;
where the target three-dimensional reconstruction data refers to the first three-dimensional reconstruction data, the second three-dimensional reconstruction data and/or third three-dimensional reconstruction data. The third three-dimensional reconstruction data refers to three-dimensional reconstruction data obtained by mapping the caries image in the second acquired image onto a local area of the first three-dimensional reconstruction data and mapping the texture image in the first acquired image onto the remaining areas outside the local area, the local area being a caries area in the caries identification data.
Optionally, the display interface further includes a second display area and a target frame, where the second display area is used to display the second acquired image.
Optionally, after rendering and displaying the target three-dimensional reconstruction data in the first display area, the method further includes:
in response to a movement operation of the target frame on the target three-dimensional reconstruction data, displaying in the second display area, at a preset ratio, the second acquired image corresponding to a target area, where the target area is the area selected by the target frame on the target three-dimensional reconstruction data.
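The target-frame interaction above can be sketched as a crop-and-magnify on the second acquired image. The frame representation and the integer pixel-repetition scaling are assumptions; the patent only requires that the selected region be shown at a preset ratio.

```python
# Sketch: the target frame selects a region; the second display area shows the
# matching crop of the second acquired image at a preset magnification ratio.
def crop_and_scale(image, frame, ratio):
    """image: 2-D list of pixel values; frame: (row, col, height, width);
    ratio: integer magnification by nearest-neighbour pixel repetition."""
    r, c, h, w = frame
    crop = [row[c:c + w] for row in image[r:r + h]]
    scaled = []
    for row in crop:
        wide = [px for px in row for _ in range(ratio)]  # repeat columns
        scaled.extend([wide] * ratio)                    # repeat rows
    return scaled

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(crop_and_scale(img, (0, 1, 2, 2), 2))  # 2x2 crop shown at 2x
```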
Optionally, the display interface further includes a third display area, where the third display area is used to display a texture image in the first acquired image, and does not overlap with the second display area.
Optionally, in response to the movement operation of the target frame on the target three-dimensional reconstruction data, the method further comprises:
And displaying texture images corresponding to the target area in the third display area according to the preset proportion, wherein the second display area and the third display area are synchronously displayed based on the moving operation.
Optionally, the caries identification data includes a caries location of each caries in the target object in the first three-dimensional reconstruction data and a caries serial number for each caries.
Optionally, the rendering and displaying of the digitized model of the target object includes:
displaying the first three-dimensional reconstruction data with the caries serial numbers and caries locations marked; and/or,
displaying the caries identification data as a text prompt; and/or,
broadcasting the caries identification data in real time as a voice prompt during scanning of the target object.
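The text-prompt display mode above can be sketched as simple formatting of the caries identification data. The record layout (serial number, tooth position) is an assumption drawn from the definition of caries identification data given later in this description.

```python
# Sketch of the text-prompt display mode for caries identification data.
def format_caries_prompt(caries_data):
    """caries_data: list of (serial_number, tooth_position) records."""
    if not caries_data:
        return "No caries detected."
    lines = [f"Caries #{seq}: tooth {tooth}" for seq, tooth in caries_data]
    return "\n".join(lines)

print(format_caries_prompt([(1, "16"), (2, "24")]))
```

The same record list could equally feed a voice-prompt backend during scanning.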
In a second aspect, an embodiment of the present disclosure provides a data display apparatus, including:
The first acquisition unit is used for acquiring a first acquisition image of a target object, wherein the first acquisition image is used for three-dimensional reconstruction of the target object;
A second acquisition unit configured to acquire a second acquired image of the target object, the second acquired image being used for caries identification of the target object;
A construction unit, configured to construct a digitized model of a target object according to the first acquired image and the second acquired image, where the digitized model includes three-dimensional reconstruction data;
And the display unit is used for rendering and displaying the digital model of the target object.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
one or more processors;
a storage means for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data display method as described above.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a data display method as described above.
The application provides a data display method, which comprises the following steps: acquiring a first acquisition image of a target object, wherein the first acquisition image is used for three-dimensional reconstruction of the target object; acquiring a second acquired image of the target object, the second acquired image being used for caries identification of the target object; constructing a digital model of the target object according to the first acquired image and the second acquired image, wherein the digital model comprises three-dimensional reconstruction data; the application ensures the checking accuracy by carrying out dental caries checking through the acquired images acquired under different projection lights, displays the digital model constructed based on the images acquired under different conditions in the three-dimensional scanning process or after the scanning is finished, and facilitates the user to know the condition of dental diseases in the oral cavity in time and to visually check in real time by rendering and displaying the digital model, thereby playing the role of preventing oral diseases and effectively improving the treatment efficiency.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a data display method according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a display interface according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of another display interface provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of another display interface provided by an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of a data display device according to an embodiment of the disclosure;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
Specifically, common methods for detecting dental caries in the mouth include:
Visual inspection: the dentist directly observes the condition of the teeth and oral tissue with the naked eye and oral examination instruments such as a mouth mirror and a probe, examining the color, shape and surface texture of the teeth and the presence of caries.
X-ray examination: X-rays are a commonly used examination method that can reveal the internal structure and tissue of the teeth. An X-ray image can help detect the position and extent of cavities within the tooth, and the method is commonly used to find caries hidden under the tooth surface.
Fluorescence examination: fluorescence examination uses special fluorescent dyes or fluorescent cameras to detect changes in tooth surfaces and tiny cavities. The fluorescent dye attaches to the tooth and, when irradiated by a specific fluorescent light source, emits fluorescence of different colors. Caries and lesion sites show different colors and brightness, which helps doctors judge the existence and degree of caries.
Detection-instrument assistance: modern technology also provides computer-aided caries detection instruments such as laser scanners and infrared scanners. These instruments use specific physical principles and computational algorithms to identify cavities and lesions by scanning tooth surfaces or tissue.
In view of the above technical problems, an embodiment of the present disclosure provides a data display method that acquires images captured while the target object reflects different lights, constructs a three-dimensional model of the target object from those images, and determines caries by exploiting the different light-reflection characteristics of carious tooth regions. At the same time, a recognition model classifies and learns from the caries images, identifies caries regions in the three-dimensional reconstruction data, and reminds the user of the recognition result in real time, enabling rapid prevention and diagnosis of intraoral diseases. This is detailed in one or more of the following embodiments.
Specifically, the data display method may be performed by a terminal or a server: the terminal or the server builds a digitized model capable of indicating caries conditions based on acquired images captured under different conditions, and renders and displays the digitized model in real time.
Fig. 1 is a schematic flow chart of a data display method provided in an embodiment of the present disclosure. The data display method is applied to a terminal, where the terminal may be understood as an electronic device such as a computer, and specifically includes steps S101 to S104 shown in Fig. 1:
s101, acquiring a first acquired image of a target object, wherein the first acquired image is used for three-dimensional reconstruction of the target object.
The first acquired image is an image captured by the image acquisition device from white light and a structured light pattern projected onto the target object and reflected by it.
It can be understood that the first acquired image may be captured by a three-dimensional scanning device (hereinafter, scanning device), which may be an intraoral three-dimensional scanner. The scanning device includes a projection component, an illumination component, and an acquisition component. The projection component projects a structured light pattern onto the target object and the illumination component projects white light onto it; the two components may project alternately and periodically or simultaneously, and the specific projection manner is not limited. The acquisition component captures the white light and the structured light pattern reflected by the target object to obtain the first acquired image. The first acquired images generated by the acquisition component may be obtained in real time, in batches of a certain number, or all at once. The first acquired image includes a reconstructed image and a color texture image.
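The periodic alternating projection described above can be sketched as a frame schedule. The frame labels are assumptions; the patent leaves the exact projection timing open.

```python
from itertools import cycle, islice

# Sketch: the projection component (structured light) and illumination component
# (white light) take turns frame by frame when projecting alternately.
def projection_schedule(n_frames, sources=("structured_light", "white_light")):
    return list(islice(cycle(sources), n_frames))

print(projection_schedule(5))
```

Simultaneous projection would instead tag every frame with both sources; the acquisition component separates them when building the first acquired image.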
S102, acquiring a second acquired image of the target object, wherein the second acquired image is used for caries identification of the target object.
The second acquired image is captured by the image acquisition device from near-infrared light and/or ultraviolet light projected onto the target object and reflected by it.
It is understood that the second acquired image may also be obtained by the scanning device. Specifically, the illumination component projects near-infrared light and/or ultraviolet light onto the target object, and the acquisition component captures the light reflected by the target object to obtain the second acquired image. The light source projected onto the target object by the illumination component includes at least one of white light, near-infrared light and ultraviolet light; for example, when the illumination component projects white light and near-infrared light, the two may be projected alternately and periodically or simultaneously, the specific projection manner being unrestricted and determined by the user's needs. Near-infrared imaging is a technique for observing and acquiring structural and functional information of tissue using near-infrared (NIR) light in a specific wavelength range, typically between 600 and 1000 nanometers.
S103, constructing a digital model of the target object according to the first acquired image and the second acquired image.
Wherein the digitized model includes three-dimensional reconstruction data.
It can be understood that, on the basis of S101 and S102, the first three-dimensional reconstruction data of the target object is constructed in real time from the acquired first acquired image, where the first three-dimensional reconstruction data is a three-dimensional model and the target object may be an oral cavity. In one feasible application scenario, the intraoral scanner captures the user's oral cavity in real time, generates acquired images, and transmits them to the terminal; the terminal then builds a three-dimensional model of the user's oral cavity in real time from those images. The three-dimensional model may be a tooth model, which may cover all teeth in the oral cavity, a subset of teeth, a single tooth, or part of a tooth.
Wherein the digitized model includes first three-dimensional reconstruction data and second three-dimensional reconstruction data.
Optionally, in S103, a digitized model of the target object is constructed according to the first acquired image and the second acquired image, which may be specifically implemented by the following steps:
constructing the first three-dimensional reconstruction data from the first acquired image; and constructing the second three-dimensional reconstruction data by associating the first three-dimensional reconstruction data with the second acquired image.
It can be understood that a three-dimensional reconstruction algorithm is used to reconstruct three-dimensional data of the target object from the first acquired image; this data is recorded as the first three-dimensional reconstruction data and can be understood as a three-dimensional model. The specific reconstruction algorithm is not limited and may be chosen according to the user's needs. The second acquired image is then associated with the first three-dimensional reconstruction data to construct the second three-dimensional reconstruction data.
Optionally, the associating of the first three-dimensional reconstruction data with the second acquired image to construct the second three-dimensional reconstruction data may specifically be implemented as follows:
performing position association between the second acquired image and the first three-dimensional reconstruction data to construct the second three-dimensional reconstruction data.
It is understood that the position association may directly map the second acquired image onto the first three-dimensional reconstruction data, or associate the second acquired image with the corresponding specific region of the first three-dimensional reconstruction data. For example, the second acquired image and the first three-dimensional reconstruction data are associated by tooth position, i.e. the position of a specific tooth in the first three-dimensional reconstruction data can be located from the second acquired image; alternatively, the second acquired image is directly mapped onto the first three-dimensional reconstruction data according to the tooth position.
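The tooth-position association described above can be sketched as a lookup table keyed by tooth number. The tooth numbers and data shapes are illustrative assumptions, not the patent's data format.

```python
# Sketch of "position association" by tooth position: each second acquired image
# is keyed by the tooth it covers, so any tooth in the first reconstruction can
# be looked up together with its caries-identification images.
def associate_by_tooth(first_reconstruction, second_images):
    """first_reconstruction: dict tooth -> mesh data;
    second_images: list of (tooth, image) pairs.
    Returns dict tooth -> (mesh, [images])."""
    associated = {tooth: (mesh, []) for tooth, mesh in first_reconstruction.items()}
    for tooth, image in second_images:
        if tooth in associated:            # ignore images with no matching tooth
            associated[tooth][1].append(image)
    return associated

recon = {"16": "mesh16", "24": "mesh24"}
nir = [("16", "nir_a"), ("16", "nir_b"), ("24", "nir_c")]
result = associate_by_tooth(recon, nir)
print(result["16"])
```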
Optionally, the digitized model further includes caries identification data, and after constructing the first three-dimensional reconstruction data from the first acquired image, the method further includes:
And taking the first three-dimensional reconstruction data and the second acquisition image as input of a pre-constructed recognition model, and carrying out caries recognition on the target object through the recognition model to generate caries recognition data. Or performing caries identification on the target object based on the first three-dimensional reconstruction data and the second acquired image through a pre-trained identification model to generate caries identification data.
For example, in one application scenario, a server trains the recognition model. The terminal acquires the trained recognition model from the server, performs caries recognition on the acquired image through the pre-trained recognition model, and displays the recognition result in real time. The acquired image may be obtained by scanning with the terminal itself, in which case the terminal may be a scanning device; or the acquired image may be acquired by the terminal from another scanning device; or the acquired image may be obtained after the terminal performs image processing on a preset image, where the preset image may be obtained by scanning with another scanning device or acquired by the terminal from another scanning device. The other scanning devices are not particularly limited here.
In another application scenario, the server trains the recognition model. Further, the server performs caries identification on the acquired acquisition image through a pre-trained identification model, and displays the identification result in real time. The manner in which the server acquires the acquired image may be similar to the manner in which the terminal acquires the acquired image as described above, and will not be described here again.
In yet another application scenario, the terminal trains the recognition model. Further, the terminal carries out caries recognition on the acquired image through a pre-trained recognition model, and the recognition result is displayed in real time.
It is understood that the first three-dimensional reconstruction data and the second acquired image are used as the input of the recognition model, or the position-correlated second three-dimensional reconstruction data is used as the input of the recognition model; caries recognition is then performed through the recognition model to recognize the caries included in the target object and generate the caries identification data.
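Since the disclosure does not limit the network structure of the recognition model, the following is only a minimal, hedged stand-in for the recognition step: a simple brightness rule replaces the trained detector, and all names, values and the threshold are illustrative assumptions rather than the actual model:

```python
# Illustrative stand-in for the pre-trained recognition model. Carious
# lesions absorb more near-infrared light and appear brighter, so a
# brightness threshold is used here purely as a placeholder for the
# learned detector; a real system would load trained weights instead.

def recognize_caries(reconstruction_data, nir_brightness, threshold=200):
    """Return caries identification data from model data + NIR image.

    nir_brightness: mapping from tooth position to mean near-infrared
    brightness (0-255) observed in the second acquired image.
    """
    caries = []
    for tooth, brightness in sorted(nir_brightness.items()):
        if brightness >= threshold:
            caries.append({
                "sequence_number": len(caries) + 1,  # numbered in order found
                "tooth_position": tooth,
                "location": reconstruction_data.get(tooth),
            })
    return {"count": len(caries), "caries": caries}


regions = {11: "incisor region", 16: "molar region"}
result = recognize_caries(regions, {11: 120, 16: 230})
# tooth 16 exceeds the brightness threshold and is flagged as caries
```

The output mirrors the caries identification data described later: a caries count, and for each caries a sequence number and its location in the first three-dimensional reconstruction data.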
The first acquired image further comprises a reconstructed image and a texture image. The reconstructed image comprises a plurality of images with different stripes, obtained by acquiring light of different preset wavebands; the reconstructed image is acquired by the image acquisition device after the structured light pattern is projected onto the target object and reflected by it. The texture image is acquired by the image acquisition device after white light is projected onto the target object and reflected by it.
It may be appreciated that the plurality of images include a first stripe image and a second stripe image, where the first stripe image is acquired by a monochrome camera in the acquisition component and the second stripe image is acquired by a color camera in the acquisition component; the different cameras acquire different stripe images under light of different wavebands. In one possible case, the first stripe image is obtained by acquiring light of a first waveband, and the second stripe image is obtained by simultaneously acquiring light of a second waveband and light of a third waveband. For example, the light of the first waveband emitted by a first light source in the projection component is recorded as first light; the first waveband is 435-480, the first light is blue light, and the first stripe image acquired by the monochrome camera is a blue stripe image. A second light source in the projection component emits light of two wavebands, 605-600 and 500-560; that is, 605-600 is the second waveband and the second light is red light, and 500-560 is the third waveband and the third light is green light, so that the second stripe image is a red-green stripe image. In another possible case, the plurality of images further includes a third stripe image; in this case each image acquires light of one waveband respectively, that is, the first stripe image is obtained by acquiring light of the first waveband, the second stripe image by acquiring light of the second waveband, and the third stripe image by acquiring light of the third waveband, where the first, second and third wavebands are different from one another. It is understood that the color camera may also acquire color texture images of the target object.
It will be appreciated that the following embodiments will be described taking the example that the first stripe image is a blue stripe image and the second stripe image is a red-green stripe image.
Optionally, the constructing the first three-dimensional reconstruction data according to the first acquired image may specifically be implemented by the following steps:
Performing three-dimensional reconstruction on the target object based on the reconstructed image to obtain three-dimensional morphology data, wherein the first three-dimensional reconstruction data comprises the three-dimensional morphology data; mapping the three-dimensional morphology data based on the texture image, and constructing the first three-dimensional reconstruction data.
Optionally, the reconstructing the target object based on the reconstructed image to obtain three-dimensional morphology data includes:
Taking the first stripe image as a reconstruction map for three-dimensional reconstruction of the target object; taking the second stripe image as a coding map to determine the coding sequence corresponding to each stripe; performing stripe matching on the stripes in the reconstruction map based on the coding sequence to determine a matching relationship; and, based on the matching relationship, carrying out three-dimensional reconstruction from the reconstruction map with a stripe reconstruction algorithm and constructing the three-dimensional morphology data of the target object on that basis with a splicing and fusion algorithm.
It can be understood that the red-green stripe map is used as the coding map: the combination of red (0, 1) and green (0, 1) serves as the coding information, and the position of each stripe, that is, the red-green-blue stripe sequence during projection, can be known from this coding information. The red-green stripe map can thus be used to determine the blue stripes: the red-green stripe map serves as the coding map for determining the sequence of each stripe, and the blue stripe map serves as the reconstruction map for acquiring the three-dimensional coordinates of the object. Specifically, the light of different wavebands acquired by the different cameras is divided into two colors (red and green); the sequence of the blue stripes is identified and matched using a code composed of the two preset colors; a stripe three-dimensional reconstruction algorithm then reconstructs based on the blue stripe map, and the three-dimensional morphology data of the object is constructed with a splicing and fusion algorithm; finally, the colors in the oral cavity are determined based on the color texture map so as to render the teeth, gums and the like, thereby obtaining the tooth model.
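The coding-and-matching step can be sketched as follows; the concrete code words and the three-stripe window are assumptions chosen for illustration, not the encoding actually used by the embodiment:

```python
# Minimal sketch of stripe matching with a two-colour (red/green) code.
# Each window of consecutive stripe colours forms a code word that maps
# to a unique projector stripe index, which lets the blue reconstruction
# stripes be identified. The code book below is purely illustrative.

CODE_WORDS = {
    ("R", "G", "G"): 0,  # each 3-stripe colour window decodes to a
    ("G", "R", "G"): 1,  # unique projector stripe index
    ("G", "G", "R"): 2,
    ("R", "R", "G"): 3,
}

def match_stripes(colour_sequence):
    """Match observed camera stripes to projector indices.

    A sliding window over the observed red/green colours is looked up in
    the code book; each hit gives one camera-to-projector correspondence.
    """
    matches = {}
    for i in range(len(colour_sequence) - 2):
        window = tuple(colour_sequence[i:i + 3])
        if window in CODE_WORDS:
            matches[i] = CODE_WORDS[window]  # camera stripe i -> projector stripe
    return matches


observed = ["R", "G", "G", "R", "G"]
matching = match_stripes(observed)
```

Once each blue stripe is matched to its projector index in this way, triangulation over the matched stripes yields the three-dimensional coordinates that the stripe reconstruction algorithm consumes.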
It will be understood that, based on the above-described S101 and S102, the first three-dimensional reconstruction data and the second acquired image are input to a pre-constructed recognition model, and caries recognition is performed on the target object through the recognition model, that is, whether caries exists in the oral cavity is recognized, and the caries identification data, meaning the recognition result, is generated. The recognition model is obtained by training, in a deep learning manner, on a large number of caries images marked with caries; the specific network structure and training manner of the recognition model are not limited here. In the following embodiments, the second acquired image is taken as an example for detailed description. In addition to the manner in which caries presents on the tooth surface in the three-dimensional model, the recognition model considers the flocculent manner in which caries presents in the near-infrared image, obtained according to the near-infrared imaging principle and the near-infrared absorption characteristic. It is understood that the principle of near-infrared imaging in caries detection relies on the change of tissue structure caused by the differing absorption of near-infrared light by tooth tissue and by carious lesions within the tooth: near-infrared light has a large penetration depth in tooth tissue and is absorbed only weakly by enamel and dentin, but lesions or caries in the enamel change the tissue structure and optical characteristics and increase the absorption of near-infrared light, so that a caries position in the near-infrared image is brighter than positions where no lesion has occurred. Enamel is the hardest tissue of the human body and wraps the crown surface, appearing milky white.
Therefore, the recognition model can recognize tooth internal caries based on the absorption characteristic of near infrared light in the near infrared image, and can recognize tooth surface caries based on texture characteristics, and recognition results are more accurate and comprehensive.
S104, rendering and displaying the digital model of the target object.
It can be understood that, on the basis of S101 to S103, the digitized model of the selected target object is displayed through the display interface, where the display interface can be understood as a visual interface, and the display interface includes the three-dimensional model, the recognition result, and the captured image, that is, at least one of the three-dimensional model, the recognition result, and the captured image can be displayed in the display interface.
In one embodiment, the display interface includes a first display area, where the first display area is used to display the target three-dimensional reconstruction data, and the rendering displays the digitized model of the target object, and specifically may be implemented by the following steps:
Rendering display target three-dimensional reconstruction data in the first display area; the target reconstruction data refer to the first three-dimensional reconstruction data, the second three-dimensional reconstruction data and/or third three-dimensional reconstruction data, the third three-dimensional data refers to three-dimensional reconstruction data obtained by mapping a caries image in the second acquired image on a local area of the first three-dimensional reconstruction data and mapping a texture image in the first acquired image on the rest areas except the local area, and the local area refers to a caries area in the caries identification data.
It is understood that the target three-dimensional reconstruction data is displayed in the first display area, and the target three-dimensional reconstruction data may be a first three-dimensional reconstruction model to which a texture image is globally or locally applied, a second three-dimensional reconstruction model to which a near infrared image is globally or locally applied, or a third three-dimensional reconstruction model to which a near infrared image is applied to a caries region and a texture image is applied to the rest of the caries region.
It is understood that the caries area identified in the caries identification data is determined, the near-infrared image is mapped onto the caries area of the three-dimensional reconstruction data, and the texture image is mapped onto the non-carious areas. In one possible scene, the three-dimensional reconstruction data is first displayed with the texture map; if a caries area is then identified, the near-infrared image is mapped onto the caries area while the texture image remains on the non-carious areas, so as to make the caries condition in the target object clear. Different mapping effects may also be displayed based on the caries recognition result after mapping, or the mapping may be performed directly based on the caries recognition result.
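A minimal sketch of this per-region texture selection (region names and texture labels are illustrative, not taken from the embodiment):

```python
# Sketch of constructing the third kind of mapping: near-infrared texture
# on the identified caries regions, colour texture everywhere else.

def build_third_reconstruction(regions, caries_regions):
    """Choose a texture source for each region of the model."""
    mapping = {}
    for region in regions:
        if region in caries_regions:
            mapping[region] = "near_infrared"   # highlight the lesion
        else:
            mapping[region] = "colour_texture"  # normal appearance
    return mapping


texture_map = build_third_reconstruction(
    regions=["incisor", "canine", "molar"],
    caries_regions={"molar"},
)
```

A renderer would then bind the chosen texture per region before display, which yields exactly the mixed appearance described above.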
Optionally, the display interface further includes a second display area, where the second display area is used to display the second acquired image.
Optionally, the second display area is disposed above the first display area.
Referring to fig. 2, fig. 2 is a schematic diagram of a display interface provided in an embodiment of the present disclosure. The display interface includes a first display area and a second display area, where the first display area may be understood as a main display area for displaying the target three-dimensional reconstruction data (the tooth model), and the second display area may be located above the first display area for displaying the near-infrared image. Based on the three-dimensional model, the areas where caries occurs on the tooth surface can be intuitively known; at the same time, the areas where caries occurs inside the teeth can be intuitively known from the brightness differences between areas in the near-infrared image. In a possible implementation, the display interface includes the first display area and a first identifier, and the second display area is displayed on the first display area in response to a triggering operation on the first identifier. The display position of the second display area in the first display area is not limited; the second display area may be disposed above or below the first display area, and the near-infrared image displayed in the second display area may be a spliced and fused complete tooth image or a partial tooth image, or one or more near-infrared images may be selected by the user from a plurality of near-infrared images for display.
In one embodiment, the display interface further includes a target frame.
Optionally, after the rendering and displaying the three-dimensional reconstruction data of the target in the first display area, the method further includes:
And responding to the moving operation of the target frame on the target three-dimensional reconstruction data, and displaying a second acquired image corresponding to a target area in the second display area according to a preset proportion, wherein the target area is an area selected by the target frame on the target three-dimensional reconstruction data.
It is understood that the display interface further includes a target frame, which may be understood as a selection frame that can be moved over each display area. Specifically, the target frame may slide over the three-dimensional model; in response to a sliding operation of the target frame on the three-dimensional model, the second acquired image corresponding to the target area is displayed in the second display area according to a preset proportion, where the target area is the area selected by the target frame on the three-dimensional model, for example the area where an incisor is located on the tooth model. The preset proportion can be set according to user requirements; for example, the near-infrared image of the incisor can be displayed in equal proportion, that is, the second display area is displayed in synchronization with the target frame.
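The synchronized display at a preset proportion can be sketched as follows, assuming the target frame is a pixel rectangle on the model view (the coordinates and the scaling model are illustrative assumptions):

```python
# Sketch of the preset-proportion display: the target frame selects a
# rectangle on the model view, and the second display area shows the
# corresponding part of the acquired image scaled by the preset ratio.

def region_for_display(frame_origin, frame_size, preset_ratio):
    """Return the display rectangle for the framed target area.

    frame_origin, frame_size: target frame on the model view, in pixels.
    preset_ratio: preset display proportion (1.0 = equal proportion).
    """
    x, y = frame_origin
    w, h = frame_size
    return (int(x * preset_ratio), int(y * preset_ratio),
            int(w * preset_ratio), int(h * preset_ratio))


# equal-proportion display of an incisor region framed at (40, 10), 64x48
rect = region_for_display((40, 10), (64, 48), preset_ratio=1.0)
# the same region shown enlarged at twice the size
zoomed = region_for_display((40, 10), (64, 48), preset_ratio=2.0)
```

Each time the frame moves, recomputing this rectangle and redrawing the second display area keeps the two views synchronized.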
In one embodiment, the display interface further includes a third display area, where the third display area is used to display the texture image in the first acquired image, and the third display area is disposed above the first display area and does not overlap the second display area.
Optionally, in response to a movement operation of the target frame on the target three-dimensional reconstruction data, the method further comprises:
And displaying texture images corresponding to the target area in the third display area according to the preset proportion, wherein the second display area and the third display area are synchronously displayed based on the moving operation.
Referring to fig. 3, fig. 3 is a schematic diagram of another display interface provided in an embodiment of the disclosure, where the display interface further includes a third display area for displaying the color texture image. In one possible case, the display interface includes a second identifier, the first display area and the third display area, and the third display area is displayed in response to a triggering operation on the second identifier. In another possible case, the display interface includes the first display area, the second display area and the third display area, where the second display area and the third display area are both located above the first display area and do not overlap. In this case, in response to the moving operation of the target frame on the three-dimensional model, the color texture image corresponding to the target area is displayed in the third display area according to the preset proportion; the target area is again the tooth area framed by the target frame, and the area framed by the target frame is not limited and can be set according to user requirements. As shown in fig. 3, the second display area and the third display area display images in synchronization in response to the moving operation of the target frame, that is, the second display area displays the near-infrared image of the target area and the third display area displays the color texture image of the target area.
Optionally, the caries identification data includes a number of caries included by the target object, a caries location of each caries in the target object in the first three-dimensional reconstruction data, and a caries sequence number for each caries.
It is understood that the recognition result includes the number of caries included in the target object, the position of each caries in the three-dimensional model, and the sequence number of each caries. The number of caries can be understood as the number of caries areas in the oral cavity, that is, the area suffering from caries is the statistical unit, and a plurality of caries areas may occur on a single tooth. The caries position refers to the position of a caries area in the three-dimensional model, and the caries sequence numbers are determined sequentially based on the order in which the caries are identified.
Optionally, the rendering displays a digitized model of the target object, including:
displaying the first three-dimensional reconstruction data identifying the caries sequence number and the caries location, and/or displaying the caries identification data by means of text prompting, and/or broadcasting the caries identification data in real time by means of voice prompting during scanning the target object.
It can be understood that the recognition result can be identified directly on the three-dimensional model, for example the sequence number, the specific area and the like can be identified for each caries region. A text prompting method can also be adopted: the caries recognition situation and the caries quantity are displayed on the display interface in the form of a text prompt, and the text can be identified on the three-dimensional model or displayed directly at a local position of the display interface. The near-infrared image may also be mapped onto the three-dimensional model, for example mapped directly onto the whole model, or mapped only onto the areas in which caries is recognized so as to highlight them. Further, during scanning of the target object, after the recognition result is generated, the caries condition can be broadcast in real time in the form of a voice prompt, the detected caries quantity can be accumulated, and the caries quantity recognized in real time can be presented numerically.
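The real-time accumulation and prompting described above can be sketched as follows (the class and message format are illustrative assumptions; an actual implementation would route the messages to the voice or text prompt channel):

```python
# Sketch of real-time caries prompting during scanning: each time a
# caries is identified, the running count is accumulated and a prompt
# message is produced for display or voice broadcast.

class CariesPrompter:
    def __init__(self):
        self.count = 0       # accumulated number of identified caries
        self.messages = []   # prompts generated so far

    def on_caries_identified(self, tooth_position):
        """Called whenever the recognition model flags a new caries."""
        self.count += 1
        self.messages.append(
            f"Caries {self.count} identified at tooth {tooth_position}")


prompter = CariesPrompter()
prompter.on_caries_identified(16)
prompter.on_caries_identified(36)
# two caries accumulated; prompts are numbered in identification order
```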
For example, referring to fig. 4, fig. 4 is a schematic diagram of another display interface provided in an embodiment of the present disclosure. The display interface shown in fig. 4 identifies the sequence number of each caries and frames each caries region, so that the user intuitively knows the caries condition. For example, the target object includes 3 caries regions whose sequence numbers are caries 1, caries 2 and caries 3, and each caries region is marked; the caries regions may be framed in the manner shown in fig. 4, where caries 2 and caries 3 are tooth-surface caries and caries 1 is a caries inside the tooth, and the caries quantity may also be determined from the caries sequence numbers. At the same time, the near-infrared image displayed in the second display area shows that the brightness of the caries areas is higher than that of the other areas.
It is understood that the display interface further includes a fourth display area for explaining the caries morphology with a visual animation. Through the animation, the user can understand the caries formation process and caries morphology, and can also compare the animation with the caries regions identified for the user when a similar caries exists in the user's oral cavity.
According to the data display method provided by the embodiments of the present disclosure, the caries condition in the target object is identified by utilizing the near-infrared imaging principle and the recognition model: caries on the tooth surface can be identified from texture, and caries inside the tooth can also be identified, so the identification accuracy is high and the accuracy and completeness of auxiliary diagnosis are improved. The method can also remind the user of the oral caries condition by voice prompt, text prompt, three-dimensional display and other means during scanning or after scanning is finished, and can identify and remind in real time, thereby effectively improving the scanning experience.
Fig. 5 is a schematic structural diagram of a data display device according to an embodiment of the disclosure. The data display apparatus provided in the embodiments of the present disclosure may execute a processing flow provided in the embodiments of the data display method, as shown in fig. 5, where an apparatus 500 includes a first obtaining unit 501, a second obtaining unit 502, a constructing unit 503, and a display unit 504, where:
A first acquiring unit 501, configured to acquire a first acquired image of a target object, where the first acquired image is used for three-dimensional reconstruction of the target object;
a second acquisition unit 502 configured to acquire a second acquired image of the target object, the second acquired image being used for caries identification of the target object;
A construction unit 503, configured to construct a digitized model of the target object according to the first acquired image and the second acquired image, where the digitized model includes three-dimensional reconstruction data;
And a display unit 504, configured to render and display the digitized model of the target object.
Optionally, the second acquired image in the apparatus 500 is acquired by the image acquisition apparatus after near infrared light and/or ultraviolet light is projected onto the target object and reflected therefrom.
Optionally, the digitized model in the apparatus 500 includes first three-dimensional reconstruction data and second three-dimensional reconstruction data.
Optionally, the construction unit 503 is configured to:
constructing the first three-dimensional reconstruction data according to the first acquired image;
and constructing the second three-dimensional reconstruction data by associating the first three-dimensional reconstruction data with the second acquired image.
Optionally, the construction unit 503 is configured to:
And carrying out caries identification on the target object based on the first three-dimensional reconstruction data and the second acquired image through a pre-trained identification model, and generating caries identification data.
Optionally, the first acquired image in the apparatus 500 includes a reconstructed image, where the reconstructed image is acquired by the image acquisition apparatus after the structured light pattern is projected onto the target object and reflected therefrom.
Optionally, the construction unit 503 is configured to:
And carrying out three-dimensional reconstruction on the target object based on the reconstructed image to obtain three-dimensional morphology data, wherein the first three-dimensional reconstruction data comprises the three-dimensional morphology data.
Optionally, the first acquired image in the apparatus 500 further includes a texture image, where the texture image is acquired by the image acquisition apparatus after white light is projected onto the target object and reflected by the target object.
Optionally, the construction unit 503 is configured to:
Mapping the three-dimensional morphology data based on the texture image, and constructing the first three-dimensional reconstruction data.
Optionally, the display interface in the apparatus 500 includes a first display area.
Optionally, the display unit 504 is configured to:
Rendering display target three-dimensional reconstruction data in the first display area;
The target three-dimensional reconstruction data refers to the first three-dimensional reconstruction data, the second three-dimensional reconstruction data and/or third three-dimensional reconstruction data; the third three-dimensional reconstruction data refers to three-dimensional reconstruction data obtained by mapping the caries image in the second acquired image onto a local area of the first three-dimensional reconstruction data and mapping the texture image in the first acquired image onto the remaining areas other than the local area, where the local area refers to the caries area in the caries identification data.
Optionally, the display interface in the apparatus 500 further includes a second display area and a target frame, where the second display area is used to display the second acquired image.
Optionally, the display unit 504 is further configured to:
and responding to the moving operation of the target frame on the target three-dimensional reconstruction data, and displaying a second acquired image corresponding to a target area in the second display area according to a preset proportion, wherein the target area is an area selected by the target frame on the target three-dimensional reconstruction data.
Optionally, the display interface in the apparatus 500 further includes a third display area, where the third display area is used to display a texture image in the first acquired image, and does not overlap the second display area.
Optionally, the apparatus 500 is further configured to:
And displaying texture images corresponding to the target area in the third display area according to the preset proportion, wherein the second display area and the third display area are synchronously displayed based on the moving operation.
Optionally, the caries identification data in apparatus 500 includes a caries location for each caries in the target object in the first three-dimensional reconstruction data and a caries sequence number for the each caries.
Optionally, the apparatus 500 is further configured to:
displaying the first three-dimensional reconstruction data identifying the caries sequence number and the caries location, and/or,
Displaying the caries identification data by means of text prompting, and/or,
And broadcasting the caries identification data in real time in a voice prompt mode in the process of scanning the target object.
The data display device of the embodiment shown in fig. 5 may be used to implement the technical solution of the above-mentioned method embodiments, and its implementation principle and technical effects are similar, which will not be described herein again.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. Referring now in particular to fig. 6, a schematic diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 600 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), wearable electronic devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device 600 may include a processing device (e.g., a central processor, a graphics processor, etc.) 601 that may perform various suitable actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603, so as to implement the data display method of the embodiments described in the present disclosure. In the RAM 603, various programs and data required for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. While fig. 6 shows an electronic device 600 having various devices, it should be understood that not all of the devices shown are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer-readable medium, the computer program comprising program code for executing the method shown in the flowchart, thereby implementing the data display method as described above. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. When executed by the processing device 601, the computer program performs the above-described functions defined in the method of the embodiments of the present disclosure.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, by contrast, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the client and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
Optionally, when the one or more programs described above are executed by the electronic device, the electronic device may also perform other steps described in the above embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the scenario involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software or by means of hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus comprising that element.
The foregoing describes merely specific embodiments of the disclosure, provided to enable those skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. A data display method, comprising:
Acquiring a first acquisition image of a target object, wherein the first acquisition image is used for three-dimensional reconstruction of the target object;
Acquiring a second acquired image of the target object, the second acquired image being used for caries identification of the target object;
Constructing a digital model of the target object according to the first acquired image and the second acquired image, wherein the digital model comprises three-dimensional reconstruction data;
rendering and displaying the digital model of the target object.
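The four steps of claim 1 can be sketched as a minimal pipeline. This is an illustrative sketch only; every identifier (`acquire_first_image`, `DigitizedModel`, `construct_model`, `render`) is hypothetical and does not appear in the disclosure, and the image data are stand-in values.

```python
from dataclasses import dataclass, field

@dataclass
class DigitizedModel:
    """Hypothetical container for the claim-1 digitized model."""
    reconstruction: list = field(default_factory=list)  # three-dimensional reconstruction data

def acquire_first_image():
    # Stand-in for the first acquired image used for three-dimensional reconstruction.
    return [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]

def acquire_second_image():
    # Stand-in for the second acquired image used for caries identification.
    return [[0.2], [0.9], [0.1]]

def construct_model(first_image, second_image):
    # Lift each 2-D sample to a 3-D point and pair it with its caries channel.
    points = [(x, y, 0.0) for x, y in first_image]
    return DigitizedModel(reconstruction=list(zip(points, (c[0] for c in second_image))))

def render(model):
    # Rendering is reduced to a textual summary in this sketch.
    return f"rendered {len(model.reconstruction)} reconstructed points"

model = construct_model(acquire_first_image(), acquire_second_image())
print(render(model))
```

The point of the sketch is only the ordering of claim 1: both images are acquired first, a single digitized model is built from the pair, and rendering operates on that model.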
2. The method according to claim 1, wherein the second acquired image is acquired by an image acquisition device after near infrared light and/or ultraviolet light is projected onto the target object and reflected therefrom.
3. The method of claim 1, wherein the digitized model comprises first three-dimensional reconstruction data and second three-dimensional reconstruction data, the constructing a digitized model of a target object from the first acquired image and the second acquired image comprising:
constructing the first three-dimensional reconstruction data according to the first acquired image;
And constructing the second three-dimensional reconstruction data by associating the first three-dimensional reconstruction data with the second acquired image.
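One plausible way to "associate" the first three-dimensional reconstruction data with the second acquired image, as claim 3 requires, is to project each reconstructed vertex into the image plane of the second (e.g., near-infrared) image and attach the sampled intensity to the vertex. The pinhole model, the intrinsics `fx`, `fy`, `cx`, `cy`, and all names here are assumptions for illustration, not the patented method.

```python
def associate(vertices, intensity_image, fx=1.0, fy=1.0, cx=1, cy=1):
    """Attach a second-image sample to each 3-D vertex (hypothetical sketch)."""
    second = []
    for x, y, z in vertices:
        # Pinhole projection into the second image; intrinsics are placeholders.
        u = int(round(fx * x / z + cx))
        v = int(round(fy * y / z + cy))
        h, w = len(intensity_image), len(intensity_image[0])
        if 0 <= v < h and 0 <= u < w:
            second.append(((x, y, z), intensity_image[v][u]))
    return second

# Toy 3x3 near-infrared intensity image and two reconstructed vertices.
image = [[0, 10, 0],
         [10, 90, 10],
         [0, 10, 0]]
vertices = [(0.0, 0.0, 2.0), (1.0, 1.0, 2.0)]
print(associate(vertices, image))
```

The result pairs each surviving vertex with an intensity, which is one concrete reading of "second three-dimensional reconstruction data": the first reconstruction's geometry annotated with the second image's signal.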
4. The method of claim 3, wherein the digitized model further comprises caries identification data, the method further comprising, after constructing the first three-dimensional reconstruction data from the first acquired image:
And carrying out caries identification on the target object based on the first three-dimensional reconstruction data and the second acquired image through a pre-trained identification model, and generating caries identification data.
5. The method according to claim 3, wherein the first acquired image comprises a reconstructed image, the reconstructed image being acquired by the image acquisition device after a structured light pattern is projected onto the target object and reflected thereby;
said constructing said first three-dimensional reconstruction data from said first acquired image comprises:
And carrying out three-dimensional reconstruction on the target object based on the reconstructed image to obtain three-dimensional morphology data, wherein the first three-dimensional reconstruction data comprises the three-dimensional morphology data.
6. The method of claim 5, wherein the first acquired image further comprises a texture image, the texture image being acquired by the image acquisition device after white light is projected onto the target object and reflected therefrom;
said constructing said first three-dimensional reconstruction data from said first acquired image comprises:
Mapping the three-dimensional morphology data based on the texture image, and constructing the first three-dimensional reconstruction data.
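The texture-mapping step of claim 6 can be sketched as a per-vertex lookup: each vertex of the three-dimensional morphology data carries normalized (u, v) coordinates and receives the texture sample at that location. The data layout and the function name are assumptions for illustration only.

```python
def map_texture(morphology, texture, width, height):
    """Attach a white-light texture sample to every vertex (hypothetical sketch).

    `morphology` is a list of (x, y, z, u, v) tuples with u, v in [0, 1];
    `texture` is a row-major grid of `height` rows by `width` columns.
    """
    textured = []
    for x, y, z, u, v in morphology:
        col = min(int(u * width), width - 1)    # clamp so u == 1.0 stays in range
        row = min(int(v * height), height - 1)
        textured.append(((x, y, z), texture[row][col]))
    return textured

texture = [["red", "green"], ["blue", "white"]]
morphology = [(0, 0, 0, 0.0, 0.0), (1, 0, 0, 0.9, 0.9)]
print(map_texture(morphology, texture, 2, 2))
```

Real intraoral scanners interpolate rather than sample nearest-neighbour, but the sketch captures the claimed structure: morphology first, then texture attached to it to form the first three-dimensional reconstruction data.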
7. The method of claim 4, wherein the display interface displaying the digitized model of the target object comprises a first display area,
The rendering displays a digitized model of the target object, comprising:
Rendering display target three-dimensional reconstruction data in the first display area;
the target three-dimensional reconstruction data refers to the first three-dimensional reconstruction data, the second three-dimensional reconstruction data, and/or third three-dimensional reconstruction data; the third three-dimensional reconstruction data refers to three-dimensional reconstruction data obtained by mapping a caries image in the second acquired image onto a local area of the first three-dimensional reconstruction data and mapping a texture image in the first acquired image onto the remaining areas other than the local area, wherein the local area refers to a caries area in the caries identification data.
8. The method of claim 7, wherein the display interface further comprises a second display area and a target frame, the second display area for displaying the second acquired image, the method further comprising, after rendering the display target three-dimensional reconstruction data in the first display area:
and responding to the moving operation of the target frame on the target three-dimensional reconstruction data, and displaying a second acquired image corresponding to a target area in the second display area according to a preset proportion, wherein the target area is an area selected by the target frame on the target three-dimensional reconstruction data.
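The second-display-area behaviour of claim 8, showing the region selected by the target frame "according to a preset proportion", can be sketched as a crop followed by an integer magnification. The rectangle convention and nearest-neighbour scaling are assumptions; the claim does not fix either.

```python
def crop_and_scale(image, frame, scale):
    """Crop the region selected by `frame` = (r0, c0, r1, c1) and magnify it
    `scale` times by pixel repetition (hypothetical sketch of the 'preset
    proportion' display of claim 8)."""
    r0, c0, r1, c1 = frame
    crop = [row[c0:c1] for row in image[r0:r1]]
    scaled = []
    for row in crop:
        wide = [px for px in row for _ in range(scale)]  # repeat each pixel horizontally
        scaled.extend([wide] * scale)                    # repeat each row vertically
    return scaled

image = [[1, 2], [3, 4]]
print(crop_and_scale(image, (0, 0, 1, 2), 2))
```

Moving the target frame would simply re-invoke this with a new `frame`, which is how the "responding to the moving operation" step keeps the second display area synchronized with the selection.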
9. The method of claim 8, wherein the display interface further comprises a third display area, the third display area being for displaying a texture image in the first acquired image and not overlapping with the second display area, the method further comprising, in response to the moving operation of the target frame on the target three-dimensional reconstruction data:
And displaying texture images corresponding to the target area in the third display area according to the preset proportion, wherein the second display area and the third display area are synchronously displayed based on the moving operation.
10. The method of claim 4, wherein the caries identification data includes a caries location of each caries in the target object in the first three-dimensional reconstruction data and a caries sequence number for each caries, the rendering displaying a digitized model of the target object, comprising:
Displaying the first three-dimensional reconstruction data identifying the caries sequence number and the caries location, and/or,
Displaying the caries identification data by means of text prompting, and/or,
And broadcasting the caries identification data in real time in a voice prompt mode in the process of scanning the target object.
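The text-prompt variant of claim 10 amounts to formatting the caries identification data, one line per caries sequence number and location. The dictionary structure and function name below are assumptions for illustration; the voice-prompt variant would feed the same string to a speech synthesizer.

```python
def caries_text_prompt(caries_data):
    """Format caries identification data as a text prompt (hypothetical sketch).

    `caries_data` maps a caries sequence number to its location in the
    first three-dimensional reconstruction data.
    """
    lines = [f"Caries #{num} at position {pos}" for num, pos in sorted(caries_data.items())]
    return "\n".join(lines)

print(caries_text_prompt({2: (4.0, 1.5, 0.2), 1: (1.0, 2.0, 0.5)}))
```

Sorting by sequence number keeps the prompt stable across scans, which matters if the prompt is broadcast incrementally while scanning is still in progress.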
11. A data display device, comprising:
The first acquisition unit is used for acquiring a first acquisition image of a target object, wherein the first acquisition image is used for three-dimensional reconstruction of the target object;
A second acquisition unit configured to acquire a second acquired image of the target object, the second acquired image being used for caries identification of the target object;
A construction unit, configured to construct a digitized model of a target object according to the first acquired image and the second acquired image, where the digitized model includes three-dimensional reconstruction data;
And the display unit is used for rendering and displaying the digital model of the target object.
12. An electronic device, the electronic device comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data display method of any of claims 1-10.
13. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements a data display method according to any one of claims 1-10.
CN202410325780.0A 2024-03-21 2024-03-21 Data display method, device, equipment and storage medium Pending CN118229877A (en)

Publications (1)

Publication Number Publication Date
CN118229877A 2024-06-21

Family ID: 91498892


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination