CN117252825A - Dental caries identification method and device based on oral panoramic image - Google Patents

Dental caries identification method and device based on oral panoramic image Download PDF

Info

Publication number
CN117252825A
CN117252825A CN202311169656.1A CN202311169656A CN117252825A CN 117252825 A CN117252825 A CN 117252825A CN 202311169656 A CN202311169656 A CN 202311169656A CN 117252825 A CN117252825 A CN 117252825A
Authority
CN
China
Prior art keywords
image
caries
oral
pixel
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311169656.1A
Other languages
Chinese (zh)
Inventor
葛翘诚
王怡卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Luohu Peoplel's Hospital
Original Assignee
Shenzhen Luohu Peoplel's Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Luohu Peoplel's Hospital filed Critical Shenzhen Luohu Peoplel's Hospital
Priority to CN202311169656.1A priority Critical patent/CN117252825A/en
Publication of CN117252825A publication Critical patent/CN117252825A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Databases & Information Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a caries identification method and related equipment based on an oral panoramic image, wherein the method comprises the steps of obtaining the oral panoramic image, and carrying out preprocessing operation on the oral panoramic image to obtain a target oral panoramic image, wherein the preprocessing operation comprises a denoising operation and/or a brightness adjustment operation; inputting a target oral panoramic image into a trained image segmentation network model, and determining a tooth area in the oral panoramic image through the image segmentation network model; inputting the tooth area into a trained caries detection network model, and determining the caries in the oral cavity full view image through the caries detection network model. According to the method, the target image is obtained by preprocessing the full-scene image of the oral cavity, then the oral cavity area is obtained by image segmentation of the target image, finally the decayed tooth in the target image is determined by detecting the network model, and the accuracy and the efficiency of decayed tooth identification are improved.

Description

Dental caries identification method and device based on oral panoramic image
Technical Field
The application relates to the technical field of biology, in particular to a dental caries identification method based on an oral panoramic image and a related device.
Background
Early identification of oral health problems, particularly caries, is critical to preventing further dental damage and providing timely treatment. Traditional caries identification methods generally rely on subjective judgment of dentists, and are susceptible to subjective opinion and experience, resulting in misdiagnosis or missed diagnosis. Traditional caries detection methods require specialized dental expertise, limiting the popularity and range of application of detection methods. And for reasons of slower speed of manual interpretation and analysis, traditional methods are not well suited for large-scale oral health screening.
There is thus a need for improvements and improvements in the art.
Disclosure of Invention
The technical problem to be solved by the application is to provide a dental caries identification method and a related device based on oral cavity full-scope images aiming at the defects of the prior art.
To solve the above technical problem, a first aspect of embodiments of the present application provides a caries identification method based on an oral panoramic image, the method including:
acquiring an oral panoramic image, and performing preprocessing operation on the oral panoramic image to obtain a target oral panoramic image, wherein the preprocessing operation comprises a denoising operation and/or a brightness adjustment operation;
Inputting a target oral panoramic image into a trained image segmentation network model, and determining a tooth area in the oral panoramic image through the image segmentation network model;
inputting the tooth area into a trained caries detection network model, and determining the caries in the oral cavity full view image through the caries detection network model.
According to the technical means, the pretreatment operation of denoising operation and/or brightness adjustment enables the oral cavity full-view film to be clearer, and a better target image is provided for later analysis. And secondly, inputting the target image into a trained image segmentation network model, and determining the tooth area in the oral panoramic sheet image. Compared with the traditional manual identification mode, the image identification method using the deep learning technology is more accurate and improves the working efficiency. Finally, the dental caries in the outlet cavity full-scope image is accurately identified by entering the determined dental area into a trained caries detection network model. The detection method based on the deep learning technology improves the efficiency and accuracy of caries detection.
In one implementation manner, the caries identification method based on the oral panorama image, wherein the denoising operation specifically comprises:
For each pixel in the oral cavity full-scene image, determining a denoising window corresponding to the pixel;
determining a pixel median value corresponding to the denoising window, and calculating the weight of the pixel based on the pixel median value to obtain the weight corresponding to each pixel;
denoising the oral cavity full-scene image based on the weight of each pixel.
According to the technical means, noise in an image can be eliminated or reduced, so that the image quality and the readability are improved. In identifying caries, noise in the picture may make the process of identifying caries more difficult. The optimized image can accurately extract image information, especially the information of the tooth parts, through the weight obtained by calculation, so that medical staff can more accurately identify and determine dental caries in the oral cavity.
The weight of each pixel is obtained through the denoising window and the pixel median calculation by the technical means, so that the denoising operation is more accurate and personalized. The denoising method distributes the weight of each pixel, so that the noise in the image is processed more accurately, and the overall quality and definition of the image are improved.
In one implementation, the denoising window calculates a median value of pixels, wherein calculating the weights of the pixels based on the median value of pixels specifically includes:
Obtaining the maximum value and the minimum value of pixels in the denoising window, and removing pixels corresponding to the maximum value and the minimum value from the denoising window to obtain a candidate pixel set corresponding to the denoising window;
and determining a pixel median value corresponding to the denoising window according to the pixel values of all pixels in the candidate pixel set.
According to the technical means, the method helps to reduce or eliminate image noise and enhance important characteristics of images. Possible noise disturbances are removed by removing the maximum and minimum pixel values, which noise usually appears as abnormally high or low pixel values. These two types of pixels typically have a negative impact on the overall quality of the image, and by eliminating them from the candidate set of pixels, the impact of noise on the pixel weight calculation can be reduced, resulting in more accurate pixel weights. While calculating the pixel median can provide a more accurate representation of the pixel value than a simple average pixel value because the median is not affected by outliers or noisy pixels. Therefore, when the weight calculation is carried out, the obtained result is more accurate and stable, so that the emphasis and retention capacity of important features in the image are improved, and the definition and quality of the image are further improved.
In one implementation manner, the caries identification method based on the oral panorama image, wherein the brightness adjustment operation specifically comprises:
calculating gray probability distribution of the oral cavity full-scene image;
calculating a brightness adjustment parameter based on the gray probability distribution;
and carrying out brightness adjustment on the oral cavity full-view image based on the brightness adjustment parameters to obtain the oral cavity full-view image after brightness adjustment.
According to the technical means, the accuracy and the efficiency of caries identification are improved. The brightness change and the image quality in the image are reflected more accurately through the calculation of the gray probability distribution, so that the image can be adjusted more accurately in brightness, and the method can be suitable for all kinds of oral cavity full-scene images. The brightness adjustment can optimize the image quality, so that the area with caries is more obvious and is convenient for accurate identification.
In one implementation manner, the caries identification method based on the oral panoramic image is characterized in that the target oral panoramic image is input into a trained image segmentation network model, a tooth area is determined through the image segmentation network model, and an objective function of the image segmentation network model is as follows:
Wherein,representing the number of pixels in the image of the full view of the mouth, etc>And->Representing the probability that the ith pixel belongs to a dental region and a non-dental region, respectively, j represents the weight, d i,tooth And d i,nontooth Representing the distance between the i-th pixel and the pixel center of the tooth area and the pixel center of the non-tooth area, respectively.
In one implementation, the objective function employed in the training of the caries detection network model is:
wherein, psi represents the grid number to be identified by the caries detection network model,determining function, loc, indicating whether caries is present in the current detection grid k And->Values of kth parameters respectively representing the caries site and the caries true site identified by the caries detection network model,/->A serial number b', alpha indicating whether the current detection grid contains a true caries species 1 And alpha 2 All represent control weights.
A second aspect of embodiments of the present application provides a caries-recognizing apparatus based on oral cavity panoramic film images, the apparatus including:
the preprocessing module is used for acquiring an oral panoramic image and preprocessing the oral panoramic image to obtain a target oral panoramic image, wherein the preprocessing operation comprises a denoising operation and/or a brightness adjustment operation;
The region segmentation module is used for inputting the target oral panoramic image into a trained image segmentation network model, and determining tooth regions in the oral panoramic image through the image segmentation network model;
the dental caries detection module is used for inputting the tooth area into a trained caries detection network model, and determining dental caries in the oral cavity full-scope image through the caries detection network model.
A third aspect of the embodiments provides a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps in a dental caries recognition method based on oral panoramic film images as described in any of the above.
A fourth aspect of the present embodiment provides a terminal device, including: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
the processor, when executing the computer readable program, implements the steps in the dental caries identification method based on an oral panorama image as described in any one of the above.
The beneficial effects are that: compared with the prior art, the application provides a caries identification method and a related device based on an oral panoramic image, wherein the method comprises the steps of obtaining the oral panoramic image, and carrying out preprocessing operation on the oral panoramic image to obtain a target oral panoramic image, wherein the preprocessing operation comprises a denoising operation and/or a brightness adjustment operation; inputting a target oral panoramic image into a trained image segmentation network model, and determining a tooth area in the oral panoramic image through the image segmentation network model; inputting the tooth area into a trained caries detection network model, and determining the caries in the oral cavity full view image through the caries detection network model. According to the method, the high-quality target image is obtained through denoising and/or brightness adjustment pretreatment on the full-scene image of the oral cavity, then the image of the target image is segmented to obtain the oral cavity area, the image quality is optimized, the area with caries is more obvious, the caries in the target image is finally determined through the detection network model, and the accuracy and the efficiency of caries identification are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without creative effort for a person of ordinary skill in the art.
Fig. 1 is a flowchart of a caries identification method based on an oral panorama image provided by the present application.
FIG. 2 is a schematic diagram of the structure of a caries detection apparatus based on oral cavity full view images provided by the present application.
Fig. 3 is a schematic structural diagram of a terminal device provided in the present application.
Detailed Description
The application provides a dental caries identification method based on oral cavity full-scope film images and a related device, and in order to make the purposes, technical schemes and effects of the application clearer and more definite, the application is further described in detail below by referring to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
As used herein, the singular forms "a", "an", "the" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any element and all combination of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be understood that the sequence number and the size of each step in this embodiment do not mean the sequence of execution, and the execution sequence of each process is determined by the function and the internal logic of each process, and should not constitute any limitation on the implementation process of the embodiment of the present application.
It has been found that early identification of oral health problems, particularly caries, is critical to preventing further dental damage and providing timely treatment. Traditional caries identification methods generally rely on subjective judgment of dentists, and are susceptible to subjective opinion and experience, resulting in misdiagnosis or missed diagnosis. Traditional caries detection methods require specialized dental expertise, limiting the popularity and range of application of detection methods. And for reasons of slower speed of manual interpretation and analysis, traditional methods are not well suited for large-scale oral health screening.
In order to solve the above-mentioned problem, in an embodiment of the present application, an oral panoramic image is obtained, and a preprocessing operation is performed on the oral panoramic image to obtain a target oral panoramic image, where the preprocessing operation includes a denoising operation and/or a brightness adjustment operation; inputting a target oral panoramic image into a trained image segmentation network model, and determining a tooth area in the oral panoramic image through the image segmentation network model; inputting the tooth area into a trained caries detection network model, and determining the caries in the oral cavity full view image through the caries detection network model. According to the method, the target image is obtained by preprocessing the full-scene image of the oral cavity, then the oral cavity area is obtained by image segmentation of the target image, finally the decayed tooth in the target image is determined by detecting the network model, and the accuracy and the efficiency of decayed tooth identification are improved.
The application will be further described by the description of embodiments with reference to the accompanying drawings.
The embodiment provides a caries identification method based on an oral panorama image, as shown in fig. 1, the method comprises the following steps:
s10, acquiring an oral panoramic image, and performing preprocessing operation on the oral panoramic image to obtain a target oral panoramic image, wherein the preprocessing operation comprises a denoising operation and/or a brightness adjustment operation.
Specifically, the full view image of the mouth, also known as full mouth dental film or oral panoramic X-ray image, is a form of medical image commonly used in the oral science for showing a comprehensive view of teeth, jawbone, nasal cavity and other adjacent structures. The full-view image of the oral cavity has the advantage that the structure of the oral cavity can be completely and accurately revealed by one image, and comprehensive reference information is provided for clinicians.
The oral panorama image is saved as a high definition digital image file, such as in JPEG, PNG, TIFF, etc., where the number of bits per pixel is typically 8 or 16 bits, reflecting a sufficient gray scale range to reveal the microstructure within the oral cavity. Just like other medical image images, the size (resolution) of the full-view image of the mouth is dependent on the specific diagnostic requirements, and is also affected by the settings of the acquisition device. Typically, the size of the image may reach 2000 x 2000 pixels or even higher for human oral structures to ensure adequate sharpness.
After the oral panoramic image is obtained, the oral panoramic image is preprocessed to improve the quality and ensure that the subsequent steps can be smoothly carried out. Common preprocessing operations include denoising operations and brightness adjustment operations. The denoising operation may employ median filtering, gaussian filtering, or the like to remove various noises in the image due to photographing apparatuses or environmental influences. The brightness adjustment operation compensates for the problem of too dark or too bright image caused by insufficient or too strong light during shooting by adjusting the contrast and brightness of the image.
The quality of the oral cavity full-view image is standardized through preprocessing operation, and a foundation is provided for subsequent deeper image processing and analysis. The preprocessing operation can flexibly select denoising and brightness adjustment according to the needs, or the denoising and the brightness adjustment can be performed together. After the pretreatment operation, the finally obtained oral panoramic image is a target oral panoramic image, and the target oral panoramic image not only keeps all important information in the original image, but also has higher usability.
In one implementation, the denoising operation specifically includes:
s11, for each pixel in the oral cavity full-view image, determining a denoising window corresponding to the pixel;
s12, determining a pixel median value corresponding to the denoising window, and calculating the weight of the pixel based on the pixel median value to obtain the weight corresponding to each pixel;
s13, denoising the oral cavity full-view image based on the weight of each pixel.
Specifically, in step S11, the target oral panoramic sheet image contains the individual' S complete oral structure information. Each pixel of the target oral panoramic sheet image has unique position coordinates and corresponding gray values, and the information carried by the pixels is closely related to the spatial position and adjacent pixels in the oral panoramic sheet image. Thus, this step processes each pixel in the orographic image, emphasizing the uniqueness of each pixel and its relationship to surrounding pixels.
The denoising window is a square or rectangular area with a preset size, and the pixel is positioned at the center of the window. From a technical perspective, the window size may be adjusted according to specific denoising requirements and unique conditions of pixel distribution. For example, in some complex or dense areas, smaller windows are required to more finely eliminate noise; while in some more singular or loose areas, a larger window is needed to achieve more efficient denoising. In a specific implementation of the present application, the denoising window is determined centering on each pixel in the target oral panoramic sheet image, and the denoising window size may be adjusted according to specific applications, for example, 3x3, 5x5, and the like.
In step S12, each denoising window covers an area around the target pixel, and the gray values of the pixels in the area form a data set, so that the median value of the pixels corresponding to the denoising window can be determined by determining the median value of the data set, wherein the median value of the pixels reflects the average brightness condition of the area around the target pixel. In addition, the weighting application reflects the importance degree of each pixel in the target oral panoramic image, and the manner of determining the weighting according to the pixel median value determined by the denoising window corresponding to the pixel can be various manners, for example, the value of each pixel is compared with the corresponding pixel median value, if the pixel value is far away from the median value, the pixel value can be noise, and the weighting of the pixel is relatively low; otherwise, the weight is higher, so that the influence of noise pixels on the final image quality can be controlled by the median.
In one implementation manner of the embodiment of the present application, the weight determination process may adopt a weight allocation manner. Specifically, each pixel in the denoising window is considered one by one, compared with the median value of the pixel in the window, and if the value of the pixel is close to the median value, the pixel is considered to be more reliable, and is given high weight; conversely, if the value of this pixel is far from the median, it is considered a possible noise, giving it a low weight.
Through the mode, the weight value corresponding to each pixel is obtained through calculation. This weight value represents the importance of the pixel in the image denoising process, thereby providing beneficial information for subsequent image processing steps. By using the weight-based processing mode, the quality of the oral panoramic film image is further improved, so that the diagnostic information obtained from the images is more accurate and detailed.
In an implementation manner of the embodiment of the present application, the determining a median value of pixels corresponding to the denoising window specifically includes:
s121, obtaining the maximum value and the minimum value of pixels in the denoising window, and removing pixels corresponding to the maximum value and the minimum value from the denoising window to obtain a candidate pixel set corresponding to the denoising window;
S122, determining a pixel median value corresponding to the denoising window according to the pixel values of all pixels in the candidate pixel set.
Specifically, in steps S121 and S122, the specific implementation manner of the present application is that, based on the maximum value and the minimum value in the denoising window removal window obtained in step S11, the median τ in the remaining pixels is obtained, and the calculation formula for obtaining the median τ is as follows:
τ=Median(pixels)
wherein, pixels is the pixel remaining after the maximum value and the minimum value in the denoising window are removed, namely the candidate pixel set; the Median function is used to obtain the Median value.
Calculating the weights of the pixels based on the median pixel values to obtain weights corresponding to the pixels, wherein the specific implementation manner of the application includes:
calculating the weight of each pixel, wherein the weight is calculated in the following way:
where N represents the number of pixels of the pixels in pixels; n=1, 2, …, N; n' =1, 2, …, N;the calculation mode of (a) is as follows:
wherein delta is n The absolute difference is calculated by the following steps:
Δ n =|pixels n -τ|
wherein, pixels n Representing an nth pixel among pixels remaining after the denoising window removes the maximum value and the minimum value in the window;the mean value of the absolute difference is calculated by the following steps:
in step S13, the specific process of denoising the belonging oral panorama image based on the weights corresponding to the pixels obtained in step S12 includes: all pixels of the oral full-scene image are traversed, and new pixel values are calculated according to the weights of the pixels and the pixel values in the denoising window in which the pixels are located. In general, the higher the weight of a pixel in a window, the greater its effect on the new pixel value. In other words, pixels with high weights have a greater decisive influence on the denoising effect.
For example, a simple weighted average denoising algorithm is used, where the weight of each pixel is proportional to its gray value. In this case, all pixels are traversed and multiplied by their weights in the denoising window for each pixel, then summed, and finally the sum value divided by the sum of the weights, resulting in a new pixel value. The weighted average denoising algorithm described above is advantageous for preserving edge information of an image, because an edge is typically composed of pixels with higher gray values.
And replacing the pixel value of the original oral cavity full-scene image with the new pixel value. The new oral panorama image is the image after noise elimination.
By the denoising method based on the weight, complex information in the oral panoramic image can be effectively utilized, and the quality of the denoised image is improved, so that the accuracy and efficiency of dental diagnosis are further improved.
In a specific implementation of the present application, the denoised pixel is obtained according to the calculated weight, where the formula is:
wherein w is n' And omega is the weight of the pixel, and omega is an oral panoramic image slice obtained after denoising based on the weight.
In summary, the panoramic image of the oral cavity may be affected by negative factors such as noise and artifacts, resulting in poor image quality. Noise in the image can be effectively reduced through denoising treatment, so that the oral cavity structure is clearer and more discernable, and the quality of the image is improved. The denoised image is easier to identify and analyze by subsequent processing algorithms. In the caries recognition method, if the image quality is poor, the extraction and analysis of the features may be affected, and thus the accuracy of recognition may be affected.
In one implementation, the brightness adjustment operation specifically includes:
s101, calculating gray probability distribution of the oral cavity full-scene image;
s102, calculating brightness adjustment parameters based on the gray probability distribution;
and S103, adjusting the brightness of the oral cavity full-view image based on the brightness adjustment parameters to obtain the oral cavity full-view image after brightness adjustment.
In particular, the brightness adjustment operation typically involves increasing or decreasing the brightness value of the oral panoramic image, and in general, the overall brightness of the image is selected primarily depending on the initial quality of the image, the underlying brightness of the image not altering the authenticity of the presented oral internal structure. The brightness is improved as much as possible while the image quality is ensured, and the image is corrected. The image brightness is adjusted automatically or manually by using image preprocessing software. In the brightness adjustment process, contrast adjustment can be performed as necessary. Improving the contrast of the image can increase the difference between two or more colors, so that the details are clearer, and various structures and diseases in the oral cavity can be more conveniently identified and analyzed. And after the brightness is adjusted to a proper degree, previewing whether the modified image meets the requirements, wherein the image meeting the requirements is the obtained target oral panoramic film image. This brightness adjusted panoramic oral image will provide a clearer, more easily interpretable visual source for subsequent oral diagnostic analysis.
Further, in step S101, the target oral panorama sheet image is converted into a grayscale image. This means that the original color image is converted into an image containing only gray information, in which process the color information is lost, but the important information of structure and shape is still preserved. The main purpose of converting an image into a gray image is to improve the efficiency of subsequent processing, simplify the computational complexity, and make the calculation of gray probability distribution in subsequent steps easier and faster.
The calculation of the gray probability distribution of this gray image is done by counting the frequency of occurrence of the individual gray levels in the image. In the field of image processing or analysis, the gray probability distribution counts all pixel values of an image, and the result is a graph or function representing all possible gray levels and corresponding frequencies of occurrence in the image. For a given image, the gray probability distribution provides rich information, and the contrast, brightness, structure and other characteristics of the image can be revealed.
In the specific implementation of the method, the gray probability distribution of the denoised oral cavity full-scene image is calculated at first, and then the cut-off is carried out based on the gray probability distribution. The formula for specifically calculating the gray probability distribution of the denoised oral cavity full-scene image is as follows:
Wherein p is l Representing the pixel probability of the gray value of l in the denoised oral cavity full-scene image, wherein l=0, 1,2, … and 255; count (l) represents the number of pixels having a gray value of l;representing the number of pixels of the oral cavity full view after denoising.
The gray probability distribution is truncated, wherein the truncation mode is as follows:
wherein, delta is a cut-off threshold value, and the calculation mode is as follows:
wherein,representation->Is the maximum value of (2);
in step S102, a luminance adjustment parameter is calculated based on the gradation probability distribution obtained in step S101. The brightness adjustment parameter is mainly a specific value for changing the brightness of an image. The brightness adjustment parameter may be the slope and intercept of a linear function that is used to map each gray level of the image to a new level, the mapped gray level more closely matching the target brightness requirement. In more complex cases, the brightness adjustment parameter may be a parameter of a nonlinear function to purposefully adjust the brightness of a particular region.
In order to obtain the brightness adjustment parameters, preset brightness requirements and visual effects of the image need to be satisfied. Ideally, a moderate brightness adjustment parameter setting needs to be found based on the criteria and habits of human eye observation, but this approach typically requires multiple attempts and requires empirical judgment by image processing specialists. In addition, some automatic parameter optimization methods, such as gradient descent method or genetic algorithm, can be adopted, and the objective is to find a parameter that enables the image to meet the preset requirement and has the best visual effect.
In a specific implementation manner of the present application, according to the truncated gray probability distribution, a formula for calculating a brightness adjustment parameter corresponding to each gray is:
wherein, gamma l A pixel brightness adjustment parameter representing a gray value of l; ρ is a luminance adjustment parameter cutoff value, and ρ=0.5 in the specific implementation of the present application.
In step S103, the brightness adjustment parameter obtained in step S102 is applied to the oral cavity full-view image. The application to the image is done by a point-by-point operation, i.e. each pixel finds its corresponding value in the brightness adjustment parameters according to its gray level, adjusting its gray level. This step requires the same number of operations as the number of pixels.
The luminance-adjusted oral panoramic image is easier to interpret than the denoised oral panoramic image because it has been adjusted to a form that better meets the preset luminance requirements. In some applications where depth information is needed, such as diagnosis of dental conditions, the brightness adjusted image can provide more useful information.
In a specific implementation of the present application, the adjustment is based on brightness adjustment parametersThe formula for the brightness of each pixel is:
Wherein,representing pixels with gray values of l in the denoised oral cavity full-scene image; obtaining a preprocessed oral panoramic image after brightness adjustment>
The brightness adjustment operation can enable the oral cavity structure in the oral cavity panoramic sheet image to be more clearly visible, highlight details of related areas, and provide more visual information for subsequent feature extraction and analysis. The brightness adjustment can enable the oral panoramic image to be more consistent in brightness and darkness, and is beneficial to establishing consistency standards among different oral panoramic images, so that repeatability of a caries identification method is improved. The oral cavity panoramic sheet image with brightness being adjusted can more highlight the characteristics of teeth, so that oral cavity problems such as decayed tooth lesions and the like can be more easily identified and analyzed, and the identification accuracy is improved.
S20, inputting the target oral cavity panoramic film image into a trained image segmentation network model, and determining the tooth area in the oral cavity panoramic film image through the image segmentation network model.
Specifically, the trained image segmentation network model functions to determine dental regions in an input image of the full view of the mouth. The image segmentation network model has generalization capability, can process various oral panoramic film images, and segments the input oral panoramic film images into target areas, namely tooth areas according to preset model structures and parameter settings.
Generally, the process of segmenting the full-view image of the mouth cavity includes: and inputting the oral cavity full-scene image into an image segmentation network model. The image segmentation network model is based on some well-known deep learning architecture, such as U-Net, mask R-CNN, or other convolutional neural network.
Inside the model, the oral panoramic slice image is first processed through a series of convolution layers, an activation function layer, and a pooling layer. These layers will extract and learn key features in the image. For example, the basic structures, such as edges and colors, are first identified by the low-level feature extraction layer. While the intermediate and advanced feature extraction layers will identify more complex characteristics such as tooth shape, size, and relative position.
At the end of the network, a softmax or sigmoid function is applied, the probability of whether each image segmentation network model is a tooth region can be calculated, and the image segmentation network model is segmented according to a set threshold value, so that the tooth region in the oral panoramic image is determined.
In one implementation, for an oral panoramic sheet image, it is assumed that there are 32 teeth in the image. First, this image is input into a pre-trained U-Net network model. Inside the model, the original input image is processed via a series of convolution layers, activation function layers, and pooling layers, learning key features in the image. After network processing, the probability of whether each pixel point is a tooth area is mapped out. A threshold, such as 0.5, is set, and pixels with a probability higher than 0.5 are marked as tooth areas to form a segmentation map. By analyzing the segmentation map, 32 teeth in the outlet cavity full-scene image can be identified and labeled.
After the tooth areas are determined, further subsequent analysis and processing of these areas is performed, such as tooth identification, tooth arrangement, tooth health analysis, etc. By means of the image segmentation technology, efficient and accurate tooth area positioning is achieved, further analysis of teeth is carried out, efficiency of the oral medical industry is improved, more accurate diagnosis information is provided for doctors, and better medical services are provided for patients.
In an implementation manner of the present application, the objective function of the image segmentation network model is:
wherein,representing the number of pixels in the image of the full view of the mouth, etc>And->Representing the probability that the ith pixel belongs to a dental region and a non-dental region, respectively, j represents the weight, d i,tooth And d i,nontooth Representing the distance between the i-th pixel and the pixel center of the tooth area and the pixel center of the non-tooth area, respectively.
d i,tooth And d i,nontooth The calculation formula of (2) is:
d i,tooth =(x i -c tooth ) 2
d i,nontooth =(x i -c nontooth ) 2
wherein x is i The value of the ith pixel of the preprocessed oral panoramic image; c tooth And c nontooth Respectively, a pixel center divided into a tooth region and a pixel center of a non-tooth region.
Generally, an iterative solution process for optimizing segmentation for an image segmentation network model includes the steps of:
updating the probability that each pixel belongs to the tooth area and the non-tooth area, and calculating the probability of whether each pixel belongs to the tooth area or not based on the image segmentation network model. The probability is updated using a preset probability model (e.g., bayesian model, markov model, etc.) based on the current segmentation result and information about pixel characteristics, e.g., color, texture, edges, etc.
The distance between each pixel and the pixel center divided into the tooth region and the pixel center of the non-tooth region is updated in order to update the spatial positional relationship of each pixel with respect to the division result. The distance of the pixel center can be calculated based on Euclidean distance, and can also be based on other similarity measurement methods. The pixel centers of the dental region and the non-dental region may be determined by a clustering algorithm (e.g., K-means algorithm).
The two steps are repeatedly executed, and the accuracy of the segmentation result is continuously improved through an iterative optimization process. The segmentation process is considered complete when the pixel centers of the dental region and the non-dental region no longer change, i.e. the iteration reaches convergence. At this time, the obtained segmentation results clearly distinguish the tooth regions in the preprocessed oral panoramic sheet image from other regions.
In a specific implementation of the present application, the formula for updating the probability that each pixel belongs to a dental region and a non-dental region is:
the formula for updating the distance between each pixel and the center of the pixel divided into the tooth area and the center of the pixel of the non-tooth area is:
s30, inputting the tooth area into a trained caries detection network model, and determining the caries in the oral cavity full-scope image through the caries detection network model.
Specifically, the dental region separated from the oral panorama image is used as an input. The dental region may contain a plurality of teeth and the goal of the detection network model is to find possible caries in these teeth.
After the tooth area is determined, it is input into a trained caries detection network model. The caries detection network model is trained to learn how to acquire and analyze relevant features from the tooth area images, thereby accurately locating the caries position.
The training process of the caries detection network model is performed through a large number of oral cavity full view film samples, including various dental images without caries and with caries. During training, the network model gradually learns and understands what features and patterns may represent caries, which in turn may be used to accurately identify and flag caries as new dental area images are provided.
Caries sites marked by the network model will be provided to a specialist doctor or robot for further treatment procedures. Meanwhile, the identification result can also be used as an auxiliary tool of a doctor, so that the oral health state of a patient can be evaluated more quickly and accurately, and a personalized treatment plan can be formulated for the patient.
In one implementation, a deep learning model of Faster R-CNN is used to determine caries locations in oral panorama images.
First, the tooth region image is input into a neural network, and feature extraction is performed via deep learning of fast R-CNN. The fast R-CNN model is a convolutional neural network designed to quickly and accurately detect target objects in pictures, and is suitable for tooth and caries positioning requirements.
The Faster R-CNN model consists essentially of two parts: a regional proposal network (Region Proposal Network, RPN) for generating candidate boxes; and secondly, a Fast R-CNN network is used for classifying candidate frames and fine-tuning the positions of the candidate frames.
In detecting a target object in a picture, faster R-CNN first searches for a location in the image of a tooth area through an RPN network where caries may exist, generating a series of candidate boxes. These candidate boxes are then classified by Fast R-CNN network for caries and non-caries while the positions of the candidate boxes are fine-tuned to more precisely match the corresponding tooth areas.
For example, one tooth is in caries in one full view image of the mouth. After the Faster R-CNN model is processed, the model can successfully locate the caries and generate an accurate candidate box around it. At the same time, the model can also output a classification result indicating that this is caries.
The oral cavity full-scene image after being trained and identified by the Faster R-CNN model clearly and accurately shows the teeth with decayed teeth and the positions thereof. This provides great convenience and accuracy for subsequent dental treatments, thereby improving the level and efficiency of medical services.
In the implementation of the present application, the caries localization and classification network is defined as:
wherein, DECA is caries detection network model based on Yolo target identification network; type= { type 1 ,type 2 ,…,type H The probability of caries with different caries degrees identified by the caries detection network model is represented by H, the number of caries species is represented by H, and caries categories to be classified in the embodiment are no caries, shallow caries, medium caries and deep caries; loc= { loc 1 ,loc 2 ,loc 3 ,loc 4 The caries detection network model is used for identifying caries position parameters including the length and width of a caries outer bounding box and the center position; w is a parameter of a caries detection network model.
The caries detection network model has the objective function that:
where h=1, 2, …, ψ,psi represents the grid number to be identified by the caries detection network model, and is obtained by uniformly dividing the preprocessed oral cavity full-scene image;a judging function which indicates whether caries exists in the current detected grid, wherein the judging function is 1 when caries exists, and the judging function is 0 otherwise; k=1, 2,3,4; loc k And->Values of kth parameters representing the caries location and the caries true location identified by the caries locating and classifying network, respectively; b=1, 2, …, H; />A serial number b' indicating whether the mesh detected before contains a true caries species, 1 when the true caries species is contained, or 0 when the true caries species is contained; alpha 1 And alpha 2 Representing control weights, in a specific implementation of the present invention, a is preset 1 =5,α 2 =1.2。
In addition, in the specific implementation of the invention, the caries detection network model is optimized based on gradient descent, and the optimized network is used for carrying out caries identification on the full-view image of the oral cavity of the patient:
optimizing a caries detection network model based on gradient descent, wherein the gradient descent process is as follows:
wherein W is f And W is f+1 The f-th and f+1-th optimization results of caries detection network model parameters respectively; grad 1 The calculation mode of (a) is as follows:
wherein,to optimize target pair W f Is a bias guide of (2); θ represents the learning rate of the caries detection network model; grad 2 The calculation mode of (a) is as follows:
/>
wherein f represents the optimization times; η (eta) 1 And eta 2 To adjust the coefficients, η is preset in the specific implementation of the present invention 1 =0.9,η 2 =0.99;ε=0.000001。
In summary, the present embodiment provides a caries identification method based on an oral panoramic image, the method includes obtaining an oral panoramic image, and performing a preprocessing operation on the oral panoramic image to obtain a target oral panoramic image, where the preprocessing operation includes a denoising operation and/or a brightness adjustment operation; inputting a target oral panoramic image into a trained image segmentation network model, and determining a tooth area in the oral panoramic image through the image segmentation network model; inputting the tooth area into a trained caries detection network model, and determining the caries in the oral cavity full view image through the caries detection network model. According to the method, the target image is obtained by preprocessing the full-scene image of the oral cavity, then the oral cavity area is obtained by image segmentation of the target image, finally the decayed tooth in the target image is determined by detecting the network model, and the accuracy and the efficiency of decayed tooth identification are improved.
Based on the above-mentioned caries identification method based on oral panoramic film image, this embodiment provides a caries identification apparatus based on oral panoramic film image, as shown in fig. 2, the apparatus includes:
the preprocessing module 100 is configured to acquire an oral panoramic image, and perform preprocessing operation on the oral panoramic image to obtain a target oral panoramic image, where the preprocessing operation includes a denoising operation and/or a brightness adjustment operation;
the region segmentation module 200 is used for inputting the target oral panoramic patch image into a trained image segmentation network model, and determining the tooth region in the oral panoramic patch image through the image segmentation network model;
a caries detection module 300 for inputting the dental area into a trained caries detection network model, determining caries in the oral cavity full view image from the caries detection network model.
Based on the dental caries recognition method based on oral panorama sheet image as described above, the present embodiment provides a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps in the dental caries recognition method based on oral panorama sheet image as described in the above embodiments.
Based on the dental caries identification method based on the oral panorama image, the application also provides a terminal device, as shown in fig. 3, which comprises at least one processor (processor) 20; a display screen 21; and a memory (memory) 22, which may also include a communication interface (Communications Interface) 23 and a bus 24. Wherein the processor 20, the display 21, the memory 22 and the communication interface 23 may communicate with each other via a bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may invoke logic instructions in the memory 22 to perform the methods of the embodiments described above.
Further, the logic instructions in the memory 22 described above may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand alone product.
The memory 22, as a computer readable storage medium, may be configured to store a software program, a computer executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 performs functional applications and data processing, i.e. implements the methods of the embodiments described above, by running software programs, instructions or modules stored in the memory 22.
The memory 22 may include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the terminal device, etc. In addition, the memory 22 may include high-speed random access memory, and may also include nonvolatile memory. For example, a plurality of media capable of storing program codes such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or a transitory storage medium may be used.
In addition, the specific processes that the storage medium and the plurality of instruction processors in the terminal device load and execute are described in detail in the above method, and are not stated here.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (9)

1. A method for caries identification based on oral panoramic film images, the method comprising:
acquiring an oral panoramic image, and performing preprocessing operation on the oral panoramic image to obtain a target oral panoramic image, wherein the preprocessing operation comprises a denoising operation and/or a brightness adjustment operation;
inputting a target oral panoramic image into a trained image segmentation network model, and determining a tooth area in the oral panoramic image through the image segmentation network model;
inputting the tooth area into a trained caries detection network model, and determining the caries in the oral cavity full view image through the caries detection network model.
2. The method for caries identification based on an oral panorama image according to claim 1, wherein the denoising operation specifically comprises:
for each pixel in the oral cavity full-scene image, determining a denoising window corresponding to the pixel;
determining a pixel median value corresponding to the denoising window, and calculating the weight of the pixel based on the pixel median value to obtain the weight corresponding to each pixel;
denoising the oral cavity full-scene image based on the weight of each pixel.
3. The caries identification method based on an oral panorama image according to claim 2, wherein the determining the pixel median value corresponding to the denoising window specifically comprises:
obtaining the maximum value and the minimum value of pixels in the denoising window, and removing pixels corresponding to the maximum value and the minimum value from the denoising window to obtain a candidate pixel set corresponding to the denoising window;
and determining a pixel median value corresponding to the denoising window according to the pixel values of all pixels in the candidate pixel set.
4. The method for caries identification based on an oral panorama image according to claim 1, wherein the brightness adjustment operation specifically comprises:
calculating gray probability distribution of the oral cavity full-scene image;
calculating a brightness adjustment parameter based on the gray probability distribution;
and carrying out brightness adjustment on the oral cavity full-view image based on the brightness adjustment parameters to obtain the oral cavity full-view image after brightness adjustment.
5. The method for caries identification based on an oral panorama image according to claim 1, wherein the objective function used in the training process of the image segmentation network model is:
wherein,representing the number of pixels in the image of the full view of the mouth, etc >And->Representing the probability that the ith pixel belongs to a dental region and a non-dental region, respectively, j represents the weight, d i,tooth And d i,nontooth Representing the distance between the i-th pixel and the pixel center of the tooth area and the pixel center of the non-tooth area, respectively.
6. The method for caries identification based on an oral panorama image according to claim 1, wherein the objective function used in the training process of the caries detection network model is:
wherein, psi represents the grid number to be identified by the caries detection network model,determining function, loc, indicating whether caries is present in the current detection grid k And->Values of kth parameters respectively representing the caries site and the caries true site identified by the caries detection network model,/->A serial number b', alpha indicating whether the current detection grid contains a true caries species 1 And alpha 2 All represent control weights.
7. A caries identification apparatus based on an image of an oral cavity full scene, the apparatus comprising:
the preprocessing module is used for acquiring an oral panoramic image and preprocessing the oral panoramic image to obtain a target oral panoramic image, wherein the preprocessing operation comprises a denoising operation and/or a brightness adjustment operation;
The region segmentation module is used for inputting the target oral panoramic image into a trained image segmentation network model, and determining tooth regions in the oral panoramic image through the image segmentation network model;
the dental caries detection module is used for inputting the tooth area into a trained caries detection network model, and determining dental caries in the oral cavity full-scope image through the caries detection network model.
8. A computer-readable storage medium storing one or more programs executable by one or more processors to perform the steps in the method for caries identification based on oral panoramic film images as recited in any one of claims 1-6.
9. A terminal device, comprising: a processor and a memory;
the memory has stored thereon a computer readable program executable by the processor;
the processor, when executing the computer readable program, implements the steps in the dental caries identification method based on oral panoramic film images as defined in any one of claims 1-6.
CN202311169656.1A 2023-09-08 2023-09-08 Dental caries identification method and device based on oral panoramic image Pending CN117252825A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311169656.1A CN117252825A (en) 2023-09-08 2023-09-08 Dental caries identification method and device based on oral panoramic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311169656.1A CN117252825A (en) 2023-09-08 2023-09-08 Dental caries identification method and device based on oral panoramic image

Publications (1)

Publication Number Publication Date
CN117252825A true CN117252825A (en) 2023-12-19

Family

ID=89125741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311169656.1A Pending CN117252825A (en) 2023-09-08 2023-09-08 Dental caries identification method and device based on oral panoramic image

Country Status (1)

Country Link
CN (1) CN117252825A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117952987A (en) * 2024-03-27 2024-04-30 有方(合肥)医疗科技有限公司 CBCT image data processing method and device, electronic equipment and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460762A (en) * 2018-03-16 2018-08-28 鲍志遥 A kind of detection device and its method of quick detection saprodontia
CN109948619A (en) * 2019-03-12 2019-06-28 北京羽医甘蓝信息技术有限公司 The method and apparatus of whole scenery piece dental caries identification based on deep learning
CN112750111A (en) * 2021-01-14 2021-05-04 浙江工业大学 Method for identifying and segmenting diseases in tooth panoramic picture
CN112837278A (en) * 2021-01-25 2021-05-25 浙江工业大学 Tooth panoramic film decayed tooth identification method based on depth boundary supervision
CN114332123A (en) * 2021-12-30 2022-04-12 杭州电子科技大学 Automatic caries grading method and system based on panoramic film
CN115153900A (en) * 2022-07-27 2022-10-11 宋海洋 Dental caries removal method and dental caries removal system based on dental surgery robot
CN116205925A (en) * 2022-12-20 2023-06-02 北京工商大学 Tooth occlusion wing tooth caries segmentation method based on improved U-Net network
CN116228639A (en) * 2022-12-12 2023-06-06 杭州电子科技大学 Oral cavity full-scene caries segmentation method based on semi-supervised multistage uncertainty perception

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460762A (en) * 2018-03-16 2018-08-28 鲍志遥 A kind of detection device and its method of quick detection saprodontia
CN109948619A (en) * 2019-03-12 2019-06-28 北京羽医甘蓝信息技术有限公司 The method and apparatus of whole scenery piece dental caries identification based on deep learning
CN112750111A (en) * 2021-01-14 2021-05-04 浙江工业大学 Method for identifying and segmenting diseases in tooth panoramic picture
CN112837278A (en) * 2021-01-25 2021-05-25 浙江工业大学 Tooth panoramic film decayed tooth identification method based on depth boundary supervision
CN114332123A (en) * 2021-12-30 2022-04-12 杭州电子科技大学 Automatic caries grading method and system based on panoramic film
CN115153900A (en) * 2022-07-27 2022-10-11 宋海洋 Dental caries removal method and dental caries removal system based on dental surgery robot
CN116228639A (en) * 2022-12-12 2023-06-06 杭州电子科技大学 Oral cavity full-scene caries segmentation method based on semi-supervised multistage uncertainty perception
CN116205925A (en) * 2022-12-20 2023-06-02 北京工商大学 Tooth occlusion wing tooth caries segmentation method based on improved U-Net network

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117952987A (en) * 2024-03-27 2024-04-30 有方(合肥)医疗科技有限公司 CBCT image data processing method and device, electronic equipment and medium

Similar Documents

Publication Publication Date Title
US11842556B2 (en) Image analysis method, apparatus, program, and learned deep learning algorithm
CN108537784B (en) CT image pulmonary nodule detection method based on deep learning
CN110033456B (en) Medical image processing method, device, equipment and system
CN107180421B (en) Fundus image lesion detection method and device
CN111583227B (en) Method, device, equipment and medium for automatically counting fluorescent cells
US6404936B1 (en) Subject image extraction method and apparatus
JP4529172B2 (en) Method and apparatus for detecting red eye region in digital image
CN110458831B (en) Scoliosis image processing method based on deep learning
CN111062947B (en) X-ray chest radiography focus positioning method and system based on deep learning
Antohe et al. Implications of digital image processing in the paraclinical assessment of the partially edentated patient
JP6342810B2 (en) Image processing
CN113781455B (en) Cervical cell image anomaly detection method, device, equipment and medium
CN108830874A (en) A kind of number pathology full slice Image blank region automatic division method
CN111047559B (en) Method for rapidly detecting abnormal area of digital pathological section
CN113570619A (en) Computer-aided pancreas pathology image diagnosis system based on artificial intelligence
CN111105427A (en) Lung image segmentation method and system based on connected region analysis
CN116934761A (en) Self-adaptive detection method for defects of latex gloves
CN116993764A (en) Stomach CT intelligent segmentation extraction method
CN117252825A (en) Dental caries identification method and device based on oral panoramic image
CN116468923A (en) Image strengthening method and device based on weighted resampling clustering instability
CN115359031A (en) Digital pathological image slice quality evaluation method
JP5383486B2 (en) Method and apparatus for operating a device for cerebral hemorrhage segmentation
CN113706515A (en) Tongue image abnormality determination method, tongue image abnormality determination device, computer device, and storage medium
CN110675402A (en) Colorectal polyp segmentation method based on endoscope image
CN117315378B (en) Grading judgment method for pneumoconiosis and related equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination