CN113792773B - Dental root canal treatment history detection model training method, image processing method and device - Google Patents


Info

Publication number
CN113792773B
Authority
CN
China
Prior art keywords
image
root canal
processed
treatment history
canal treatment
Prior art date
Legal status
Active
Application number
CN202111021968.9A
Other languages
Chinese (zh)
Other versions
CN113792773A (en)
Inventor
许桐楷
刘峰
彭俐
朱宇昂
赵小艇
曹寅
李琳
梁生
Current Assignee
Peking University School of Stomatology
Original Assignee
Peking University School of Stomatology
Priority date
Filing date
Publication date
Application filed by Peking University School of Stomatology filed Critical Peking University School of Stomatology
Priority to CN202111021968.9A priority Critical patent/CN113792773B/en
Publication of CN113792773A publication Critical patent/CN113792773A/en
Application granted granted Critical
Publication of CN113792773B publication Critical patent/CN113792773B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

The embodiment of the application provides a root canal treatment history detection model training method, an image processing method and an image processing device. A plurality of sample images to be processed are acquired, each being an image of a single tooth and the gingiva corresponding to that tooth; the pixel intensity of each sample image to be processed is integrated to obtain the position of the gum line corresponding to the gingiva; region-of-interest extraction is performed on each sample image to be processed according to the position of the gum line to obtain a target image; feature vectors corresponding to the plurality of target images are extracted to generate a feature vector sample set, and the feature vector sample set is input into a support vector machine classifier for training to obtain a root canal treatment history detection model. The root canal treatment history detection model is used to detect root canal treatment history and acquire relevant information about the root canal. The technical scheme provided by the embodiment of the application can improve the accuracy of dental root canal treatment history detection results.

Description

Dental root canal treatment history detection model training method, image processing method and device
Technical Field
The application relates to the technical field of medical treatment, in particular to a dental root canal treatment history detection model training method, an image processing method and an image processing device.
Background
With the development of medical technology, when detecting tooth-related problems, doctors mainly determine a user's oral problems through artificial intelligence-based X-ray image detection methods. Therefore, the development of intelligent medical treatment based on artificial intelligence detection is of great significance.
At present, artificial intelligence detection methods mainly detect caries, periodontitis and wisdom teeth by means of convolutional neural networks, and do not detect root canal treatment history. The convolutional neural network approach may cause key feature information to be lost or weakened during convolution and pooling, and other dental conditions, such as crowned teeth and filled teeth, may present features similar to those of a tooth with a root canal treatment history. As a result, features indicating a root canal treatment history may be lost when such images are processed by a convolutional neural network, leading to poor accuracy in detecting root canal treatment history.
Disclosure of Invention
The embodiment of the application provides a method for training a root canal treatment history detection model, an image processing method and a device, which can accurately extract the region of interest from an image of a single tooth and the gum corresponding to that tooth, thereby improving the accuracy of detecting root canal treatment history.
In a first aspect, an embodiment of the present application provides a method for training a root canal treatment history detection model, including:
And acquiring a plurality of sample images to be processed, wherein the sample images to be processed are images of a single tooth and gums corresponding to the single tooth.
And integrating the pixel intensity of each sample image to be processed, so as to obtain the position of the gum line corresponding to the gum.
And extracting the region of interest from the sample image to be processed according to the position of the gum line to obtain a target image.
Extracting feature vectors corresponding to a plurality of target images, generating a feature vector sample set, and inputting the feature vector sample set into a support vector machine classifier for training to obtain a dental root canal treatment history detection model; the root canal treatment history detection model is used for detecting the root canal treatment history and acquiring relevant information of the root canal.
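The training step above can be illustrated with a minimal sketch. Here a linear SVM trained by hinge-loss sub-gradient descent stands in for an off-the-shelf support vector machine classifier, and the feature vectors, labels (+1 for an existing root canal treatment history, -1 otherwise) and hyperparameters are illustrative assumptions, not taken from the embodiment.

```python
# Minimal linear SVM via hinge-loss sub-gradient descent (a stand-in for the
# support vector machine classifier described above). Toy data only.

def train_linear_svm(samples, labels, lr=0.01, lam=0.01, epochs=200):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # inside the margin: hinge loss is active
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # outside the margin: only the regulariser acts
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    """Sign of the decision function: +1 = history present, -1 = absent."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy, linearly separable feature vectors standing in for the sample set.
samples = [[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]]
labels = [1, 1, -1, -1]
w, b = train_linear_svm(samples, labels)
print([predict(w, b, x) for x in samples])  # classifications on the training set
```

In practice a kernel SVM from a standard library would be used instead; the sketch only shows the shape of the data flow from feature vectors to a trained classifier.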
In one possible implementation manner, the integrating the pixel intensity of the sample image to be processed to obtain the position of the gum line corresponding to the gum includes:
And integrating the pixel intensity of each row on the sample image to be processed in the direction parallel to the gum line to obtain the pixel intensity sum of each row.
And determining, among the pixel intensity sums corresponding to all lines of the sample image to be processed, the position of the straight line, parallel to the gum line direction and located in the middle area of the sample image to be processed, that corresponds to the minimum pixel intensity sum as the position of the gum line.
In one possible implementation, the sample image to be processed is a rectangular image.
And performing region of interest extraction processing on the sample image to be processed according to the position of the gum line to obtain a target image, wherein the method comprises the following steps:
And acquiring pixel blocks with preset sizes at four corner positions of the sample image to be processed.
And calculating the pixel mean value of each pixel block, and arranging the pixel mean values corresponding to the four corner positions in ascending order to form a pixel mean value sequence.
And determining the position of the gum according to the pixel mean value sequence, and extracting the region of interest of the sample image to be processed according to the position of the gum line to obtain a target image.
In a possible implementation manner, the determining the position of the gum according to the pixel mean sequence, and extracting the region of interest of the sample image to be processed according to the position of the gum line, to obtain a target image, includes:
and determining the positions of pixel blocks corresponding to the pixel mean values in the third position and the fourth position in the pixel mean value sequence as the positions of the gingiva.
And cutting the sample image to be processed by taking the gum line as a dividing line, and determining the region between the gum line and the positions of the two pixel blocks in the sample image to be processed as a region of interest.
And determining the image corresponding to the region of interest as a target image.
In one possible implementation manner, the extracting feature vectors corresponding to the plurality of target images includes:
And extracting feature points of the multiple target images through a scale-invariant feature transformation algorithm to obtain features corresponding to the target images.
And carrying out cluster analysis on the features by using a clustering algorithm to obtain a first clustering result.
And extracting feature vectors corresponding to a plurality of target images according to the first clustering result.
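The three steps above (feature points, cluster analysis, per-image feature vectors) match a bag-of-visual-words scheme. The sketch below illustrates the final step under that assumption: each descriptor is assigned to its nearest cluster centre, and the normalised histogram of assignments serves as the image's feature vector. The descriptors and cluster centres are toy stand-ins, not real scale-invariant feature transformation output.

```python
# Bag-of-visual-words feature vector from clustered feature points (a sketch
# of the assumed scheme; 2-D toy descriptors stand in for real descriptors).

def nearest_centre(descriptor, centres):
    """Index of the cluster centre closest to the descriptor (squared Euclidean)."""
    best, best_d = 0, float("inf")
    for i, c in enumerate(centres):
        d = sum((a - b) ** 2 for a, b in zip(descriptor, c))
        if d < best_d:
            best, best_d = i, d
    return best

def bovw_vector(descriptors, centres):
    """Normalised histogram of cluster assignments: one fixed-length vector per image."""
    hist = [0.0] * len(centres)
    for d in descriptors:
        hist[nearest_centre(d, centres)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# Toy 2-D "descriptors" from one target image, and 3 cluster centres.
centres = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
descriptors = [(0.1, 0.2), (4.9, 5.1), (5.2, 4.8), (9.8, 0.1)]
vec = bovw_vector(descriptors, centres)
print(vec)  # one entry per visual word
```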
In one possible implementation, the method further includes:
and taking a preset number of target images as first target images, and extracting feature points of the first target images through a scale-invariant feature transformation algorithm to obtain features corresponding to the first target images.
And carrying out cluster analysis on the features by using a clustering algorithm to obtain a second clustering result.
And extracting the feature vector corresponding to the first target image according to the second clustering result to generate a feature vector test set.
And inputting the feature vector test set into the root canal treatment history detection model, and testing the root canal treatment history detection model to obtain the accuracy of the root canal treatment history detection model on the root canal treatment history detection.
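The testing step above amounts to comparing the model's predictions on the feature vector test set against the ground-truth labels and reporting the fraction correct. In this sketch, `predict` is a toy stand-in for the trained model's decision function, and the vectors and labels are illustrative.

```python
# Accuracy of the detection model on a labelled feature-vector test set.

def accuracy(predict, test_vectors, labels):
    correct = sum(1 for v, y in zip(test_vectors, labels) if predict(v) == y)
    return correct / len(labels)

# Toy stand-in: predict "history present" (1) iff the first bin dominates.
predict = lambda v: 1 if v[0] > 0.5 else 0
test_vectors = [[0.7, 0.3], [0.2, 0.8], [0.6, 0.4], [0.4, 0.6]]
labels = [1, 0, 1, 1]  # hypothetical annotations
print(accuracy(predict, test_vectors, labels))
```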
In a second aspect, an embodiment of the present application provides an image processing method, including:
And acquiring an image to be processed, wherein the image to be processed is an image of a single tooth and a gum corresponding to the single tooth.
And integrating the pixel intensity of the image to be processed to obtain the position of the gum line corresponding to the gum.
And extracting the region of interest from the image to be processed according to the position of the gum line to obtain a second image.
And inputting the second image into a root canal treatment history detection model to detect the root canal treatment history, so as to obtain relevant information about the root canal, wherein the relevant information indicates whether a root canal treatment history exists or not, and the root canal treatment history detection model is trained according to the method of the first aspect.
In one possible implementation manner, the integrating the pixel intensity of the image to be processed to obtain the position of the gum line corresponding to the gum includes:
And integrating the pixel intensity of each row on the image to be processed in the direction parallel to the gum line to obtain the pixel intensity sum of each row.
And determining, among the pixel intensity sums corresponding to all lines of the image to be processed, the position of the straight line, parallel to the gum line direction and located in the middle area of the image to be processed, that corresponds to the minimum pixel intensity sum as the position of the gum line.
In one possible implementation, the image to be processed is a rectangular image.
The step of extracting the region of interest from the image to be processed according to the gum line to obtain a second image includes:
And acquiring pixel blocks with preset sizes at four corner positions of the image to be processed.
And calculating the pixel mean value of each pixel block, and arranging the pixel mean values corresponding to the four corner positions in ascending order to form a pixel mean value sequence.
And determining the position of the gum according to the pixel mean value sequence, and extracting the region of interest of the image to be processed according to the position of the gum line to obtain a second image.
In a possible implementation manner, the determining the position of the gum according to the pixel mean sequence, and extracting the region of interest of the image to be processed according to the position of the gum line, to obtain a second image, includes:
and determining the positions of pixel blocks corresponding to the pixel mean values in the third position and the fourth position in the pixel mean value sequence as the positions of the gingiva.
And cutting the image to be processed by taking the gum line as a dividing line, and determining the region between the gum line and the positions of the two pixel blocks in the image to be processed as a region of interest.
And determining the image corresponding to the region of interest as a second image.
In a possible implementation manner, the extracting the feature vector corresponding to the second image includes:
and extracting feature points of the second image through a scale-invariant feature transformation algorithm to obtain features corresponding to the second image.
And carrying out cluster analysis on the features by using a clustering algorithm to obtain a third clustering result.
And extracting the feature vector corresponding to the second image according to the third clustering result.
In a third aspect, an embodiment of the present application provides a dental root canal treatment history detection model training apparatus, the apparatus comprising:
the acquisition module is used for acquiring a plurality of sample images to be processed, wherein the sample images to be processed are images of a single tooth and the gum corresponding to the single tooth.
And the processing module is used for integrating the pixel intensity of each sample image to be processed to obtain the position of the gum line corresponding to the gum.
And the processing module is also used for extracting and processing the region of interest of the sample image to be processed according to the position of the gum line to obtain a target image.
And the training module is used for extracting the feature vectors corresponding to the target images, generating a feature vector sample set, and inputting the feature vector sample set into a support vector machine classifier for training to obtain a root canal treatment history detection model.
In a fourth aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition module is used for acquiring an image to be processed, wherein the image to be processed is an image of a single tooth and a gum corresponding to the single tooth.
And the determining module is used for carrying out integral processing on the pixel intensity of the image to be processed to obtain the position of the gum line corresponding to the gum.
The determining module is further used for extracting the region of interest from the image to be processed according to the position of the gum line to obtain a second image; the second image is obtained according to the method of the first aspect.
And the output module is used for extracting the feature vector corresponding to the second image and inputting it into the root canal treatment history detection model to obtain relevant information about the root canal, wherein the relevant information indicates whether a root canal treatment history exists or not, and the root canal treatment history detection model is trained according to the method of the first aspect.
In a fifth aspect, the present application provides an electronic device, comprising: at least one processor, memory;
the memory stores computer-executable instructions.
The at least one processor executes computer-executable instructions stored by the memory to cause the electronic device to perform the method of the first or second aspect.
In a sixth aspect, the present application provides a computer readable storage medium having stored thereon computer executable instructions which, when executed by a processor, implement the method of the first or second aspect.
Therefore, the embodiment of the application provides a root canal treatment history detection model training method, an image processing method and an image processing device. A plurality of sample images to be processed are acquired, each being an image of a single tooth and the gum corresponding to that tooth; the pixel intensity of each sample image to be processed is integrated to obtain the position of the gum line corresponding to the gum; region-of-interest extraction is performed on the sample image to be processed according to the position of the gum line to obtain a target image; feature vectors corresponding to the plurality of target images are extracted to generate a feature vector sample set, and the feature vector sample set is input into a support vector machine classifier for training to obtain a root canal treatment history detection model; the root canal treatment history detection model is used for detecting root canal treatment history and acquiring relevant information about the root canal. According to the technical scheme provided by the embodiment of the application, the region of interest is extracted from the image to be processed according to the determined position of the gum line, so that the target image can be obtained accurately and interference from other dental conditions is avoided; and the dental root canal treatment history detection model is obtained by training a support vector machine classifier, which avoids the loss of feature information caused by convolutional neural networks. Therefore, the accuracy of dental root canal treatment history detection results can be improved when the model is used for detection.
Drawings
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
FIG. 2 is a flow chart of a method for training a dental root canal treatment history detection model according to an embodiment of the present application;
fig. 3 is a schematic diagram of a method for obtaining a pixel block in an image of a sample to be processed according to an embodiment of the present application;
FIG. 4 is a flow chart of another embodiment of a method for training a dental root canal treatment history detection model;
FIG. 5 is a flowchart of a method for acquiring a region of interest according to an embodiment of the present application;
FIG. 6 is a schematic diagram of obtaining a horizontal pixel intensity integral curve according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a cutting line acquisition process according to an embodiment of the present application;
FIG. 8 is a schematic diagram of an image of a region of interest according to an embodiment of the present application;
FIG. 9 is a flow chart of a method for training and testing a dental root canal treatment history detection model according to an embodiment of the present application;
fig. 10 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a device for training a model for detecting a treatment history of a dental root canal according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 13 is a schematic structural diagram of an electronic device according to the present application.
Specific embodiments of the present disclosure have been shown by way of the above drawings and will be described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the disclosed concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. In the text of the present application, the character "/" generally indicates an "or" relationship between the associated objects.
Intelligent medical treatment is mainly performed by artificial intelligence methods; for example, oral problems are detected by artificial intelligence analysis of oral X-ray images. Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application. When dental problems in the oral cavity are detected according to the method shown in fig. 1, a plurality of images of the teeth and gums in the oral cavity are first acquired by an acquisition device; the acquired images are screened to discard images with excessive noise; the screened images are edge-cropped to remove invalid information at the image edges; a sample set is then constructed from the processed images and input into a convolutional neural network model, which extracts image features through successive convolution and pooling and performs classification through a fully connected layer, so that the convolutional neural network model is trained to obtain a trained model. Dental problems can then be detected using the trained model. The images are preprocessed to separate the image of the whole row of teeth and gums from the background, making the contrast between the image and the background more obvious.
It will be appreciated that the method shown in fig. 1 is designed for detecting caries, periodontitis and wisdom teeth. Its method of extracting image features is too simple, key feature information may be lost or weakened during convolution and pooling, and an image of a dental condition whose appearance is similar to an existing root canal treatment history may be misjudged as having a root canal treatment history. Therefore, whether a root canal treatment history exists cannot be accurately determined by the method shown in fig. 1; that is, the method cannot be used to detect root canal treatment history.
In order to solve the problem of low accuracy when the current convolutional neural network methods for caries, periodontitis and wisdom tooth detection are applied to root canal treatment history detection, the present application integrates the pixel intensity of a plurality of images of single teeth and their corresponding gums to obtain the position of the gum line, and extracts a region of interest to obtain a target image. A sample set is constructed from the feature vectors corresponding to the target images, and a support vector machine classifier is trained to obtain the root canal treatment history detection model, which avoids interference from other dental conditions in root canal treatment history detection. The model can then be used to process a tooth image to be processed and obtain relevant information about the root canal, thereby improving the accuracy of root canal treatment history detection.
The following describes in detail the method for training the model for detecting the treatment history of dental root canal provided by the present application by way of specific examples. It is to be understood that the following embodiments may be combined with each other and that some embodiments may not be repeated for the same or similar concepts or processes.
Fig. 2 is a flowchart of a method for training a dental root canal treatment history detection model according to an embodiment of the present application. The method of root canal treatment history detection model training may be performed by software and/or hardware means, for example, the hardware means may be a root canal treatment history detection model training means, which may be a terminal or a processing chip in the terminal. For example, referring to fig. 2, the root canal treatment history detection model training method may include:
s201, acquiring a plurality of sample images to be processed.
In the embodiment of the application, the sample image to be processed is an image of a single tooth and a gum corresponding to the single tooth. The plurality of sample images to be processed can be images acquired by dentists or can be acquired from the internet, and the embodiment of the application does not limit the specific acquisition mode.
For example, after a plurality of sample images to be processed are obtained, each sample image to be processed may be labeled as having or not having a root canal treatment history, and the images may be subjected to threshold segmentation. The sample images to be processed may also be processed by left-right flipping, contrast enhancement and brightness enhancement, so as to remove noise between teeth and make the distinction between the single tooth with its corresponding gum and the background in the sample image to be processed more obvious.
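The preprocessing described above can be sketched as follows on a grayscale image stored as a list of rows of pixel values in [0, 255]; the threshold value and enhancement factors are illustrative assumptions, not values from the embodiment.

```python
# Sketch of the preprocessing steps: threshold segmentation, left-right
# flipping, and simple contrast/brightness enhancement. Toy values only.

def threshold_segment(img, t=128):
    """Binarise: pixels at or above t become 255 (foreground), below t become 0."""
    return [[255 if p >= t else 0 for p in row] for row in img]

def flip_lr(img):
    """Left-right flip, used as a data-augmentation step."""
    return [row[::-1] for row in img]

def adjust(img, contrast=1.2, brightness=10):
    """Linear contrast/brightness enhancement, clipped to [0, 255]."""
    return [[min(255, max(0, int(p * contrast + brightness))) for p in row]
            for row in img]

img = [[30, 200], [120, 180]]
print(threshold_segment(img))  # binary mask
print(flip_lr(img))            # augmented copy
```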
After the threshold segmentation process is performed on the sample image to be processed, the following S202 may be performed:
s202, integrating the pixel intensity of each sample image to be processed according to each sample image to be processed, and obtaining the position of the gum line corresponding to the gum.
When the pixel intensity of the sample image to be processed is integrated to obtain the position of the gum line corresponding to the gum, the pixel intensity of each row of the sample image to be processed may be integrated in the direction parallel to the gum line to obtain the pixel intensity sum of each row; then, among the pixel intensity sums corresponding to all rows of the sample image to be processed, the position of the straight line parallel to the gum line direction that corresponds to the minimum pixel intensity sum in the middle area of the sample image to be processed is determined as the position of the gum line. Since the sample image to be processed is an image of a single tooth and the gum corresponding to that tooth, the gum line lies in the middle area of the sample image to be processed. Restricting the minimum to the middle area avoids misjudging positions with lower pixel intensity sums elsewhere in the image as the gum line, so the position of the gum line is determined more accurately.
In another possible implementation, when determining the position of the gum line, the direction parallel to the gum line may be taken as the X-axis direction and the direction perpendicular to the gum line as the Y-axis direction; that is, the pixel intensities of each row of the sample image to be processed are integrated in the X-axis direction to obtain the pixel intensity sum of each row, and the pixel intensity curve of the entire sample image to be processed is drawn on the X-Y coordinate axes to obtain a pixel intensity graph. Since the lowest point of the pixel intensity curve corresponds to the smallest pixel intensity sum, the position of the straight line parallel to the gum line direction corresponding to the valley point of the pixel intensity curve in the middle area of the sample image to be processed can be directly determined as the position of the gum line.
In this possible implementation, in order to further improve the accuracy of the determined gum line position, a Savitzky-Golay filter may be used to perform S-G filtering on the obtained pixel intensity curve to make the curve smoother, thereby improving the accuracy of the determined gum line position.
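The gum-line localisation described above can be sketched as follows. Two assumptions of this illustration: a centred moving average stands in for the Savitzky-Golay filter, and the "middle area" is taken to be the middle third of the rows.

```python
# Sketch: sum pixel intensities along each row (the direction parallel to the
# gum line), smooth the curve, and take the minimum restricted to the middle
# area of the image. Toy image only.

def row_intensity_sums(img):
    return [sum(row) for row in img]

def smooth(curve, k=3):
    """Centred moving average of window k (a stand-in for S-G filtering)."""
    h = k // 2
    return [sum(curve[max(0, i - h):i + h + 1]) / len(curve[max(0, i - h):i + h + 1])
            for i in range(len(curve))]

def gum_line_row(img):
    sums = smooth(row_intensity_sums(img))
    lo, hi = len(sums) // 3, 2 * len(sums) // 3  # middle third = "middle area"
    middle = sums[lo:hi]
    return lo + middle.index(min(middle))

# Bright tooth/gum rows with one dark valley in the middle (the gum line).
img = [[200]*4, [210]*4, [190]*4, [40]*4, [180]*4, [220]*4, [210]*4, [205]*4, [215]*4]
print(gum_line_row(img))  # index of the row taken as the gum line
```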
In the embodiment of the application, the position of the gum line is determined by calculating the pixel intensity sum of each row of the sample image to be processed and taking the position of the straight line, parallel to the gum line direction, that corresponds to the smallest pixel intensity sum within the middle area of the sample image to be processed as the position of the gum line.
S203, extracting and processing the region of interest of the sample image to be processed according to the position of the gum line to obtain a target image.
For example, the sample image to be processed is a rectangular image. When extracting the region of interest according to the position of the gum line to obtain the target image, pixel blocks of a preset size may be acquired at the four corner positions of the sample image to be processed; the pixel mean of each block is calculated, and the four means are arranged in ascending order to form a pixel mean sequence; the position of the gum is then determined from this sequence, and the region of interest of the sample image to be processed is extracted according to the position of the gum line to obtain the target image. The preset block size may be chosen according to the size of the sample image to be processed: the larger the image, the larger the block, and vice versa; the embodiment of the application places no limit on the preset block size.
For example, referring to fig. 3, fig. 3 is a schematic diagram of acquiring pixel blocks in a sample image to be processed according to an embodiment of the present application. The four white boxes in fig. 3 mark the positions of the acquired pixel blocks. The pixel means of these four blocks are obtained and arranged in ascending order to form the pixel mean sequence.
In the embodiment of the application, the position of the gum is determined from the pixel mean sequence formed by the means of the pixel blocks at the four corner positions of the sample image to be processed, and the region of interest is then extracted according to the position of the gum line to obtain the target image. Because the pixel means at the four corners are related to the position of the gum, this improves the accuracy of the extracted region of interest and hence of the resulting target image.
When determining the position of the gum from the pixel mean sequence and extracting the region of interest according to the position of the gum line to obtain the target image, the positions of the pixel blocks whose means rank third and fourth in the sequence (i.e. the two largest means) are determined as the position of the gum; the sample image to be processed is cut with the gum line as the dividing line, and the area between the gum line and the positions of those two pixel blocks is determined as the region of interest; the image of that region is determined as the target image.
As shown in fig. 3, of the pixel blocks at the four corners of the sample image to be processed, two fall on the background and two fall on the gum. Since the gum is brighter than the background, the positions of the pixel blocks whose means rank third and fourth in the ascending pixel mean sequence are determined as the position of the gum.
For example, when the positions of the third- and fourth-ranked pixel blocks are determined as the position of the gum, the pixel mean sequence can also be used to determine whether the tooth is an upper tooth or a lower tooth: if those two pixel blocks lie in the upper part of the sample image to be processed, the image is of an upper tooth; if they lie in the lower part, the image is of a lower tooth. Here, upper teeth are teeth in the upper part of the oral cavity, and lower teeth are teeth in the lower part of the oral cavity. It will be appreciated that fig. 3 shows an image of a lower tooth.
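The corner-block ranking above can be sketched as follows. This is a minimal illustration assuming a grayscale NumPy image in which the gum is brighter than the background; the function names and the default 10-pixel block size are illustrative assumptions.

```python
import numpy as np

def corner_block_means(img, block=10):
    """Mean intensity of the four corner pixel blocks of a rectangular image."""
    corners = {
        "top_left": img[:block, :block],
        "top_right": img[:block, -block:],
        "bottom_left": img[-block:, :block],
        "bottom_right": img[-block:, -block:],
    }
    return {name: float(patch.mean()) for name, patch in corners.items()}

def gum_side(img, block=10):
    """Side ('top' or 'bottom') on which the gum lies: the two largest
    corner-block means (third and fourth in ascending order) mark it."""
    means = corner_block_means(img, block)
    ranked = sorted(means, key=means.get)          # ascending by mean
    top_votes = sum(name.startswith("top") for name in ranked[2:])
    return "top" if top_votes == 2 else "bottom"
```

A result of `"top"` corresponds to the upper-tooth case described above (gum blocks in the upper part of the image), and `"bottom"` to the lower-tooth case.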
In the embodiment of the application, after the position of the gum is determined, the image to be processed is cut with the gum line as the dividing line, and the area between the gum line and the positions of the two gum pixel blocks is determined as the region of interest. Because this region covers the gum, interference from other dental conditions can be excluded, which improves the accuracy of the root canal treatment history detection model when model training is performed on the target images.
For example, after the target images are determined, their sizes may differ because the gum line lies at a different position in each image. To ensure the accuracy of model training, the target images may be resized to a single uniform size.
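The size-unification step can be sketched as follows; a minimal version assuming grayscale NumPy images, with the target size and function name as illustrative assumptions (the patent does not specify an output resolution).

```python
import numpy as np
from scipy.ndimage import zoom

def resize_to(img, size=(128, 128)):
    """Rescale a cropped target image to a fixed size so that all
    training samples have matching dimensions."""
    fy = size[0] / img.shape[0]
    fx = size[1] / img.shape[1]
    # bilinear interpolation (order=1) is a common choice for photographs
    return zoom(img, (fy, fx), order=1)
```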
S204, extracting feature vectors corresponding to a plurality of target images, generating a feature vector sample set, and inputting the feature vector sample set into a support vector machine classifier for training to obtain a root canal treatment history detection model; the root canal treatment history detection model is used for detecting the root canal treatment history and acquiring the relevant information of the root canal.
When extracting the feature vectors corresponding to the multiple target images, feature points of the target images may be extracted with a scale-invariant feature transform algorithm to obtain the features of each target image; a clustering algorithm is then used to perform cluster analysis on these features, yielding a first clustering result; and the feature vectors of the target images are extracted according to the first clustering result. The Scale-Invariant Feature Transform (SIFT) algorithm is a computer vision algorithm for detecting and describing local features in images: it searches for extreme points across spatial scales and extracts descriptors that are invariant to position, scale and rotation.
For example, when feature points are extracted from the target images with the scale-invariant feature transform algorithm, SIFT feature extraction yields a set of feature descriptors for each target image, each descriptor carrying three pieces of information: position, scale and orientation. The SIFT features of all target images can then be pooled to construct a dictionary, and a k-means clustering algorithm can group them into a preset number of categories, for example 50, converting the dictionary into a bag of visual words and yielding the first clustering result. Further, for each feature in each image, the nearest cluster center is determined; counting these assignments gives the frequency with which each visual word occurs in the image, and this frequency histogram is determined as the feature vector of the target image. If the preset number of categories is 50, the bag contains 50 words and the feature vector of a target image is a 50×1 vector. The feature descriptors are mainly distributed along boundaries where the gray gradient of the image changes strongly.
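The dictionary-building and bag-of-words steps above can be sketched as follows. This is a minimal illustration that assumes the SIFT descriptors have already been extracted (e.g. with OpenCV's `cv2.SIFT_create`) and are supplied as NumPy arrays of shape (n_descriptors, 128); the function names and the small `k` used in the test are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, k=50, seed=0):
    """Cluster the pooled SIFT descriptors of all images into k visual
    words -- the 'dictionary converted into a bag of words' above."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_descriptors)

def bow_vector(descriptors, vocab):
    """Assign each descriptor of one image to its nearest cluster center
    and return the normalised word-frequency histogram (the k x 1
    feature vector described above)."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters)
    return hist.astype(float) / max(hist.sum(), 1)
```

With `k=50` this reproduces the 50×1 feature vector mentioned in the text; the normalisation to frequencies is one common convention and is an assumption here.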
For example, the feature vectors of the target images determined by the above method are assembled into a feature vector sample set, which is input into a Support Vector Machine (SVM for short) classifier for training, so that the SVM classifier learns to classify the feature vector sample set, yielding the root canal treatment history detection model.
In the embodiment of the application, the target images are processed with the scale-invariant feature transform algorithm and the clustering algorithm to extract the feature vectors of the multiple target images, so the determined feature vectors are more accurate and the accuracy of the resulting root canal treatment history detection model is improved.
To further improve the accuracy of the root canal treatment history detection model, the model can be tested. For example, a preset number of target images may be taken as first target images, and feature points extracted from them with the scale-invariant feature transform algorithm to obtain the features of each first target image; cluster analysis is performed on these features with the clustering algorithm to obtain a second clustering result; the feature vectors of the first target images are extracted according to the second clustering result to generate a feature vector test set; and the feature vector test set is input into the root canal treatment history detection model to test it, yielding the accuracy of the model on root canal treatment history detection. The method for determining the feature vectors of the first target images is similar to that for the target images. It can be understood that when the feature vectors of the target images were determined, the clustering algorithm already grouped the SIFT features into the preset categories and generated the bag of words, so that bag of words can be reused directly when determining the feature vectors of the first target images, which improves the efficiency of testing the root canal treatment history detection model.
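The train-then-test procedure can be sketched as follows, assuming the bag-of-words feature vectors and their positive/negative labels are already available as NumPy arrays; the function name, the RBF kernel choice, and the split ratio default are illustrative assumptions (the patent only names an SVM classifier).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_and_test(X, y, test_size=0.2, seed=0):
    """Train an SVM on a feature-vector sample set and report its
    accuracy on a held-out feature-vector test set."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, random_state=seed, stratify=y)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)   # the detection model
    return clf, clf.score(X_te, y_te)         # accuracy on the test set
```

The returned accuracy corresponds to the value compared against the preset threshold in the text: if it is too low, the model is retrained.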
It will be appreciated that after obtaining the accuracy of the root canal treatment history detection model on root canal treatment history detection, if the accuracy is lower than a preset threshold, the model may be corrected according to the method described in the above embodiment so as to improve its accuracy.
In the embodiment of the application, the accuracy of the root canal treatment history detection model for the root canal treatment history detection is obtained by testing the root canal treatment history detection model, so that the root canal treatment history detection model can be further corrected according to the accuracy of the root canal treatment history detection, and the accuracy of the root canal treatment history detection model is improved.
Therefore, according to the method for training the root canal treatment history detection model provided by the embodiment of the application, a plurality of sample images to be processed are acquired, each being an image of a single tooth and the gum corresponding to that tooth; the pixel intensities of each sample image to be processed are integrated to obtain the position of the gum line corresponding to the gum; the region of interest of each sample image is extracted according to the position of the gum line to obtain a target image; and the feature vectors corresponding to the multiple target images are extracted to generate a feature vector sample set, which is input into a support vector machine classifier for training to obtain the root canal treatment history detection model, the model being used to detect root canal treatment history and acquire relevant information about the root canal. With the technical scheme provided by the application, the region of interest of the image to be processed can be extracted according to the determined position of the gum line to obtain the target image, which avoids interference from other dental diseases and makes the obtained target image more accurate. In addition, training the support vector machine classifier on the feature vector sample set to obtain the root canal treatment history detection model avoids the loss of feature information that a convolutional neural network may cause, so the accuracy of the root canal information obtained when detecting root canal treatment history with the model can be improved.
In order to facilitate understanding of the method for training the dental root canal treatment history detection model provided by the embodiment of the present application, a detailed description will be given below of the technical solution provided by the embodiment of the present application by way of a specific example. Referring specifically to fig. 4, fig. 4 is a schematic flow chart of another method for training a root canal treatment history detection model according to an embodiment of the present application.
The method for training the root canal treatment history detection model comprises the following steps. Step 1: construct a sample set. The sample images in the sample set are images of individual teeth, root-canal-positive and root-canal-negative at the root apex, together with the gums corresponding to those teeth, batch-cropped and labeled by dentists. Illustratively, a positive root canal is one with a history of root canal treatment, and a negative root canal is one without such a history.
Step 2: data enhancement. The sample images in the sample set can be augmented fourfold using three methods: left-right flipping, contrast enhancement and brightness enhancement, so the sample set is expanded to four times its size. The embodiment of the application does not limit the specific augmentation method.
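The fourfold augmentation can be sketched as follows: the original image plus the three named transforms. A minimal version assuming 8-bit-range grayscale NumPy images; the contrast gain and brightness offset values are illustrative assumptions.

```python
import numpy as np

def augment(img):
    """Four-fold augmentation: original, left-right flip,
    contrast enhancement, brightness enhancement."""
    flipped = img[:, ::-1]                                          # mirror
    contrast = np.clip((img - img.mean()) * 1.5 + img.mean(), 0, 255)
    bright = np.clip(img + 30.0, 0, 255)                            # lighten
    return [img, flipped, contrast, bright]
```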
Step 3: extraction of the region of interest of the root canal. The region of interest of the root canal may be extracted by acquiring the gum dividing line and then extracting the teeth within the gum region.
Step 4: machine-learning training of the samples. The samples may be trained with the SIFT-SVM method to construct the root canal detection model.
Step 5: call the model for testing. The test set is input into the model to obtain the prediction accuracy of the model, i.e. the accuracy of the model for root canal treatment history detection.
For example, in the method for acquiring the region of interest in step 3, fig. 5 may be referred to, and fig. 5 is a schematic flow chart of a method for extracting the region of interest according to an embodiment of the present application. According to the illustration of fig. 5, when extracting a region of interest of a sample image, the steps are included:
Step 1: and acquiring a horizontal pixel intensity integral curve.
In this step, noise between gum gaps in the sample image is removed by threshold segmentation to obtain a processed sample image; the processed image is then pixel-integrated, that is, the pixel intensities on each row parallel to the x-axis are summed to obtain a set of pixel intensity sums, from which a pixel integration curve is drawn. Further, a Savitzky-Golay filter may filter the pixel integration curve to smooth it, yielding the horizontal pixel intensity integral curve. Here the x-axis direction is the direction of the gum line.
It can be understood that the threshold segmentation, the pixel integration and the Savitzky-Golay filter filtering in this step are similar to the methods described in the above embodiments, and the embodiments of the present application are not repeated here.
Fig. 6 is a schematic diagram of acquiring a horizontal pixel intensity integral curve according to an embodiment of the present application. The sample images of the filled tooth, the root-canal-positive tooth and the normal tooth shown in fig. 6 have all been processed by threshold segmentation, and their corresponding pixel intensity curves have all been filtered by a Savitzky-Golay filter. The embodiment of the present application is illustrated with a Savitzky-Golay filter only, but is not limited thereto. Here, root-canal positive indicates the presence of a root canal treatment history.
Step 2: and obtaining a cutting line.
In this step, all trough points in the middle area of the sample image are acquired. Since the gum line lies in the middle area of the sample image, and its pixel intensity integral is the smallest among all acquired trough points, the trough point with the smallest pixel intensity value can be selected from the pixel intensity integral curve, and its position determined as the position of the gum line, i.e. the position of the cutting line. The gum line is substantially horizontal, so after the trough point is determined, a lateral cut is made at that point, dividing the sample image into two parts at the gum line: an image of the tooth portion and an image of the gum portion.
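The trough-point search above can be sketched as follows, assuming the smoothed pixel intensity integral curve is supplied as a 1-D NumPy array; the function name and the middle-band fraction are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def cutting_line(curve, middle_frac=(0.25, 0.75)):
    """Return the index of the cutting line: among the local minima
    (troughs) of the intensity curve inside the middle band, the one
    with the smallest integrated intensity."""
    minima, _ = find_peaks(-np.asarray(curve))     # troughs = peaks of -curve
    lo, hi = int(len(curve) * middle_frac[0]), int(len(curve) * middle_frac[1])
    candidates = [m for m in minima if lo <= m < hi]
    return min(candidates, key=lambda m: curve[m]) if candidates else None
```

A lateral cut at the returned index then splits the image into the tooth portion and the gum portion, as described above.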
Fig. 7 is a schematic diagram of cutting line acquisition according to an embodiment of the present application. As shown in fig. 7, the method of the embodiment of the present application obtains the cutting lines of the sample images of the filled tooth, the root-canal-positive tooth and the normal tooth.
Step 3: a region of interest is acquired.
In this step, the means of the 10×10 pixel blocks at the upper-left, upper-right, lower-left and lower-right corners of the uncut sample image are obtained, and whether the cut image shows an upper tooth or a lower tooth is determined from these means. Specifically, the comparison of the block means decides the case: for an upper tooth the lower-left and lower-right block means are small, whereas for a lower tooth the upper-left and upper-right block means are small. When the region of interest is acquired, the upper half of the image is taken for an upper tooth, and the lower half for a lower tooth. After cutting, the samples may be resized to a uniform size. The block size and region-of-interest size used here are illustrative; the embodiment of the present application is not limited thereto.
For example, as shown in fig. 8, fig. 8 is a schematic diagram of region-of-interest images according to an embodiment of the present application. From fig. 8 it is clear that the region-of-interest image of a root-canal-positive tooth differs from those of a filled tooth and a normal tooth; that is, the method provided by the embodiment of the application can accurately extract the features of root-canal-positive images while avoiding interference from other dental conditions.
For example, after the region of interest of the sample image is acquired, a root canal treatment history detection model may be determined from the region of interest of the acquired sample image and tested. Referring to fig. 9, fig. 9 is a flowchart of a training and testing method for a dental root canal treatment history detection model according to an embodiment of the present application.
According to fig. 9, after a plurality of region-of-interest images are acquired, a sample set is constructed, and the sample set from which regions of interest have been extracted is divided into a training set and a test set. To improve the accuracy of the resulting root canal treatment history detection model, 80% of the images in the sample set may be used as the training set and 20% as the test set.
When the training set is used for model training, feature points are extracted with the scale-invariant feature transform algorithm, the representative features of each image are obtained, and the features of all images are pooled to form a dictionary. Feature vectors are then generated with a k-means clustering algorithm: specifically, the feature points of the training-set images are clustered into 50 classes to generate the bag of words and the clustering result; the bag of words is stored, and the feature vector of each training image is generated from the clustering result. Finally, training is performed with a support vector machine classifier: the feature vectors converted from the training set are input into the classifier for training, yielding the root canal treatment history detection model, which is saved.
To verify the accuracy of the resulting root canal treatment history detection model, it can be tested with the test set shown in fig. 9. When testing, feature points are extracted with the scale-invariant feature transform algorithm in the same way as during model generation, which is not repeated here. Feature vectors are then generated with the k-means clustering algorithm: specifically, the bag of words generated from the training set is loaded, the features of the test-set images are clustered on that basis, and the feature vector of each test image is generated from the clustering result. The support vector machine classifier test is then performed: the trained model is loaded and the feature vectors converted from the test set are input into it, yielding the accuracy of the model, from which it is determined whether model training needs to be performed again.
It will be appreciated that the method of model training and testing is the same as that described in the embodiment shown in fig. 2 above, and reference may be made to the steps described in fig. 2, which will not be described in detail in connection with the embodiment of the present application.
In summary, according to the technical scheme provided by the embodiment of the application, the key feature region, namely the region of interest, is first extracted, and the root canal treatment history detection model is then trained by the algorithm, so that the model achieves higher accuracy when performing root canal treatment history detection.
Fig. 10 is a flowchart of an image processing method according to an embodiment of the present application. The image processing method may be performed by software and/or hardware in an electronic device. For example, as shown in fig. 10, the image processing method may include:
S1001, acquiring an image to be processed, wherein the image to be processed is an image of a single tooth and a gum corresponding to the single tooth.
The acquired image to be processed is, for example, an image of the user's teeth acquired by a doctor through an acquisition device. The embodiment of the application does not limit the specific acquisition mode.
S1002, integrating the pixel intensity of the image to be processed to obtain the position of the gum line corresponding to the gum.
It can be understood that the method for integrating the pixel intensities of the image to be processed to obtain the position of the gum line corresponding to the gum is the same as the method for determining the position of the gum line corresponding to the gum of the sample image to be processed in the above embodiment, and the embodiments of the present application will not be repeated.
S1003, extracting a region of interest from the image to be processed according to the position of the gum line, and obtaining a second image.
For example, in the embodiment of the present application, the method for obtaining the second image is the same as the method for obtaining the target image in the above embodiment, and may be described in the above embodiment, which is not repeated herein.
S1004, inputting the second image into a root canal treatment history detection model to detect the root canal treatment history, and obtaining relevant information of the root canal, wherein the relevant information of the root canal comprises the existing root canal treatment history and the non-existing root canal treatment history.
For example, when the second image is input to the root canal treatment history detection model to detect the root canal treatment history, the feature vector corresponding to the second image may be extracted, and the feature vector is input to the classifier to detect, so as to obtain the relevant information of the root canal. The method for extracting the feature vector corresponding to the second image may refer to the method for extracting the feature vector corresponding to the target image in the above embodiment, which is not described in detail in the embodiments of the present application. The root canal treatment history detection model was trained from the above examples.
It will be appreciated that in obtaining information about the root canal, a history of the presence of a root canal treatment of the tooth and corresponding gum in the image to be processed may be obtained directly. In outputting the relevant information of the root canal, the relevant information of the existence of the root canal treatment history or the absence of the root canal treatment history may be directly output, or the marking information may be output, for example, output 0 indicates the absence of the root canal treatment history and output 1 indicates the existence of the root canal treatment history. The embodiment of the application does not limit the specific output mode.
Fig. 11 is a schematic structural diagram of a dental root canal treatment history detection model training device 110 according to an embodiment of the present application, for example, referring to fig. 11, the dental root canal treatment history detection model training device 110 may include:
The obtaining module 1101 is configured to obtain a plurality of sample images to be processed, where the sample images to be processed are images of a single tooth and gums corresponding to the single tooth.
And the processing module 1102 is configured to integrate the pixel intensities of the sample images to be processed for each sample image to be processed, so as to obtain the position of the gum line corresponding to the gum.
The processing module 1102 is further configured to perform region of interest extraction processing on the sample image to be processed according to the position of the gum line, so as to obtain a target image.
The training module 1103 is configured to extract feature vectors corresponding to the multiple target images, generate a feature vector sample set, and input the feature vector sample set to a support vector machine classifier for training, so as to obtain a root canal treatment history detection model.
The processing module 1102 is specifically configured to integrate the pixel intensities of each row of the sample image to be processed in the direction parallel to the gum line to obtain the pixel intensity sum of each row; and to determine, among the pixel intensity sums of all rows, the position of the straight line parallel to the gum line direction corresponding to the minimum pixel intensity sum within the middle area of the sample image to be processed as the position of the gum line.
The sample image to be processed is a rectangular image; the processing module 1102 is specifically configured to obtain pixel blocks with preset sizes at four corner positions of a sample image to be processed; calculating the pixel mean value of each pixel block, and arranging the pixel mean values corresponding to the four corner positions in order from small to large to form a pixel mean value sequential sequence; and extracting the region of interest of the sample image to be processed according to the pixel mean value sequence and the gum line to obtain a target image.
The processing module 1102 is specifically configured to determine a position of a pixel block corresponding to a pixel mean value in the third position and the fourth position in the pixel mean value sequence as a position where a gum is located; cutting a sample image to be processed by taking a gum line as a dividing line, and determining an area between the gum line and the positions of two pixel blocks in the sample image to be processed as an area of interest; the corresponding image of interest is determined as the target image.
The training module 1103 is specifically configured to extract feature points of multiple target images through a scale-invariant feature transform algorithm, so as to obtain features corresponding to the target images; performing cluster analysis on the features by using a cluster algorithm to obtain a first cluster result; and extracting feature vectors corresponding to the multiple target images according to the first clustering result.
The root canal treatment history detection model training device further comprises a testing module 1104. The testing module 1104 is configured to take a preset number of target images as first target images and extract their feature points with the scale-invariant feature transform algorithm to obtain the features of each first target image; to perform cluster analysis on the features with the clustering algorithm to obtain a second clustering result; to extract the feature vectors of the first target images according to the second clustering result and generate a feature vector test set; and to input the feature vector test set into the root canal treatment history detection model to test it, obtaining the accuracy of the model on root canal treatment history detection.
Fig. 12 is a schematic structural diagram of an image processing apparatus 120 according to an embodiment of the present application, for example, referring to fig. 12, the image processing apparatus 120 may include:
the acquisition module 1201 is configured to acquire an image to be processed, where the image to be processed is an image of a single tooth and a gum corresponding to the single tooth.
The determining module 1202 is configured to integrate the pixel intensities of the image to be processed to obtain a position of a gum line corresponding to the gum.
The determining module 1202 is further configured to perform a region of interest extraction process on the image to be processed according to the position of the gum line, so as to obtain a second image.
The output module 1203 is configured to extract the feature vector corresponding to the second image, and input it into a root canal treatment history detection model to obtain relevant information of the root canal, where the relevant information indicates whether a root canal treatment history exists, and the root canal treatment history detection model is obtained by the root canal treatment history detection model training method described above.
The determining module 1202 is specifically configured to integrate the pixel intensities of each row of the image to be processed in the direction parallel to the gum line to obtain a pixel intensity sum for each row; and determine, among the pixel intensity sums of all rows of the image to be processed, the position of the straight line parallel to the gum line direction with the minimum pixel intensity sum in the middle region as the position of the gum line.
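This row-wise integration can be sketched as follows, assuming a 2-D grayscale NumPy array whose rows run parallel to the gum line (the function name and the middle-third search window are illustrative, not from the patent):

```python
import numpy as np

def find_gum_line(image: np.ndarray) -> int:
    """Return the index of the row, within the middle region of the
    image, whose integrated (summed) pixel intensity is minimal --
    taken as the position of the gum line."""
    row_sums = image.sum(axis=1)           # pixel intensity sum of each row
    height = image.shape[0]
    lo, hi = height // 3, 2 * height // 3  # restrict to the middle region
    return lo + int(np.argmin(row_sums[lo:hi]))
```

The dark shadow at the gingival margin is what makes the minimum-intensity row a plausible proxy for the gum line in an intraoral photograph.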
The image to be processed is a rectangular image. The determining module 1202 is specifically configured to obtain pixel blocks of a preset size at the four corner positions of the image to be processed; calculate the pixel mean value of each pixel block and arrange the pixel mean values corresponding to the four corner positions in ascending order to form a pixel mean value sequence; and extract the region of interest of the image to be processed according to the pixel mean value sequence and the gum line to obtain the second image.
The determining module 1202 is specifically configured to determine the positions of the pixel blocks whose pixel mean values occupy the third and fourth positions in the pixel mean value sequence; cut the image to be processed along the gum line as a dividing line, and determine the region between the gum line and the positions of the two pixel blocks in the image to be processed as the region of interest; and determine the image corresponding to the region of interest as the second image.
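The corner-block ranking and cropping might look like the following sketch (assumptions: a grayscale NumPy image, an 8-pixel block size, and that the two blocks with the largest means — the third and fourth positions in ascending order — lie on the gum side; all names are hypothetical):

```python
import numpy as np

def extract_roi(image: np.ndarray, gum_row: int, block: int = 8) -> np.ndarray:
    """Crop the region between the gum line (row index `gum_row`) and
    the side of the image holding the two brightest corner blocks."""
    h, w = image.shape
    corners = {
        "top_left": image[:block, :block],
        "top_right": image[:block, w - block:],
        "bottom_left": image[h - block:, :block],
        "bottom_right": image[h - block:, w - block:],
    }
    # ascending order of pixel mean; the 3rd and 4th entries locate the gum
    ranked = sorted(corners, key=lambda name: corners[name].mean())
    gum_side = ranked[2:]
    if all(name.startswith("top") for name in gum_side):
        return image[:gum_row, :]   # gum lies above the gum line
    return image[gum_row:, :]       # gum lies below the gum line
```

A real implementation would also handle the case where the two largest-mean corners fall on different sides of the image.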
The output module 1203 is specifically configured to extract feature points of the second image by using the scale-invariant feature transform algorithm to obtain features corresponding to the second image; perform cluster analysis on the features by using a clustering algorithm to obtain a third clustering result; and extract the feature vector corresponding to the second image according to the third clustering result.
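The SIFT-plus-clustering step amounts to a bag-of-visual-words encoding. A sketch in plain NumPy (the descriptors themselves are assumed to come from a SIFT implementation such as OpenCV's; the Lloyd's k-means loop and the vocabulary size are illustrative):

```python
import numpy as np

def build_vocabulary(descriptors: np.ndarray, k: int, iters: int = 20,
                     seed: int = 0) -> np.ndarray:
    """Cluster SIFT descriptors into k visual words with plain
    Lloyd's k-means; returns the k cluster centres."""
    rng = np.random.default_rng(seed)
    centres = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(descriptors[:, None] - centres[None, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = descriptors[labels == j].mean(axis=0)
    return centres

def feature_vector(descriptors: np.ndarray, centres: np.ndarray) -> np.ndarray:
    """L1-normalised histogram of visual-word assignments, used as
    the image's feature vector."""
    dists = np.linalg.norm(descriptors[:, None] - centres[None, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(centres))
    return hist / max(hist.sum(), 1)
```

The vocabulary would be built once from all training descriptors (the "clustering result"), after which every image — training or test — is encoded against the same centres.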
The embodiments of the application provide a dental root canal treatment history detection model training device and an image processing device, which can execute the technical solutions of any of the foregoing embodiments. Their implementation principles and beneficial effects are similar to those of the foregoing embodiments and are not repeated here.
Fig. 13 is a schematic structural diagram of an electronic device according to the present application. As shown in fig. 13, the electronic device 130 may include: at least one processor 1301 and a memory 1302.
A memory 1302 for storing programs. In particular, the program may include program code including computer-operating instructions.
The memory 1302 may comprise high-speed RAM, and may further comprise non-volatile memory, such as at least one disk storage.
Processor 1301 is configured to execute the computer-executable instructions stored in the memory 1302 to implement the root canal treatment history detection model training method or the image processing method described in the foregoing method embodiments. Processor 1301 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application. Specifically, when implementing the root canal treatment history detection model training method described in the foregoing method embodiments, the electronic device may be, for example, a device with a processing function such as a terminal or a server; the same applies when implementing the image processing method described in the foregoing method embodiments.
Optionally, the electronic device 1300 may also include a communication interface 1303. In a specific implementation, if the communication interface 1303, the memory 1302, and the processor 1301 are implemented independently, they may be connected to each other through a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc.; drawing it as a single line does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the communication interface 1303, the memory 1302, and the processor 1301 are integrated on a chip, the communication interface 1303, the memory 1302, and the processor 1301 may complete communication through internal interfaces.
The present application also provides a computer-readable storage medium, which may include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium in which program code may be stored. In particular, the computer-readable storage medium stores program instructions for the methods in the above embodiments.
The present application also provides a program product comprising execution instructions stored in a readable storage medium. The at least one processor of the electronic device may read the execution instructions from the readable storage medium, and execution of the execution instructions by the at least one processor causes the electronic device to implement the root canal treatment history detection model training method or the image processing method provided by the various embodiments described above.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (7)

1. A method for training a dental root canal therapy history detection model, comprising:
Acquiring a plurality of sample images to be processed, wherein the sample images to be processed are images of a single tooth and gums corresponding to the single tooth;
for each sample image to be processed, carrying out integral processing on the pixel intensity of each row on the sample image to be processed in the direction parallel to the gum line to obtain the pixel intensity sum of each row;
Determining, among the pixel intensity sums corresponding to all rows of the sample image to be processed, the position of the straight line parallel to the gum line direction with the minimum pixel intensity sum in the middle region of the sample image to be processed as the position of the gum line; wherein the sample image to be processed is a rectangular image;
acquiring pixel blocks with preset sizes at four corner positions of the sample image to be processed;
calculating the pixel mean value of each pixel block, and arranging the pixel mean values corresponding to the four corner positions in ascending order to form a pixel mean value sequence;
Determining the positions of pixel blocks corresponding to the pixel mean values in the third position and the fourth position in the pixel mean value sequence as the positions of the gingiva;
Cutting the sample image to be processed by taking the gum line as a dividing line, and determining a region between the gum line and the positions of the two pixel blocks in the sample image to be processed as a region of interest;
Determining the image corresponding to the region of interest as a target image;
extracting feature vectors corresponding to a plurality of target images, generating a feature vector sample set, and inputting the feature vector sample set into a support vector machine classifier for training to obtain a dental root canal treatment history detection model; the root canal treatment history detection model is used for detecting the root canal treatment history and acquiring relevant information of the root canal.
2. The method according to claim 1, wherein extracting feature vectors corresponding to a plurality of the target images includes:
Extracting feature points of the multiple target images through a scale-invariant feature transformation algorithm to obtain features corresponding to the target images;
Performing cluster analysis on the features by using a clustering algorithm to obtain a first clustering result;
And extracting feature vectors corresponding to a plurality of target images according to the first clustering result.
3. The method according to claim 2, wherein the method further comprises:
taking a preset number of target images as first target images through a scale-invariant feature transformation algorithm, and extracting feature points of the first target images to obtain features corresponding to the first target images;
Performing cluster analysis on the features by using a clustering algorithm to obtain a second clustering result;
extracting feature vectors corresponding to the first target images according to the second clustering result to generate a feature vector test set;
and inputting the feature vector test set into the root canal treatment history detection model and testing it, to obtain the accuracy of the root canal treatment history detection model in detecting the root canal treatment history.
4. An image processing method, comprising:
collecting an image to be processed, wherein the image to be processed is an image of a single tooth and a gum corresponding to the single tooth;
integrating the pixel intensity of the image to be processed to obtain the position of the gum line corresponding to the gum;
Extracting the region of interest from the image to be processed according to the position of the gum line to obtain a second image; the second image is obtained according to the method of claim 1;
Inputting the second image into a root canal treatment history detection model to detect the root canal treatment history and obtain relevant information of the root canal, wherein the relevant information indicates whether a root canal treatment history exists, and the root canal treatment history detection model is trained according to the method of any one of claims 1-3.
5. A dental root canal treatment history detection model training device, comprising:
The device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a plurality of sample images to be processed, and the sample images to be processed are images of single teeth and gums corresponding to the single teeth;
The processing module is used for integrating, for each sample image to be processed, the pixel intensity of each row of the sample image to be processed in the direction parallel to the gum line to obtain the pixel intensity sum of each row; and determining, among the pixel intensity sums corresponding to all rows of the sample image to be processed, the position of the straight line parallel to the gum line direction with the minimum pixel intensity sum in the middle region as the position of the gum line; wherein the sample image to be processed is a rectangular image;
The processing module is further used for obtaining pixel blocks of a preset size at the four corner positions of the sample image to be processed; calculating the pixel mean value of each pixel block, and arranging the pixel mean values corresponding to the four corner positions in ascending order to form a pixel mean value sequence; determining the positions of the pixel blocks whose pixel mean values occupy the third and fourth positions in the pixel mean value sequence as the positions where the gingiva is located; cutting the sample image to be processed along the gum line as a dividing line, and determining the region between the gum line and the positions of the two pixel blocks in the sample image to be processed as the region of interest; and determining the image corresponding to the region of interest as the target image;
And the training module is used for extracting the feature vectors corresponding to the target images, generating a feature vector sample set, and inputting the feature vector sample set into a support vector machine classifier for training to obtain a root canal treatment history detection model.
6. An image processing apparatus, comprising:
The device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an image to be processed, and the image to be processed is an image of a single tooth and a gum corresponding to the single tooth;
the determining module is used for carrying out integral processing on the pixel intensity of the image to be processed to obtain the position of the gum line corresponding to the gum;
The determining module is further used for extracting the region of interest from the image to be processed according to the position of the gum line to obtain a second image; the second image is obtained according to the method of claim 1;
The output module is used for extracting the feature vector corresponding to the second image, and inputting it into a root canal treatment history detection model to obtain relevant information of the root canal, wherein the relevant information indicates whether a root canal treatment history exists, and the root canal treatment history detection model is trained according to the method of any one of claims 1-3.
7. An electronic device, comprising: at least one processor, memory;
the memory stores computer-executable instructions;
the at least one processor executing computer-executable instructions stored in the memory to cause the electronic device to perform the method of any one of claims 1-4.
CN202111021968.9A 2021-09-01 2021-09-01 Dental root canal treatment history detection model training method, image processing method and device Active CN113792773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111021968.9A CN113792773B (en) 2021-09-01 2021-09-01 Dental root canal treatment history detection model training method, image processing method and device


Publications (2)

Publication Number Publication Date
CN113792773A CN113792773A (en) 2021-12-14
CN113792773B true CN113792773B (en) 2024-05-14

Family

ID=79182592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111021968.9A Active CN113792773B (en) 2021-09-01 2021-09-01 Dental root canal treatment history detection model training method, image processing method and device

Country Status (1)

Country Link
CN (1) CN113792773B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109363789A (en) * 2018-10-19 2019-02-22 上海交通大学 For predicting the method and data collection system of sono-explorer
CN110084237A (en) * 2019-05-09 2019-08-02 北京化工大学 Detection model construction method, detection method and the device of Lung neoplasm
CN111462333A (en) * 2014-09-08 2020-07-28 3M创新有限公司 Method for aligning intraoral digital 3D models

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10467815B2 (en) * 2016-12-16 2019-11-05 Align Technology, Inc. Augmented reality planning and viewing of dental treatment outcomes
US11464467B2 (en) * 2018-10-30 2022-10-11 Dgnct Llc Automated tooth localization, enumeration, and diagnostic system and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Classification of lung cancer CT images based on image processing and support vector machines; Gong Ping; Wang Aming; China Medical Engineering (Issue 12); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant