CN113822904B - Image labeling device, method and readable storage medium - Google Patents

Info

Publication number
CN113822904B
Authority
CN
China
Prior art keywords
image
training
standard
classification
images
Prior art date
Legal status
Active
Application number
CN202111034189.2A
Other languages
Chinese (zh)
Other versions
CN113822904A (en)
Inventor
马斌 (Ma Bin)
阎淑敏 (Yan Shumin)
郑旭 (Zheng Xu)
王冠 (Wang Guan)
牛牧 (Niu Mu)
席梦 (Xi Meng)
Current Assignee
Shanghai Alemu Health Technology Co., Ltd.
Original Assignee
Shanghai Alemu Health Technology Co., Ltd.
Application filed by Shanghai Alemu Health Technology Co., Ltd.
Priority to CN202111034189.2A
Publication of CN113822904A
Application granted
Publication of CN113822904B


Classifications

    • G06T 7/12 Image analysis; Segmentation; Edge-based segmentation
    • G06F 18/2415 Pattern recognition; Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N 3/045 Neural networks; Combinations of networks
    • G06N 3/084 Neural network learning methods; Backpropagation, e.g. using gradient descent
    • G06T 7/13 Image analysis; Segmentation; Edge detection
    • G06T 7/136 Image analysis; Segmentation; Edge detection involving thresholding
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images, e.g. editing
    • G06T 2207/10116 Image acquisition modality: X-ray image
    • G06T 2207/30036 Subject of image: Biomedical; Dental; Teeth
    • G06T 2207/30204 Subject of image: Marker

Abstract

The invention provides an image labeling device, an image labeling method and a readable storage medium. The invention reduces the visual fatigue of doctors and has the advantages of easy identification and high accuracy.

Description

Image labeling device, method and readable storage medium
Technical Field
The invention relates to the field of computer image processing, in particular to an image labeling device, an image labeling method and a readable storage medium for segmenting and labeling tooth information on a case image.
Background
The tooth curved-breaking X-ray film, short for tooth curved-surface tomographic X-ray film, is generated clinically by taking a low-dose curved-surface tomographic X-ray of a patient's oral cavity; lesions such as tooth loss and dental caries can be judged by analyzing it.
Clinically, tooth curved-breaking X-ray films have long been analyzed by direct naked-eye observation. However, the arrangement of teeth is complicated, the contours of different teeth are hard to distinguish in the black-and-white image, and mislabeling and missed labels easily occur, which greatly increases the difficulty of analyzing and judging tooth lesions.
Applying computer image instance segmentation technology to the auxiliary diagnosis of dental images has attracted growing attention. However, because the dental structures in the oral cavity are complex, directly applying traditional image instance segmentation technology to the segmentation and labeling of tooth curved-breaking X-ray films yields very limited results, and it is difficult to segment dental images automatically with accuracy and efficiency.
Based on the above problems in the oral medical field, there is a need for a device, method and readable storage medium that can quickly and accurately perform image segmentation, labeling, etc. on a tooth curved X-ray film.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provide a device, a method and a readable storage medium which can quickly and accurately segment and mark diseased teeth from a tooth curved-breaking X-ray film.
The technical problems of the invention can be solved by the following technical scheme:
One aspect of the present invention provides an image labeling apparatus for segmenting and labeling dental information on a case image, the case image being a dental curved-broken X-ray image of a case, the apparatus comprising:
a training set determination unit configured to determine a training data set based on a plurality of training original images, the training original images being dental curve-broken X-ray images of cases for training;
The training unit is configured to train the image classification model through the training data set to obtain a trained image classification model;
the segmentation and labeling unit is configured to segment and label the case images by using the trained image classification model and generate auxiliary diagnosis images, wherein the auxiliary diagnosis images are dental curved-broken X-ray film images of the cases containing labeling and coloring information;
and a display unit configured to display the plurality of training original images, the training data set, the case image, and the auxiliary diagnostic image.
Further, the training set determining unit includes:
the image segmentation module is configured to manually trace the plurality of training original images and determine tooth contour information of the plurality of training original images;
the classification calibration module is configured to generate classification label data files of the plurality of training original images according to tooth profile information of the plurality of training original images, wherein the classification label data files comprise profile information and lesion information corresponding to N image classes, N is the number of the image classes which are determined in advance, and the number of the image classes is determined according to the positions of teeth and tissue structures, the lesion condition and the position information of the background and the foreground;
And the training set output module is configured to output a training data set, wherein the training data set comprises the plurality of training original images and classification label data files of the plurality of training original images.
Further, the training unit includes:
an image preprocessing module configured to determine a plurality of batches of training standard image sets from the training data set, each batch of the training standard image sets including K training standard images of M×M pixels, where K is the number of images in each batch of the training standard image set and M×M is the number of pixels of each training standard image;
a classification mask determining module configured to determine, based on the training data set, a classification mask C_ij, i = 1…M, j = 1…M, corresponding to each training standard image, wherein C_ij is the class label of the (i, j)-th pixel of said each training standard image;
a loss function weight mask determining module configured to determine, according to the classification mask, a loss function weight mask w_ij, i = 1…M, j = 1…M, corresponding to each training standard image, wherein w_ij is the loss function weight of the (i, j)-th pixel of each of the training standard images, the w_ij being determined according to the inverse of the ratio of the area included in the image category to which the (i, j)-th pixel belongs to the area included in the background;
a model training module configured to sequentially input the training standard image sets of the plurality of batches into the image classification model for iterative training, and to stop training after the average weighted loss function is smaller than a preset value or all batches of training standard image sets have been trained, so as to obtain the trained image classification model.
Further, the image classification model is a multi-layer deep learning model based on a convolutional network; the output result of the image classification model is N classification probability matrices, which together form an M×M×N matrix representing the probability that each pixel of the training standard image belongs to each of the N classes.
Further, the average weighted loss function is determined by the following formula:
E_wright = (1/K) · Σ_(k=1…K) E_k,  where  E_k = Σ_(i=1…M) Σ_(j=1…M) w_ij · E_ij
wherein E_wright is the average weighted loss function, E_ij is the standard loss function of the (i, j)-th pixel of each training standard image, computed from the output result of the image classification model and the classification mask C_ij, and E_k is the weighted loss function of the output result of each training standard image.
Further, sequentially inputting the training standard image sets of the plurality of batches into the image classification model for iterative training comprises calculating a back-propagation gradient by using a back-propagation algorithm, updating the weight parameters of each layer of the image classification model, and inputting a new batch of training standard image sets for forward propagation;
calculating the back-propagation gradient by using the back-propagation algorithm and updating the weight parameters of each layer of the image classification model is realized by the following steps:
when the training batch is not equal to an integer multiple of I, the operations of clearing the back-propagation gradient and updating the weight parameters of each layer of the image classification model are not performed, wherein I is the preset number of batches by which clearing of the back-propagation gradient is delayed;
when the training batch is equal to an integer multiple of I, the back-propagation gradient is cleared, the weight parameters of each layer are updated, and the average weighted loss function E_wright is divided by I.
Further, the segmentation labeling unit includes:
an image processing module configured to obtain a classification probability matrix of the case image by:
processing the case images and converting the case images into M×M standard case images,
inputting the standard case images into the trained image classification model to obtain a classification probability matrix of the standard case images;
A category determination module configured to determine a category determination threshold according to a classification probability matrix of the standard case image, and determine a classification label of each pixel of the standard case image according to the category determination threshold;
a contour segmentation module configured to contour segment the standard case image by:
removing the pixel area corresponding to the background category of the standard case image,
drawing a contour of each tooth of the standard case image based on the classification label of each pixel of the standard case image,
and coloring and marking the interior of each tooth of the standard case image based on the classification mark of each pixel of the standard case image, and generating the auxiliary diagnosis image.
Further, the category decision threshold is determined by a least squares method.
Another aspect of the present invention provides a method of training and of segmenting and labeling dental information on a case image using the image labeling apparatus, the case image being a dental curved-breaking X-ray image of a case, the method comprising the steps of:
S1, determining a training set, namely determining a training data set based on a plurality of training original images, wherein the training original images are dental curve-breaking X-ray film images of cases for training;
S2, training the image classification model through the training data set to obtain a trained image classification model;
S3, segmenting and labeling, namely segmenting and labeling the case images by using the trained image classification model and generating an auxiliary diagnosis image, which is a tooth curved-breaking X-ray film image of the case containing labeling and coloring information;
S4, displaying the plurality of training original images, the training data set, the case images and the auxiliary diagnostic images.
Further, the step S1 of determining a training set specifically includes the following steps:
S101, image segmentation, namely manually tracing the plurality of training original images and determining tooth contour information of the plurality of training original images;
S102, category calibration, namely generating classification label data files of the plurality of training original images according to the tooth contour information of the plurality of training original images, wherein the classification label data files comprise contour information and lesion information corresponding to N image categories, N being the predetermined number of image categories, which is determined according to the positions of the teeth and tissue structures, the lesion conditions, and the position information of the background and foreground;
S103, outputting a training set, namely outputting a plurality of training original images and classification label data files of the plurality of training original images.
Further, the S2 training specifically includes the following steps:
S201, image preprocessing, namely determining a plurality of batches of training standard image sets according to the training data set, wherein each batch of the training standard image set comprises K training standard images of M×M pixels, K being the number of images in each batch of the training standard image set and M×M being the number of pixels of each training standard image;
S202, determining an image classification mask, namely determining, according to the training data set, a classification mask C_ij, i = 1…M, j = 1…M, corresponding to each training standard image, wherein C_ij is the class label of the (i, j)-th pixel of said each training standard image;
S203, determining a loss function weight mask, namely determining, according to the classification mask, a loss function weight mask w_ij, i = 1…M, j = 1…M, corresponding to each training standard image, wherein w_ij is the loss function weight of the (i, j)-th pixel of each of the training standard images, the w_ij being determined according to the inverse of the ratio of the area included in the image category to which the (i, j)-th pixel belongs to the area included in the background;
S204, model training, namely sequentially inputting the training standard image sets of the plurality of batches into the image classification model for iterative training, and stopping training after the average weighted loss function is smaller than a preset value or all batches of training standard image sets have been trained, so as to obtain a trained image classification model.
Further, the image classification model is a multi-layer deep learning model based on a convolutional network; the output result of the image classification model is N classification probability matrices, which together form an M×M×N matrix representing the probability that each pixel of the training standard image belongs to each of the N classes.
Further, the average weighted loss function is determined by the following formula:
E_wright = (1/K) · Σ_(k=1…K) E_k,  where  E_k = Σ_(i=1…M) Σ_(j=1…M) w_ij · E_ij
wherein E_wright is the average weighted loss function, E_ij is the standard loss function of the (i, j)-th pixel of each training standard image, computed from the output result of the image classification model and the classification mask C_ij, and E_k is the weighted loss function of the output result of each training standard image.
Further, sequentially inputting the training standard image sets of the plurality of batches into the image classification model for iterative training comprises calculating a back-propagation gradient by using a back-propagation algorithm, updating the weight parameters of each layer of the image classification model, and inputting a new batch of training standard image sets for forward propagation;
calculating the back-propagation gradient by using the back-propagation algorithm and updating the weight parameters of each layer of the image classification model is realized by the following steps:
when the training batch is not equal to an integer multiple of I, the operations of clearing the back-propagation gradient and updating the weight parameters of each layer of the image classification model are not performed, wherein I is the number of batches by which clearing of the back-propagation gradient is delayed;
when the training batch is equal to an integer multiple of I, the back-propagation gradient is cleared and the average weighted loss function E_wright is divided by I.
Further, the S3 segmentation and labeling specifically includes the following steps:
S301, processing the case image, namely converting the case image into an M×M standard case image and inputting the standard case image into the trained image classification model to obtain a classification probability matrix of the standard case image;
S302, class determination, namely determining a class determination threshold according to the classification probability matrix of the standard case image and determining the classification label of each pixel of the standard case image according to the class determination threshold;
S303, contour segmentation, namely performing contour segmentation on the standard case image by the following steps:
removing the pixel area corresponding to the background category of the standard case image,
Drawing a contour of each tooth of the standard case image based on the classification label of each pixel of the standard case image,
and coloring and marking the interior of each tooth of the standard case image based on the classification mark of each pixel of the standard case image, and generating the auxiliary diagnosis image.
Further, the category decision threshold is determined by a least squares method.
Yet another aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described method of training and of segmenting and labeling case images using the above-described image labeling device.
Compared with the prior art, the invention has the following beneficial effects:
1. The method can color different areas of the tooth curved-breaking X-ray film, making diseased teeth conspicuous and marking them directly on the image, which is convenient for doctors to identify and reduces the interference factors dentists face when reading the film.
2. System training is performed based on a residual network; by superimposing gradient information, the number of tooth curved-breaking X-ray film training batches can be increased without increasing the video memory of the computer's GPU, so that the system converges faster and more stably.
3. In the semantic segmentation network, the algorithm of the loss function is optimized, which achieves an effect of data enhancement and improves the accuracy of image segmentation.
4. The deep learning model output and saved after training optimization can exclude non-tooth background information, so there is much less interference noise; during semantic segmentation the segmented tooth polygons are more accurate, and accuracy can be improved by 9.3%.
Drawings
FIG. 1 is a schematic diagram of an overall architecture of an image labeling apparatus according to an embodiment of the invention;
FIG. 2 is a training original image with completed manual tracing according to an embodiment of the present invention;
FIG. 3 shows the label information of a training original image according to an embodiment of the present invention;
FIG. 4 shows the tooth contour segmentation result of a specific case image according to the present embodiment;
FIG. 5 shows the result of tooth contour segmentation based on a prior-art semantic segmentation method;
FIG. 6 is an auxiliary diagnostic image generated by coloring and labeling a specific case image according to the present embodiment;
FIG. 7 is a schematic diagram of the steps of the present embodiment;
FIG. 8 is a schematic diagram of the specific steps of determining a training set in S1 of the embodiment;
FIG. 9 is a schematic diagram of the specific steps of training in S2 of the embodiment;
FIG. 10 is a schematic diagram of the specific steps of segmentation labeling in S3 of the embodiment.
Detailed Description
The present invention will be further described below based on preferred embodiments with reference to the accompanying drawings.
In addition, various components on the drawings have been enlarged (thick) or reduced (thin) for ease of understanding, but this is not intended to limit the scope of the invention.
The singular forms also include the plural and vice versa.
In the description of the embodiments of the present invention, it should be noted that terms such as "upper", "lower", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings, or those in which the product of the invention is conventionally placed in use; they are used merely for convenience and simplicity of description and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the invention. Furthermore, terms such as "first" and "second" are used herein only to distinguish between different elements and do not imply an order of manufacture or any relative importance; their meanings may also differ between the detailed description and the claims.
The embodiment of the invention provides an image labeling device which is used for segmenting and labeling tooth information on a case image, wherein the case image is a tooth curved-broken X-ray film image of a case, and fig. 1 is a system architecture diagram of the image labeling device.
As shown in fig. 1, an image labeling apparatus according to an embodiment of the present invention includes:
a training set determination unit configured to determine a training data set based on a plurality of training original images, the training original images being dental curve-broken X-ray images of cases for training;
the training unit is configured to train the image classification model through the training data set to obtain a trained image classification model;
the segmentation and labeling unit is configured to segment and label the case images by using the trained image classification model and generate auxiliary diagnosis images, wherein the auxiliary diagnosis images are dental curved-broken X-ray film images of the cases containing labeling and coloring information;
and a display unit configured to display the plurality of training original images, the training data set, the case image, and the auxiliary diagnostic image.
Hereinafter, the training set determining unit of the present embodiment will be described in detail, and the training set determining unit includes:
The image segmentation module is configured to manually trace the plurality of training original images and determine tooth contour information of the plurality of training original images.
Specifically, in the embodiment of the invention, an experienced person uses mark-drawing software to manually trace the plurality of training original images in sequence, thereby determining the tooth contour information of the plurality of training original images. The specific operation steps are as follows: observing with the naked eye, dotting along each tooth in the tooth curved-breaking X-ray film so that the crown and root of each tooth are outlined, and connecting the dots of each tooth contour into a closed region; each dot corresponds to an x coordinate and a y coordinate, finally yielding the tooth contour information of the plurality of training original images.
In some embodiments of the present invention, the plurality of training original images may be dental curve X-ray films obtained from a case database system, or may be electronic medical record photographs of patients who are received from an oral hospital, and clinical diagnosis results of the patient case photographs may be used as criteria for data labeling basis; the storage format of the plurality of training original images may be JPG or other standard image formats.
FIG. 2 illustrates a specific training raw image in which artificial tracing has been completed, according to an embodiment of the present invention.
The training set determining unit further includes:
the classification calibration module is configured to generate classification label data files of the plurality of training original images according to the original tooth profile information, wherein the classification label data files comprise profile information and lesion information corresponding to N image classes, N is the number of the image classes which are determined in advance, and the number of the image classes is determined according to the positions of teeth and tissue structures, the lesion condition and the position information of the background and the foreground.
Specifically, in this embodiment of the present invention, N categories included in the dental curved-breaking X-ray film are predetermined according to the positions and the number of the teeth and the tissue structures of the human body, the typical lesion conditions and the background and foreground position information, and then the manual classification and labeling are performed on the plurality of training original images for completing the manual tracing according to the predetermined N categories, so that the digital labels of the corresponding categories are marked in each tooth outline. Fig. 3 shows label information of a specific training original image according to an embodiment of the present invention, in which the number of predetermined categories is 52. In some embodiments of the invention, the number of N may also take on a larger value depending on the location of the tissue structure, the condition of the lesion, etc.
And after the training original images are manually classified and labeled, a class calibration module generates classification label data files of a plurality of training original images according to the label information of each tooth. In the embodiment of the present application, the classification label data file may be a json format file, where after each training original image is manually classified and labeled, N tooth numbers are stored as keys, and contour data and lesion data of corresponding teeth are located under each key, so as to obtain the classification label data file of the training original image. For example, the tooth numbers such as the tooth number 01 and the tooth number 02 of a specific training original image shown in fig. 3 are stored as keys, and contour data and lesion type data of teeth corresponding to the tooth numbers are set under each key, so that a classification label data file of the training original image is obtained.
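As a rough illustration, such a json classification label data file might look like the sketch below; the field names ("contour", "lesion") and all coordinate values are hypothetical, since the description only states that tooth numbers serve as keys with contour data and lesion data beneath them.

```python
import json

# Hypothetical layout of one training image's classification label data file.
# Tooth numbers serve as keys; under each key sit the traced contour points
# and the lesion information. Field names and values are illustrative only.
label_data = {
    "01": {
        "contour": [[512, 210], [518, 214], [530, 231], [516, 240]],  # (x, y) dots of the closed region
        "lesion": "caries",
    },
    "02": {
        "contour": [[560, 205], [566, 211], [578, 228], [563, 236]],
        "lesion": "none",
    },
}

with open("training_image_0001.json", "w", encoding="utf-8") as f:
    json.dump(label_data, f, ensure_ascii=False, indent=2)
```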
The category calibration module sequentially classifies and labels the plurality of training original images and generates the classification label data files of the plurality of training original images.
The training set determining unit further includes:
and the training set output module is configured to output a training data set, wherein the training data set comprises the plurality of training original images and classification label data files of the plurality of training original images.
Specifically, in the embodiment of the present invention, the training set output module outputs, as the training data set, the plurality of training original images and the classification label file in json format corresponding to each training original image, for training by the training unit.
The training unit of this embodiment is described in detail below, and includes:
an image preprocessing module configured to determine a plurality of batches of training standard image sets from the plurality of training original images of the training data set, each batch of the training standard image set including K training standard images of M×M pixels, where K is the number of images in each batch of the training standard image set and M×M is the number of pixels of each training standard image;
specifically, in this embodiment, the image preprocessing module sequentially performs operations such as twisting, translation, rotation, scaling, and the like on the plurality of training original images, so that the training original images are uniformly adjusted to be m×m training standard images, where M is the number of pixels in the horizontal and vertical directions; in addition, operations such as Gaussian blur and the like can be performed on the training original image so as to enhance the segmentation capability.
In order to improve training speed, the image preprocessing module divides the plurality of training standard images into a plurality of training standard image sets for batches, wherein each training standard image set for batch comprises K training standard images.
In a specific implementation manner of this embodiment, the image preprocessing module uniformly adjusts the plurality of training standard images to 800×800 pixel training standard images, and divides each 24 training standard images into the same training standard image set, so as to finally obtain a plurality of training standard image sets.
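A minimal sketch of this preprocessing and batching step, assuming Pillow and NumPy and reducing the augmentation operations to a plain resize (the helper names are illustrative, not from the patent):

```python
import numpy as np
from PIL import Image

M, K = 800, 24  # pixels per side and images per batch, the values of this embodiment

def to_training_standard(path: str) -> np.ndarray:
    """Convert one training original image into an M x M grayscale array.
    Twisting, translation, rotation, scaling and Gaussian blur would be
    applied here as well; only the resize is shown."""
    img = Image.open(path).convert("L").resize((M, M))
    return np.asarray(img, dtype=np.float32) / 255.0

def make_batches(paths: list[str]) -> list[np.ndarray]:
    """Group the standardized images into training standard image sets of K images."""
    imgs = [to_training_standard(p) for p in paths]
    return [np.stack(imgs[i:i + K]) for i in range(0, len(imgs), K)]
```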
The training unit further includes:
a classification mask determining module configured to determine, based on the training data set, a classification mask C_ij, i = 1…M, j = 1…M, corresponding to each training standard image, wherein C_ij is the class label of the (i, j)-th pixel of each training standard image.
Specifically, each classification mask corresponds to one training standard image and has the same size as the training standard image, in order to store the class label to which each pixel of the training standard image belongs; that is, the classification mask C_ij, i = 1…M, j = 1…M, is an M×M matrix, and each matrix element C_ij represents the class label to which the (i, j)-th pixel of the training standard image belongs.
In an embodiment of the application, the classification mask determination module may determine the classification mask for each training standard image by:
First, an M×M matrix with all element values 0 is generated as the initial classification mask;
second, the classification label data file corresponding to the training standard image is read;
third, the coordinate information of the closed region corresponding to tooth 01 is acquired from the classification label data file;
fourth, the same twisting, translation, rotation, scaling and similar operations as in the generation of the training standard image are performed on the coordinate information of the closed region, so as to obtain the coordinate information of the closed region corresponding to tooth 01 on the training standard image, and the element values at the corresponding coordinates on the initial classification mask are set to the label 01;
fifth, the third and fourth steps are repeated for teeth 02 through N in sequence, finally generating the classification mask of the training standard image.
The element values of the classification mask C_ij represent the class information determined by manual calibration for the (i, j)-th pixel of the training standard image, to be compared with the training results to obtain a standard loss function.
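Assuming the contour coordinates have already been transformed into standard-image space and background is stored as label 0, the five steps above might be sketched as follows, with Pillow's polygon fill standing in for the region marking of the fourth step:

```python
import numpy as np
from PIL import Image, ImageDraw

def make_classification_mask(label_data: dict, m: int = 800) -> np.ndarray:
    """Rasterize traced tooth contours into an m x m classification mask C_ij.
    Element value 0 is the background; pixels inside tooth k's closed
    contour receive the label k."""
    mask_img = Image.new("I", (m, m), 0)            # step 1: all-zero initial mask
    draw = ImageDraw.Draw(mask_img)
    for tooth_no, entry in label_data.items():      # steps 3-5: one closed region per tooth
        points = [tuple(p) for p in entry["contour"]]
        draw.polygon(points, fill=int(tooth_no))    # mark the region with its tooth label
    return np.asarray(mask_img, dtype=np.int64)
```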
The training unit further includes:
a loss function weight mask determining module configured to determine, according to the classification mask, a loss function weight mask w_ij, i = 1…M, j = 1…M, corresponding to each training standard image, wherein w_ij is the loss function weight of the (i, j)-th pixel of each training standard image, the w_ij being determined according to the inverse of the ratio of the area included in the image category to which the (i, j)-th pixel belongs to the area included in the background.
Specifically, each loss function weight mask corresponds to one training standard image and has the same size as the training standard image, in order to store the loss function weight to which each pixel of the training standard image belongs; that is, w_ij, i = 1…M, j = 1…M, is an M×M matrix, and each matrix element w_ij represents the loss function weight of the (i, j)-th pixel of the training standard image.
In the embodiment of the present application, the loss function weight mask determining module calculates, from the classification mask of each training standard image, the area included in each class label, computes the reciprocal of the ratio of that area to the area included in the background, and sets the weight w_ij of each pixel (i, j) belonging to that class label to this reciprocal.
The loss function weight mask is used to weight the standard loss function of the training result. Because the sizes of the areas included in the various categories of a tooth curved-breaking X-ray film differ greatly, some lesion regions have small areas, contribute little to the loss function, and their segmentation cannot be improved through training. Resetting the loss function weight of each pixel to the reciprocal of the area-to-background-area ratio of the pixel's class effectively increases the weight of small-area classes in the loss function calculation, and can greatly improve the segmentation capability for certain small-area lesion tissues.
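The weight computation itself reduces to a few lines; the sketch below assumes the background is stored as label 0 in the classification mask:

```python
import numpy as np

def make_loss_weight_mask(class_mask: np.ndarray) -> np.ndarray:
    """Build w_ij from C_ij: each foreground pixel's weight is the reciprocal
    of (area of its class) / (area of the background), so small lesion
    regions receive large weights. Background pixels keep weight 1."""
    w = np.ones(class_mask.shape, dtype=np.float32)
    bg_area = np.count_nonzero(class_mask == 0)
    for label in np.unique(class_mask):
        if label == 0:
            continue
        area = np.count_nonzero(class_mask == label)
        w[class_mask == label] = bg_area / area     # inverse of the area ratio
    return w
```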
The training unit further includes:
a model training module configured to: and sequentially inputting the standard image sets for training of the batches into the image classification model for iterative training, and stopping training after the average weighted loss function is smaller than a preset value or the standard image sets for training of the batches are completely trained, so as to obtain the trained image classification model.
Further, the image classification model is a multi-layer deep learning model based on a convolutional network; the output result of the image classification model is N classification probability matrices, which together form an M×M×N matrix representing the probability that each pixel of the training standard image belongs to each of the N classes.
Further, the average weighted loss function is determined by the following formula:
E_wright = (1/K) · Σ_(k=1…K) E_k,  where  E_k = Σ_(i=1…M) Σ_(j=1…M) w_ij · E_ij
wherein E_wright is the average weighted loss function, E_ij is the standard loss function of the (i, j)-th pixel of each training standard image, computed from the output result of the image classification model and the classification mask C_ij, and E_k is the weighted loss function of the output result of each training standard image.
Further, the step of sequentially inputting the standard image sets for training of a plurality of batches into the image classification model for iterative training comprises the steps of calculating a back propagation gradient by using a back propagation algorithm, updating weight parameters of each layer of the image classification model, and inputting the standard image sets of a new batch for forward propagation.
Specifically, in this embodiment, the model training module is configured to train an image classification model, where the image classification model may be a multi-layer deep learning model based on a convolutional network, and the specific training steps are as follows:
in the first step, the model training module inputs K training standard images of one training standard image set into the image classification model at a time, the output result of the image classification model is corresponding N classification probability matrices, each classification probability matrix may be an mxmxn matrix, that is, a 1×n vector is generated for each pixel of each training standard image, and the vectors correspond to the probabilities that the pixel belongs to N classifications respectively.
In a specific implementation of this embodiment, the image classification model may be a 10-layer convolutional residual network: the first 5 layers are feature extraction layers, with 1024 feature maps in layer 1 and the number of feature maps halved layer by layer in layers 2 to 5 (i.e. 1024 -> 512 -> 256 -> 128 -> 64); the last 5 layers are feature matching layers, with 64 feature maps in layer 6 and the number of feature maps doubled layer by layer in layers 7 to 10 (i.e. 64 -> 128 -> 256 -> 512 -> 1024). The image classification model may further include a SoftMax layer for normalizing the output result. The 24 training standard images of 800×800 in each batch are input into the image classification model, and the output is 24 matrices of 800×800×52 representing the segmentation results of the 24 training standard images; each segmentation result comprises 800×800 vectors of 1×52, and each vector element represents the probability that the pixel belongs to one of the 52 preset categories.
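A PyTorch sketch of such a network is given below. It follows only the feature-map counts stated above; the kernel sizes, the omission of residual connections and of any resolution changes are simplifying assumptions, since they are not specified here.

```python
import torch
import torch.nn as nn

class DentalSegNet(nn.Module):
    """10-layer convolutional network with the stated feature-map widths:
    5 feature extraction layers (1024 -> 512 -> 256 -> 128 -> 64) followed by
    5 feature matching layers (64 -> 128 -> 256 -> 512 -> 1024), then a 1x1
    projection to N per-pixel class scores. Residual connections and any
    resolution changes are omitted for brevity."""

    def __init__(self, n_classes: int = 52):
        super().__init__()
        widths = [1024, 512, 256, 128, 64]
        layers, in_ch = [], 1                         # grayscale X-ray input
        for w in widths + widths[::-1]:               # extraction, then matching
            layers += [nn.Conv2d(in_ch, w, 3, padding=1), nn.ReLU(inplace=True)]
            in_ch = w
        layers.append(nn.Conv2d(in_ch, n_classes, 1))
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)                           # (K, N, M, M) class scores

    def predict(self, x: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.forward(x), dim=1)  # SoftMax-normalized probabilities
```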
In the second step, the average weighted loss function is calculated from the output result of the image classification model, the classification mask C_ij and the loss function weight mask w_ij according to the following formula:
E_wright = (1/K) · Σ_(k=1…K) E_k,  where  E_k = Σ_(i=1…M) Σ_(j=1…M) w_ij · E_ij
wherein E_wright is the average weighted loss function, E_ij is the standard loss function of the (i, j)-th pixel of each training standard image, computed from the output result of the image classification model and the classification mask C_ij, and E_k is the weighted loss function of the output result of each training standard image.
In one implementation of the present example, the standard loss function E_ij can be obtained from the 24 output matrices of 800×800×52 and the manually calibrated classification mask C_ij, where the standard loss function E_ij may be a cross-entropy loss function, a mean squared error loss function, or another generic loss function for evaluating the output of a deep learning model; the standard loss functions E_ij of the output result are then weighted-averaged using the loss function weight mask, so as to obtain the weighted loss function E_k of each training standard image of the batch.
in this embodiment, the standard loss function is weighted and averaged by using a loss function weight mask, and the loss function weight is related to the inverse of the area included in each category, so that the segmentation capability of the image classification model for some local or fine lesion areas of the teeth is effectively increased.
And thirdly, calculating a back propagation gradient by using a back propagation algorithm based on the average weighted loss function and updating weight parameters of each layer of the image classification model.
And fourthly, repeating the training steps from the first step to the third step until the average weighted loss function is smaller than a preset threshold value or training is completed for all batches of training standard image sets, and storing the image classification model obtained at the moment as a trained image classification model.
In a specific implementation of this embodiment, the preset threshold is 0.3, and the initial learning rate is 0.1 and is reduced to 1/10 of its value every 10 batches; training ends when the weighted average loss function E_wright is less than 0.3 or all batches have been trained, and the image classification model at that point is saved as the trained image classification model.
In an optimized implementation of the present embodiment, to increase the training speed in the existing training environment, the following manner is used to calculate the back propagation gradient and update the weighting parameters of each layer of the image classification model:
when the training batch is not equal to an integer multiple of I, the operations of clearing the back-propagation gradient and updating the weight parameters of each layer of the image classification model are not performed, wherein I is the preset number of batches by which clearing of the back-propagation gradient is delayed;
when the training batch is equal to an integer multiple of I, the back-propagation gradient is cleared and the average weighted loss function E_wright is divided by I.
Specifically, in this embodiment, the value of I is 4 and each batch contains 24 training standard images; that is, after 4 batches totaling 96 training standard images have been trained, the network gradient is cleared, the weights of each layer are updated, and the weighted average loss function E_wright is divided by 4.
In this embodiment, delayed clearing and superposition of gradient information may be adopted during back propagation to achieve the effect of training multiple batches simultaneously. By emptying the network gradient only after every I batches and dividing the weighted average loss function by I before gradient descent and weight update, the number of training standard images in one effective batch is increased without increasing GPU video memory, achieving faster and more stable system convergence.
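A sketch of this delayed-clearing scheme as a training loop; dividing the loss by I before backward() is one common way to realize the division described above:

```python
import torch

I_ACCUM = 4  # I, the number of batches between gradient clearings, as in this embodiment

def train_epoch(model, batches, optimizer, loss_fn):
    """Gradient accumulation: backward() superimposes gradient information over
    I batches; only every I-th batch are the weights updated and the gradient
    cleared, emulating one batch I times larger without extra GPU memory."""
    model.train()
    for step, (images, class_mask, weight_mask) in enumerate(batches, start=1):
        scores = model(images)                               # forward propagation
        loss = loss_fn(scores, class_mask, weight_mask) / I_ACCUM
        loss.backward()                                      # gradients accumulate
        if step % I_ACCUM == 0:                              # training batch == multiple of I
            optimizer.step()                                 # update each layer's weights
            optimizer.zero_grad()                            # clear the back-propagation gradient
```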
The following describes the segmentation labeling unit of the present embodiment in detail, where the segmentation labeling unit includes:
an image processing module configured to obtain a classification probability matrix of the case image by:
In the first step, the case image is processed and converted into an M×M standard case image;
in the second step, the standard case image is input into the trained image classification model to obtain the classification probability matrix of the standard case image.
In a specific implementation of this embodiment, the case image is a tooth curved-breaking X-ray film of a case; the image processing module applies twisting, translation, rotation, scaling and similar processing to the case image to convert it into an 800×800 standard case image suitable for the trained image classification model. The processed standard case image is input into the trained image classification model for classification, obtaining the classification probability matrix of the case image, whose data structure is an 800×800×52 matrix comprising 800×800 vectors of 1×52, the 52 element values of each vector respectively representing the probability that the pixel belongs to each of the 52 categories.
The segmentation labeling unit further includes:
a category determination module configured to determine a category determination threshold according to a classification probability matrix of the standard case image, and determine a classification label of each pixel of the standard case image according to the category determination threshold;
preferably, the class decision threshold is determined by a least squares method.
In a specific implementation of this embodiment, based on the 800×800×52 classification probability matrix, a class determination threshold is obtained by the least squares method and used to determine the class of each of the 800×800 pixels: the class whose element value in the corresponding 1×52 vector exceeds the determination threshold is taken as the class of the pixel, and if no element value in the 1×52 vector exceeds the determination threshold, the pixel is determined to be background.
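A sketch of this per-pixel decision, assuming the fitted threshold is given and treating class 0 as the background category (an assumption of this sketch):

```python
import numpy as np

def decide_classes(probs: np.ndarray, threshold: float) -> np.ndarray:
    """Per-pixel class decision. probs is the (M, M, N) classification
    probability matrix of a standard case image. Each pixel takes the class
    whose probability is highest, provided that probability exceeds the
    determination threshold; otherwise the pixel is judged to be background."""
    best = probs.argmax(axis=-1)                   # most probable of the N classes
    best_p = probs.max(axis=-1)
    return np.where(best_p > threshold, best, 0)   # below-threshold pixels -> background
```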
The segmentation labeling unit further includes:
a contour segmentation module configured to contour segment the standard case image by:
the first step, removing the pixel area corresponding to the background category of the standard case image,
a second step of drawing a contour of each tooth of the standard case image based on the classification label of each pixel of the standard case image,
and thirdly, coloring and marking the interior of each tooth of the standard case image based on the classification mark of each pixel of the standard case image, and generating the auxiliary diagnosis image.
In this embodiment, the contour segmentation module firstly performs a background removal operation by using the class label of each pixel of the standard case image generated by the class determination module, then draws the contour of each tooth based on the class label of each pixel, and finally performs coloring and labeling inside each tooth to generate an auxiliary diagnostic image.
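The background-removal, contour-drawing and coloring steps might be sketched with OpenCV as follows; the colors, overlay strength and label placement are illustrative choices:

```python
import cv2
import numpy as np

def make_auxiliary_image(standard_case: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Sketch of the contour segmentation module: background pixels (label 0)
    are removed from consideration, the outline of every labelled tooth region
    is traced, its interior is tinted and the class label is written inside.
    standard_case is an (M, M) uint8 grayscale image."""
    out = cv2.cvtColor(standard_case, cv2.COLOR_GRAY2BGR)
    rng = np.random.default_rng(0)                    # reproducible per-class colors
    for label in np.unique(labels):
        if label == 0:                                # remove the background category
            continue
        region = (labels == label).astype(np.uint8)
        contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        color = tuple(int(c) for c in rng.integers(64, 256, size=3))
        overlay = out.copy()
        overlay[labels == label] = color              # color the tooth interior
        out = cv2.addWeighted(overlay, 0.35, out, 0.65, 0)
        cv2.drawContours(out, contours, -1, color, 2) # draw the tooth contour
        ys, xs = np.nonzero(region)
        cv2.putText(out, str(label), (int(xs.mean()), int(ys.mean())),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
    return out
```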
Fig. 4 shows the tooth contour segmentation result for a specific case image in this embodiment, and Fig. 5 shows the result of tooth contour segmentation by a prior-art semantic segmentation method (the part inside the rectangular frame in the figure). Comparison shows that the technical solution of this embodiment, which removes non-tooth background interference factors and marks the target tooth image precisely by dotted contour labeling, effectively improves the accuracy of the tooth image segmentation result compared with the prior-art method of classification by bounding rectangles.
Fig. 6 is an auxiliary diagnostic image generated by coloring and labeling a specific case image according to the present embodiment.
In this embodiment, the display unit may be a display of a desktop computer or a screen of an intelligent device such as a notebook computer or a tablet computer, and is configured to display the plurality of training original images, the training data set, the case image and the auxiliary diagnostic image.
Another aspect of the present invention provides a method of training and of segmenting and labeling tooth information on a case image, the case image being a dental curved-breaking X-ray image of a case. Fig. 7 is a schematic diagram of the specific steps of this embodiment; as shown in fig. 7, the method is implemented by the following steps:
S1, determining a training set, namely determining a training data set based on a plurality of training original images, wherein the training original images are dental curve-breaking X-ray film images of cases used for training.
S2, training is carried out on the image classification model through the training data set, and a trained image classification model is obtained.
S3, segmenting and labeling, namely segmenting and labeling the case images by using the trained image classification model, and generating an auxiliary diagnosis image, wherein the auxiliary diagnosis image is a tooth curved-breaking X-ray film image of the case containing labeling and coloring information.
And S4, displaying the plurality of training original images, the training data set, the case images and the auxiliary diagnostic images.
Fig. 8 is a schematic diagram of the specific steps of determining the training set in S1 of this embodiment. As shown in fig. 8, determining the training set in S1 specifically includes the following steps:
S101, image segmentation, namely manually tracing the plurality of training original images and determining tooth contour information of the plurality of training original images.
S102, calibrating categories, namely generating classification label data files of the plurality of training original images according to tooth profile information of the plurality of training original images, wherein the classification label data files comprise profile information and lesion information corresponding to N image categories, N is the number of the image categories which are determined in advance, and the number of the image categories is determined according to the positions of teeth and tissue structures, the lesion condition and the position information of the background and the foreground.
S103, outputting a training set, namely outputting a plurality of training original images and classification label data files of the plurality of training original images.
Fig. 9 is a schematic diagram of the specific steps of the S2 training in this embodiment. As shown in fig. 9, the S2 training specifically includes the following steps:
S201, preprocessing images, namely determining a plurality of batches of standard image sets for training according to the training data set, wherein each batch of standard image sets for training comprises K standard images for training with M multiplied by M pixels, K is the number of images of each batch of standard image sets for training, and M multiplied by M is the number of pixels of each standard image for training.
S202, determining an image classification mask, namely determining, according to the training data set, a classification mask C_ij, i = 1…M, j = 1…M, corresponding to each training standard image, wherein C_ij is the class label of the (i, j)-th pixel of each training standard image.
S203, determining a loss function weight mask, namely determining, according to the classification mask, a loss function weight mask w_ij, i = 1…M, j = 1…M, corresponding to each training standard image, wherein w_ij is the loss function weight of the (i, j)-th pixel of each training standard image, the w_ij being determined according to the inverse of the ratio of the area included in the image category to which the (i, j)-th pixel belongs to the area included in the background.
S204, model training, namely sequentially inputting the training standard image sets of the plurality of batches into the image classification model for iterative training, and stopping training after the average weighted loss function is smaller than a preset value or all batches of training standard image sets have been trained, so as to obtain the trained image classification model.
Specifically, the image classification model is a multi-layer deep learning model based on a convolutional network; the output result of the image classification model is N classification probability matrices, which together form an M×M×N matrix representing the probability that each pixel of the training standard image belongs to each of the N classes.
The average weighted loss function is determined by the following formula:
E_wright = (1/K) · Σ_(k=1…K) E_k,  where  E_k = Σ_(i=1…M) Σ_(j=1…M) w_ij · E_ij
wherein E_wright is the average weighted loss function, E_ij is the standard loss function of the (i, j)-th pixel of each training standard image, computed from the output result of the image classification model and the classification mask C_ij, and E_k is the weighted loss function of the output result of each training standard image.
Specifically, the standard image sets for training of the batches are sequentially input into the image classification model for iterative training, and the method comprises the steps of calculating a back propagation gradient by using a back propagation algorithm, updating weight parameters of each layer of the image classification model, and inputting the standard image sets of a new batch for forward propagation.
Calculating the back-propagation gradient by using the back-propagation algorithm and updating the weight parameters of each layer of the image classification model comprises the following steps:
when the training batch is not equal to an integer multiple of I, the operations of clearing the back-propagation gradient and updating the weight parameters of each layer of the image classification model are not performed, wherein I is the number of batches by which clearing of the back-propagation gradient is delayed;
when the training batch is equal to an integer multiple of I, the back-propagation gradient is cleared and the average weighted loss function E_wright is divided by I.
Fig. 10 is a schematic diagram of specific steps of the S3 segmentation labeling in this embodiment, as shown in fig. 10, in the S3 segmentation labeling, the specific steps include:
s301, processing the case image, converting the case image into an MxM standard case image, and inputting the standard case image into the trained image classification model to obtain a classification probability matrix of the standard case image;
S302, judging the category: a category decision threshold is determined according to the classification probability matrix of the standard case image, and the classification label of each pixel of the standard case image is judged according to the category decision threshold;
S303, contour segmentation: the standard case image is contour-segmented by the following steps:

removing the pixel region corresponding to the background category of the standard case image;

drawing the contour of each tooth of the standard case image based on the classification label of each pixel of the standard case image; and

coloring and marking the interior of each tooth of the standard case image based on the classification label of each pixel of the standard case image, thereby generating the auxiliary diagnosis image.
Specifically, the category decision threshold is determined by the least squares method.
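The S301–S303 pipeline can be sketched end to end as follows. The bilinear resizing, the use of softmax probabilities, an 8-bit grayscale input, a caller-supplied decision threshold (the embodiment derives its own by the least squares method), and the (N, 3) BGR color palette are all assumptions made only for illustration:

```python
import cv2
import numpy as np
import torch

def segment_and_label(model, case_image: np.ndarray, m: int,
                      threshold: float, palette: np.ndarray,
                      background_label: int = 0) -> np.ndarray:
    """Sketch of S301-S303: classify pixels, then outline and color teeth."""
    # S301: convert the case image into an M x M standard case image
    standard = cv2.resize(case_image, (m, m))
    batch = torch.from_numpy(standard).float().unsqueeze(0).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0].numpy()  # (N, M, M)

    # S302: judge each pixel's classification label via the decision threshold
    labels = probs.argmax(axis=0)
    labels[probs.max(axis=0) < threshold] = background_label

    # S303: remove background, draw tooth contours, color tooth interiors
    annotated = cv2.cvtColor(standard, cv2.COLOR_GRAY2BGR)
    for label in np.unique(labels):
        if label == background_label:
            continue  # background pixel region is neither drawn nor colored
        region = (labels == label).astype(np.uint8)
        contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        colored = annotated.copy()
        colored[region.astype(bool)] = palette[label]  # interior coloring
        annotated = cv2.addWeighted(annotated, 0.5, colored, 0.5, 0)
        cv2.drawContours(annotated, contours, -1, (0, 255, 0), 1)  # tooth outline
    return annotated
```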
Yet another aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described training and the method of segmenting and labeling case images using the above-described image labeling device.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (15)

1. An image labeling device for segmenting and labeling dental information on a case image, the case image being a dental curved-broken X-ray image of a case, comprising:
a training set determining unit configured to determine a training data set based on a plurality of training original images, the training original images being dental curved-broken X-ray images of cases used for training;
a training unit configured to train an image classification model with the training data set to obtain a trained image classification model;
a segmentation and labeling unit configured to segment and label the case image using the trained image classification model to generate an auxiliary diagnosis image, the auxiliary diagnosis image being a dental curved-broken X-ray image of the case containing labeling and coloring information; and
a display unit configured to display the plurality of training original images, the training data set, the case image, and the auxiliary diagnosis image;
wherein the training unit comprises:
an image preprocessing module configured to determine a plurality of batches of training standard image sets from the training data set, each batch of training standard image sets including K training standard images of M × M pixels, where K is the number of images in each batch of training standard image sets and M × M is the number of pixels of each training standard image;
a classification mask determining module configured to determine, based on the training data set, a classification mask C_ij, i = 1…M, j = 1…M, corresponding to each training standard image, where C_ij is the classification label of the (i, j)-th pixel of each training standard image;
a loss function weight mask determining module configured to determine, according to the classification mask, a loss function weight mask w_ij, i = 1…M, j = 1…M, corresponding to each training standard image, where w_ij is the loss function weight of the (i, j)-th pixel of each training standard image, and w_ij is determined according to the inverse of the ratio of the area covered by the image category to which the (i, j)-th pixel belongs to the area covered by the background; and
a model training module configured to input the plurality of batches of training standard image sets into the image classification model in turn for iterative training, and to stop training once the average weighted loss function is smaller than a preset value or all of the batches of training standard image sets have been trained, to obtain the trained image classification model.
2. The image labeling device according to claim 1, wherein the training set determining unit comprises:
the image segmentation module is configured to manually trace the plurality of training original images and determine tooth contour information of the plurality of training original images;
a classification calibration module configured to generate classification label data files of the plurality of training original images according to the tooth contour information of the plurality of training original images, the classification label data files including contour information and lesion information corresponding to N image categories, N being the predetermined number of image categories, the number of image categories being determined according to the positions of teeth and tissue structures, the lesion conditions, and the position information of the background and foreground; and
a training set output module configured to output the training data set, the training data set including the plurality of training original images and the classification label data files of the plurality of training original images.
3. The image labeling device according to claim 1, wherein:
the image classification model is a multi-layer deep learning model based on a convolutional network; and

the output of the image classification model is N classification probability matrices, each an M × M matrix, representing the probability that each pixel of the training standard image belongs to each of the N classes.
4. The image labeling device according to claim 1, wherein:
the average weighted loss function is determined by the following formula:

E_weight = (1/K) · Σ_{k=1}^{K} Σ_{i=1}^{M} Σ_{j=1}^{M} w_ij · E_ij

where E_weight is the average weighted loss function, E_ij is the standard loss function of the (i, j)-th pixel of each training standard image, computed from the output of the image classification model and the classification mask C_ij, and Σ_{i,j} w_ij · E_ij is the weighted loss function for the output of each training standard image.
5. The image labeling device according to claim 4, wherein:
inputting the plurality of batches of training standard image sets into the image classification model in turn for iterative training comprises calculating a back-propagation gradient using the back-propagation algorithm, updating the weight parameters of each layer of the image classification model, and inputting a new batch of training standard images for forward propagation;

calculating the back-propagation gradient using the back-propagation algorithm and updating the weight parameters of each layer of the image classification model are carried out by the following steps:

when the training batch is not an integer multiple of I, the back-propagation gradient is not cleared and the weight parameters of each layer of the image classification model are not updated, where I is the number of batches by which clearing of the back-propagation gradient is delayed; and

when the training batch is an integer multiple of I, the back-propagation gradient is cleared and the average weighted loss function E_weight is divided by I.
6. The image labeling device according to claim 1, wherein the segmentation and labeling unit comprises:
an image processing module configured to obtain a classification probability matrix of the case image by:

processing the case image and converting it into an M × M standard case image, and

inputting the standard case image into the trained image classification model to obtain the classification probability matrix of the standard case image;
a category decision module configured to determine a category decision threshold according to the classification probability matrix of the standard case image, and to judge the classification label of each pixel of the standard case image according to the category decision threshold; and
a contour segmentation module configured to contour-segment the standard case image by:

removing the pixel region corresponding to the background category of the standard case image,

drawing the contour of each tooth of the standard case image based on the classification label of each pixel of the standard case image, and

coloring and marking the interior of each tooth of the standard case image based on the classification label of each pixel of the standard case image, thereby generating the auxiliary diagnosis image.
7. The image labeling device according to claim 6, wherein:
the category decision threshold is determined by the least squares method.
8. An image labeling method for segmenting and labeling dental information on a case image, the case image being a dental curved-broken X-ray image of a case, characterized by comprising the following steps:
S1, determining a training set: a training data set is determined based on a plurality of training original images, the training original images being dental curved-broken X-ray images of cases used for training;
S2, training: the image classification model is trained with the training data set to obtain a trained image classification model;
S3, segmenting and labeling: the case image is segmented and labeled using the trained image classification model to generate an auxiliary diagnosis image, the auxiliary diagnosis image being a dental curved-broken X-ray image of the case containing labeling and coloring information; and
S4, displaying: the plurality of training original images, the training data set, the case image, and the auxiliary diagnosis image are displayed;
wherein the S2 training specifically comprises the following steps:
S201, preprocessing images: a plurality of batches of training standard image sets are determined according to the training data set, each batch of training standard image sets including K training standard images of M × M pixels, where K is the number of images in each batch of training standard image sets and M × M is the number of pixels of each training standard image;
S202, determining an image classification mask: a classification mask C_ij, i = 1…M, j = 1…M, corresponding to each training standard image is determined according to the training data set, where C_ij is the classification label of the (i, j)-th pixel of each training standard image;
S203, determining a loss function weight mask: a loss function weight mask w_ij, i = 1…M, j = 1…M, corresponding to each training standard image is determined according to the classification mask, where w_ij is the loss function weight of the (i, j)-th pixel of each training standard image, and w_ij is determined according to the inverse of the ratio of the area covered by the image category to which the (i, j)-th pixel belongs to the area covered by the background; and
S204, training the model: the plurality of batches of training standard image sets are input into the image classification model in turn for iterative training, and training stops once the average weighted loss function is smaller than a preset value or all of the batches of training standard image sets have been trained, yielding the trained image classification model.
9. The image labeling method according to claim 8, wherein the S1 determining a training set specifically comprises the following steps:
S101, image segmentation: the plurality of training original images are manually traced, and tooth contour information of the plurality of training original images is determined;
S102, calibrating categories: classification label data files of the plurality of training original images are generated according to the tooth contour information of the plurality of training original images, the classification label data files including contour information and lesion information corresponding to N image categories, N being the predetermined number of image categories, the number of image categories being determined according to the positions of teeth and tissue structures, the lesion conditions, and the position information of the background and foreground; and
S103, outputting a training set: the training data set is output, including the plurality of training original images and the classification label data files of the plurality of training original images.
10. The image labeling method according to claim 9, wherein:
the image classification model is a multi-layer deep learning model based on a convolutional network; and

the output of the image classification model is N classification probability matrices, each an M × M matrix, representing the probability that each pixel of the training standard image belongs to each of the N classes.
11. The image labeling method according to claim 9, wherein:
the average weighted loss function is determined by the following formula:

E_weight = (1/K) · Σ_{k=1}^{K} Σ_{i=1}^{M} Σ_{j=1}^{M} w_ij · E_ij

where E_weight is the average weighted loss function, E_ij is the standard loss function of the (i, j)-th pixel of each training standard image, computed from the output of the image classification model and the classification mask C_ij, and Σ_{i,j} w_ij · E_ij is the weighted loss function for the output of each training standard image.
12. The image labeling method according to claim 11, wherein:
inputting the plurality of batches of training standard image sets into the image classification model in turn for iterative training comprises calculating a back-propagation gradient using the back-propagation algorithm, updating the weight parameters of each layer of the image classification model, and inputting a new batch of training standard images for forward propagation;

calculating the back-propagation gradient using the back-propagation algorithm and updating the weight parameters of each layer of the image classification model are carried out by the following steps:

when the training batch is not an integer multiple of I, the back-propagation gradient is not cleared and the weight parameters of each layer of the image classification model are not updated, where I is the number of batches by which clearing of the back-propagation gradient is delayed; and

when the training batch is an integer multiple of I, the back-propagation gradient is cleared and the average weighted loss function E_weight is divided by I.
13. The image labeling method according to claim 8, wherein the S3 segmenting and labeling specifically comprises the following steps:
S301, processing the case image: the case image is converted into an M × M standard case image, and the standard case image is input into the trained image classification model to obtain a classification probability matrix of the standard case image;
S302, judging the category: a category decision threshold is determined according to the classification probability matrix of the standard case image, and the classification label of each pixel of the standard case image is judged according to the category decision threshold; and
S303, contour segmentation: the standard case image is contour-segmented by the following steps:

removing the pixel region corresponding to the background category of the standard case image,

drawing the contour of each tooth of the standard case image based on the classification label of each pixel of the standard case image, and

coloring and marking the interior of each tooth of the standard case image based on the classification label of each pixel of the standard case image, thereby generating the auxiliary diagnosis image.
14. The image labeling method according to claim 13, wherein:
the category decision threshold is determined by the least squares method.
15. A computer-readable storage medium having stored thereon a computer program, characterized by:
the computer program, when executed by a processor, implements the image labeling method according to any one of claims 8 to 14.
CN202111034189.2A 2021-09-03 2021-09-03 Image labeling device, method and readable storage medium Active CN113822904B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111034189.2A CN113822904B (en) 2021-09-03 2021-09-03 Image labeling device, method and readable storage medium

Publications (2)

Publication Number Publication Date
CN113822904A CN113822904A (en) 2021-12-21
CN113822904B true CN113822904B (en) 2023-08-08

Family

ID=78914088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111034189.2A Active CN113822904B (en) 2021-09-03 2021-09-03 Image labeling device, method and readable storage medium

Country Status (1)

Country Link
CN (1) CN113822904B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11464467B2 (en) * 2018-10-30 2022-10-11 Dgnct Llc Automated tooth localization, enumeration, and diagnostic system and method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019021243A (en) * 2017-07-21 2019-02-07 Kddi株式会社 Object extractor and super pixel labeling method
CN111328397A (en) * 2017-10-02 2020-06-23 普罗马顿控股有限责任公司 Automatic classification and categorization of 3D dental data using deep learning methods
CN110473243A (en) * 2019-08-09 2019-11-19 重庆邮电大学 Tooth dividing method, device and computer equipment based on depth profile perception
CN111931931A (en) * 2020-09-29 2020-11-13 杭州迪英加科技有限公司 Deep neural network training method and device for pathology full-field image
CN112446381A (en) * 2020-11-11 2021-03-05 昆明理工大学 Mixed semantic segmentation method driven by full convolution network and based on geodesic active contour
CN113223010A (en) * 2021-04-22 2021-08-06 北京大学口腔医学院 Method and system for fully automatically segmenting multiple tissues of oral cavity image
CN113139977A (en) * 2021-04-23 2021-07-20 西安交通大学 Mouth cavity curve image wisdom tooth segmentation method based on YOLO and U-Net

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tooth Detection and Numbering with Instance Segmentation in Panoramic Radiographs; Elif Meşeci et al.; ICIDAAI21; pp. 97-100 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant