WO2022100005A1 - Tooth image processing method and apparatus, electronic device, storage medium and program - Google Patents

Tooth image processing method and apparatus, electronic device, storage medium and program

Info

Publication number
WO2022100005A1
WO2022100005A1 · PCT/CN2021/089058 · CN2021089058W
Authority
WO
WIPO (PCT)
Prior art keywords
tooth
pixel
instance
image
center
Prior art date
Application number
PCT/CN2021/089058
Other languages
English (en)
French (fr)
Inventor
刘畅 (Liu Chang)
赵亮 (Zhao Liang)
Original Assignee
上海商汤智能科技有限公司 (Shanghai SenseTime Intelligent Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司 (Shanghai SenseTime Intelligent Technology Co., Ltd.)
Priority to JP2021576347A priority Critical patent/JP2023504957A/ja
Priority to KR1020227001270A priority patent/KR20220012991A/ko
Publication of WO2022100005A1 publication Critical patent/WO2022100005A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/77Determining position or orientation of objects or cameras using statistical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth

Definitions

  • The present disclosure relates to the technical field of computer vision, and in particular to a tooth image processing method and apparatus, an electronic device, a storage medium, and a program.
  • CBCT: Cone Beam Computed Tomography
  • CT: Computed Tomography
  • CBCT has the advantages of low radiation dose, short scanning time, and high image spatial resolution, and has been more and more widely used in the field of stomatology.
  • Automatic tooth positioning on CBCT images is of great significance to the field of stomatology.
  • To this end, the embodiments of the present disclosure provide at least a tooth image processing method, apparatus, electronic device, storage medium, and program.
  • An embodiment of the present disclosure provides a tooth image processing method, executed by an electronic device, the method including: performing tooth instance segmentation on an image to be processed to obtain a tooth instance segmentation result of the image, where one tooth instance corresponds to one tooth and the result includes information on the tooth instance to which each pixel of the image belongs; and
  • performing tooth position location based on the tooth instance segmentation result to obtain a tooth position location result of the image to be processed.
  • In this way, the tooth instance segmentation result is obtained by segmenting the tooth instances of the image to be processed, and tooth position location is performed on that result to obtain the tooth position location result of the image.
  • In some embodiments, performing tooth instance segmentation on the image to be processed includes: sequentially predicting, from the pixels of the image, the pixel sets belonging to different tooth instances, to obtain prediction results for the pixel sets included in the multiple tooth instances; and obtaining the tooth instance segmentation result of the image from these prediction results.
  • Because the segmentation result is assembled from per-instance pixel-set predictions, accurate tooth instance segmentation can be obtained even in complex situations such as noise interference, blurred tooth boundaries, and tooth-root gray values close to those of the jaw in CBCT images.
  • In some embodiments, sequentially predicting the pixel sets belonging to different tooth instances from the pixels of the image to be processed includes: predicting the center pixel of a target tooth instance from the pixels to be processed of the image, where a pixel to be processed is a pixel that has not yet been predicted to belong to any tooth instance, and the target tooth instance is the tooth instance currently being predicted; and, according to the coordinates of the center pixel of the target tooth instance, predicting the pixel set belonging to the target tooth instance from the pixels to be processed, to obtain the prediction result for the pixel set included in the target tooth instance.
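The sequential per-instance prediction described above can be sketched as a greedy loop. Here `center_prob` and `assign_prob` are hypothetical stand-ins for the network outputs, and the single uniform threshold is an assumption for illustration, not the patent's exact rule:

```python
def segment_instances(pixels, center_prob, assign_prob, prob_thresh=0.5):
    """Greedily extract tooth instances: repeatedly take the unassigned pixel
    with the highest probability of being an instance center, then collect the
    unassigned pixels predicted to belong to that instance."""
    unassigned = set(pixels)
    instances = []
    while unassigned:
        # candidate center pixel of the current (target) tooth instance
        center = max(unassigned, key=center_prob)
        if center_prob(center) < prob_thresh:  # "first preset value": stop when
            break                              # no credible center remains
        members = {p for p in unassigned if assign_prob(p, center) >= prob_thresh}
        members.add(center)
        instances.append(members)
        unassigned -= members                  # these pixels are now assigned
    return instances
```

Because predicted pixels are removed from the pool each round, every pixel is assigned to at most one tooth instance, matching the definition of "pixels to be processed" above.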
  • In some embodiments, predicting the center pixel of the target tooth instance from the pixels to be processed includes: determining, from the pixels to be processed of the image, a first pixel, namely the pixel with the highest probability of being located at a tooth instance center; and, when the probability that the first pixel is located at a tooth instance center is greater than or equal to a first preset value, predicting the first pixel as the center pixel of the target tooth instance.
  • In this way, the center pixel of the tooth instance can be determined more accurately, which helps to segment the tooth instance accurately.
  • In some embodiments, the first pixel is predicted as the center pixel of the target tooth instance when, among the pixels to be processed, the number of pixels whose probability of being located at a tooth instance center is greater than or equal to the first preset value is itself greater than or equal to a second preset value, and the probability that the first pixel is located at a tooth instance center is greater than or equal to the first preset value.
  • While both conditions hold, prediction continues based on the first pixel; once the number of pixels whose center probability reaches the first preset value falls below the second preset value, prediction can be stopped, which improves prediction efficiency and accuracy.
  • In some embodiments, predicting the pixel set belonging to the target tooth instance according to the coordinates of its center pixel includes: determining the predicted coordinates of the tooth instance center pointed to by a second pixel among the pixels to be processed, where the second pixel represents any pixel to be processed and the predicted coordinates represent the coordinates, predicted based on the second pixel, of the center pixel of the tooth instance to which the second pixel belongs; predicting, according to these predicted coordinates and the coordinates of the center pixel of the target tooth instance, the probability that the second pixel belongs to the center of the target tooth instance; and predicting, according to that probability, the pixel set belonging to the target tooth instance from the pixels to be processed.
  • In this way, the pixels belonging to the target tooth instance can be predicted accurately.
  • In some embodiments, determining the predicted coordinates of the tooth instance center pointed to by the second pixel includes: determining the predicted offset from the second pixel to the center pixel of the tooth instance to which the second pixel belongs; and determining the predicted coordinates of the tooth instance center pointed to by the second pixel based on the coordinates of the second pixel and this predicted offset.
  • In this way, more accurate predicted coordinates of the tooth instance center pointed to by the second pixel are obtained.
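The coordinate computation in this step reduces to a per-axis addition of the pixel's coordinates and its predicted offset; a minimal sketch (the function name is illustrative):

```python
def predicted_center(pixel_coords, predicted_offset):
    """Predicted coordinates of the tooth-instance center pointed to by a pixel:
    the pixel's own coordinates plus its predicted offset to the center pixel."""
    return tuple(c + o for c, o in zip(pixel_coords, predicted_offset))
```

The same expression works unchanged in 2-D or 3-D, matching the CBCT volume setting.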
  • In some embodiments, predicting the probability that the second pixel belongs to the center of the target tooth instance includes: predicting a clustering parameter corresponding to the target tooth instance, where the clustering parameter represents the degree of dispersion of the predicted coordinates of the center pixel of the target tooth instance; and predicting the probability according to the predicted coordinates of the tooth instance center pointed to by the second pixel, the coordinates of the center pixel of the target tooth instance, and the clustering parameter.
  • In this way, the accuracy of the predicted probability that the second pixel belongs to the center of the target tooth instance is improved.
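The text does not fix the functional form that maps this distance and the clustering parameter to a probability; a Gaussian kernel, as commonly used in clustering-based instance segmentation, is one plausible sketch:

```python
import math

def center_membership_prob(pred_center, instance_center, sigma):
    """Probability that a pixel belongs to the target tooth instance, computed
    from the distance between the center it points to and the instance's center
    pixel. sigma is the clustering parameter: the more dispersed the predicted
    center coordinates, the larger sigma and the softer the cutoff."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(pred_center, instance_center))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))
```

A pixel whose predicted center lands exactly on the instance center gets probability 1; the probability decays with distance at a rate controlled by the clustering parameter.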
  • In some embodiments, the method further includes: inputting the image to be processed into a first neural network, and obtaining, via the first neural network, the predicted offset from the second pixel to the center pixel of the tooth instance to which the second pixel belongs, the clustering parameter of that tooth instance, and the probability that the second pixel is located at a tooth instance center.
  • In this way, both the accuracy and the speed of obtaining the predicted offset, the clustering parameter, and the center probability can be improved.
  • In some embodiments, the first neural network includes a first decoder and a second decoder: the image to be processed is input into the first neural network, the predicted offset from the second pixel to the center pixel of its tooth instance and the clustering parameter of that tooth instance are obtained via the first decoder, and the probability that the second pixel is located at a tooth instance center is obtained via the second decoder.
  • In this way, the accuracy of the resulting predicted offsets, clustering parameters, and center probabilities can be improved.
  • In some embodiments, before the image to be processed is input into the first neural network, the method further includes: inputting a training image into the first neural network, and obtaining, via the first neural network, the predicted offset from a third pixel in the training image to the center pixel of the first tooth instance to which the third pixel belongs, the clustering parameter corresponding to the first tooth instance, and the probability that the third pixel is located at a tooth instance center, where the third pixel represents any pixel of the training image and the first tooth instance represents the tooth instance to which it belongs; determining the predicted coordinates of the tooth instance center pointed to by the third pixel according to the coordinates of the third pixel and its predicted offset, where these predicted coordinates represent the coordinates of the center pixel of the first tooth instance as predicted based on the third pixel; determining, according to the predicted coordinates of the tooth instance centers pointed to by the different pixels belonging to the first tooth instance and the clustering parameter, the probability that the third pixel belongs to the center of the first tooth instance; and training the first neural network accordingly.
  • In this way, the first neural network learns to segment the different tooth instances in a tooth image.
  • Segmenting tooth instances with a first neural network trained in this way yields stable and accurate segmentation results in complex scenes, for example irregularly shaped teeth or low-density shadows within teeth.
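The training step supervises the predicted membership probability against ground truth; the exact loss is not given in this text, so the sketch below uses a plain per-pixel binary cross-entropy as an illustrative stand-in:

```python
import math

def bce_loss(probs, labels, eps=1e-7):
    """Mean binary cross-entropy between predicted membership probabilities and
    ground-truth labels (1 = the pixel lies inside the tooth instance)."""
    total = 0.0
    for p, t in zip(probs, labels):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1.0 - p))
    return total / len(probs)
```

Minimizing such a loss pushes the probabilities of pixels inside a tooth instance toward 1 and those outside toward 0, which is the behavior the claims describe.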
  • In some embodiments, performing tooth position location based on the tooth instance segmentation result includes: predicting the tooth class to which the pixels included in a second tooth instance in the tooth instance segmentation result belong, where the second tooth instance represents any tooth instance in the segmentation result; and determining, according to the tooth classes to which the pixels included in the second tooth instance belong, the tooth class to which the second tooth instance belongs.
  • In this way, the tooth class to which the second tooth instance belongs can be determined accurately.
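The text does not specify how the per-pixel tooth classes are aggregated into the instance-level class; a majority vote over the instance's pixels is a natural choice and is sketched here as an assumption:

```python
from collections import Counter

def instance_tooth_class(pixel_classes):
    """Aggregate per-pixel tooth-class predictions into one class per tooth
    instance by majority vote over the instance's pixels."""
    return Counter(pixel_classes).most_common(1)[0][0]
```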
  • In some embodiments, before tooth instance segmentation is performed on the image to be processed, the method further includes: downsampling the image to be segmented to a first resolution to obtain a first image, and obtaining the image to be processed according to the first image. After the tooth instance segmentation result of the image to be processed is obtained, the method further includes: obtaining a second image according to the image to be segmented, where the resolution of the second image is a second resolution higher than the first resolution; cropping, from the second image, the image corresponding to a third tooth instance according to the coordinates of the center pixel of the third tooth instance in the tooth instance segmentation result, where the third tooth instance represents any tooth instance in the segmentation result; and segmenting the image corresponding to the third tooth instance to obtain a segmentation result of the third tooth instance at the second resolution.
  • In this way, tooth instance segmentation and tooth position location can first be performed quickly at a lower resolution, and a segmentation result for each tooth instance can then be obtained at a higher resolution.
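The coarse-to-fine cropping step can be sketched as follows; the scale factor and crop size are hypothetical parameters, and a uniform isotropic scale between the two resolutions is assumed:

```python
def crop_around_center(center_lowres, scale, crop_size, highres_shape):
    """Map a tooth-instance center found at low resolution into high-resolution
    coordinates and return a crop window of `crop_size` per axis around it,
    clamped so the window stays inside the image."""
    window = []
    for c, extent in zip(center_lowres, highres_shape):
        hi_c = int(round(c * scale))  # rescale the center coordinate
        start = max(0, min(hi_c - crop_size // 2, extent - crop_size))
        window.append((start, start + crop_size))
    return window
```

The clamping keeps the window fully inside the second image even for tooth instances near the border, so the subsequent single-tooth segmentation always receives a full-size crop.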
  • In some embodiments, before tooth instance segmentation is performed on the image to be processed, the method further includes: performing upper and lower teeth segmentation according to the image to be segmented, determining a region of interest in the image to be segmented, and cropping the image to be segmented according to the region of interest to obtain the image to be processed.
  • In this way, the obtained image to be processed retains most of the tooth information in the image to be segmented while removing most of the irrelevant information (such as background), which helps the efficiency and accuracy of subsequent tooth instance segmentation and tooth position location.
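Determining the region of interest from an upper/lower-teeth segmentation amounts to taking the bounding box of the foreground mask; a 2-D sketch (the patent operates on 3-D CBCT volumes, where a third axis would be handled identically):

```python
def roi_from_mask(mask):
    """Bounding box (row_min, row_max, col_min, col_max), inclusive, of the
    foreground pixels in a binary 2-D mask, e.g. an upper/lower-teeth mask."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return rows[0], rows[-1], cols[0], cols[-1]
```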
  • Embodiments of the present disclosure also provide a dental image processing device, including:
  • a tooth instance segmentation module, configured to perform tooth instance segmentation on an image to be processed to obtain a tooth instance segmentation result of the image; where one tooth instance corresponds to one tooth, and the segmentation result includes information on the tooth instance to which each pixel of the image belongs;
  • the tooth position location module is configured to perform tooth location location based on the tooth instance segmentation result, and obtain the tooth location location result of the to-be-processed image.
  • the tooth instance segmentation module is configured to sequentially predict pixel sets belonging to different tooth instances from a plurality of pixels in the to-be-processed image to obtain a plurality of pixel sets in the to-be-processed image. Prediction results of multiple pixel sets included in the tooth instance; according to the prediction results of multiple pixel sets included in the multiple tooth instances, the segmentation result of the tooth instance of the image to be processed is obtained.
  • the tooth instance segmentation module is configured to predict a center pixel of a target tooth instance from a plurality of to-be-processed pixels of the to-be-processed image; wherein the to-be-processed pixel represents the Pixels in the image to be processed that are not predicted to belong to any tooth instance, and the target tooth instance represents the currently predicted tooth instance; according to the coordinates of the center pixel of the target tooth instance, predict from a plurality of the pixels to be processed A pixel set belonging to the target tooth instance is obtained, and a prediction result of the pixel set included in the target tooth instance is obtained.
  • the tooth instance segmentation module is configured to, from a plurality of pixels to be processed in the to-be-processed image, determine a first pixel with the highest probability of being located at the center of the tooth instance; When the probability that the pixel is located at the center of the tooth instance is greater than or equal to the first preset value, the first pixel is predicted to be the center pixel of the target tooth instance.
  • In some embodiments, the tooth instance segmentation module is configured to predict the first pixel as the center pixel of the target tooth instance when, among the pixels to be processed, the number of pixels whose probability of being located at a tooth instance center is greater than or equal to the first preset value is greater than or equal to the second preset value, and the probability that the first pixel is located at a tooth instance center is greater than or equal to the first preset value.
  • the tooth instance segmentation module is configured to determine the predicted coordinates of the tooth instance center pointed to by a second pixel of the plurality of pixels to be processed; wherein the second pixel represents a plurality of For any one of the pixels to be processed, the predicted coordinates of the center of the tooth instance pointed to by the second pixel represent the coordinates of the center pixel of the tooth instance to which the second pixel belongs based on the prediction of the second pixel; According to the predicted coordinates of the center of the tooth instance pointed to by the second pixel, and the coordinates of the center pixel of the target tooth instance, predict the probability that the second pixel belongs to the center of the target tooth instance; The probability that a pixel belongs to the center of the target tooth instance, a set of pixels belonging to the target tooth instance is predicted from a plurality of the pixels to be processed.
  • the tooth instance segmentation module is configured to determine a predicted offset from a second pixel of the plurality of pixels to be processed to a center pixel of the tooth instance to which the second pixel belongs;
  • the predicted coordinates of the center of the tooth instance pointed to by the second pixel are determined according to the coordinates of the second pixel and the predicted offset of the second pixel from the center pixel of the tooth instance to which the second pixel belongs.
  • the tooth instance segmentation module is configured to predict a clustering parameter corresponding to the target tooth instance; wherein the clustering parameter is used to represent the prediction of the center pixel of the target tooth instance The degree of dispersion of coordinates; according to the predicted coordinates of the center of the tooth instance pointed to by the second pixel, the coordinates of the center pixel of the target tooth instance, and the clustering parameters corresponding to the target tooth instance, predict the second pixel The probability of belonging to the center of the target tooth instance.
  • the apparatus further includes:
  • a first prediction module configured to input the image to be processed into a first neural network, and obtain a predicted offset from the second pixel to the center pixel of the tooth instance to which the second pixel belongs via the first neural network , the clustering parameter of the tooth instance to which the second pixel belongs, and the probability that the second pixel is located at the center of the tooth instance.
  • the first neural network includes a first decoder and a second decoder
  • the first prediction module is configured to input the image to be processed into the first neural network, obtain, via the first decoder, the predicted offset from the second pixel to the center pixel of the tooth instance to which the second pixel belongs and the clustering parameter of that tooth instance, and obtain, via the second decoder, the probability that the second pixel is located at a tooth instance center.
  • the apparatus further includes:
  • the second prediction module is configured to input a training image into the first neural network and obtain, via the first neural network, the predicted offset from a third pixel in the training image to the center pixel of the first tooth instance to which the third pixel belongs, the clustering parameter corresponding to the first tooth instance, and the probability that the third pixel is located at a tooth instance center; where the third pixel represents any pixel in the training image, and the first tooth instance represents the tooth instance to which the third pixel belongs;
  • a first determination module configured to determine the center of the tooth instance pointed to by the third pixel according to the coordinates of the third pixel and the predicted offset from the third pixel to the center pixel of the first tooth instance The predicted coordinates, wherein the predicted coordinates of the center of the tooth instance pointed to by the third pixel represent the coordinates of the center pixel of the first tooth instance predicted based on the third pixel;
  • a second determining module configured to be based on the predicted coordinates of the center of the tooth instance pointed to by the third pixel, the predicted coordinates of the center of the tooth instance pointed to by different pixels belonging to the first tooth instance, and the first tooth instance Corresponding clustering parameters, determine the probability that the third pixel belongs to the center of the first tooth instance;
  • the training module is configured to train the first neural network according to the probability that the third pixel is located at a tooth instance center, the probability that the third pixel belongs to the center of the first tooth instance, and the ground-truth label of whether the third pixel lies inside the tooth.
  • In some embodiments, the tooth position location module is configured to predict the tooth class to which the pixels included in a second tooth instance in the tooth instance segmentation result belong, where the second tooth instance represents any tooth instance in the segmentation result; and to determine, according to the tooth classes of the pixels included in the second tooth instance, the tooth class to which the second tooth instance belongs.
  • the apparatus further includes:
  • a downsampling module configured to downsample the image to be segmented to a first resolution to obtain a first image, and to obtain the image to be processed according to the first image;
  • a third determining module configured to obtain a second image according to the to-be-segmented image; wherein the resolution of the second image is a second resolution, and the second resolution is higher than the first resolution;
  • a first cropping module configured to crop an image corresponding to the third tooth instance from the second image according to the coordinates of the center pixel of the third tooth instance in the tooth instance segmentation result; wherein the The third tooth instance represents any tooth instance in the tooth instance segmentation result;
  • the first segmentation module is configured to segment the image corresponding to the third tooth instance to obtain a segmentation result of the third tooth instance at the second resolution.
  • the apparatus further includes:
  • the second segmentation module is configured to perform upper and lower teeth segmentation according to the to-be-segmented image, and to determine the region of interest in the to-be-segmented image;
  • the second cropping module is configured to crop the to-be-segmented image according to the region of interest to obtain the to-be-processed image.
  • Embodiments of the present disclosure also provide an electronic device, including: one or more processors and a memory configured to store executable instructions, where the one or more processors are configured to call the executable instructions stored in the memory to execute the above method.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
  • Embodiments of the present disclosure further provide a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor of the electronic device executes the tooth image processing method described in any of the foregoing embodiments.
  • In summary, the embodiments of the present disclosure provide a tooth image processing method, apparatus, electronic device, storage medium, and program.
  • Tooth instance segmentation is performed on the image to be processed to obtain a tooth instance segmentation result, and tooth position location is performed based on that result to obtain the tooth position location result of the image.
  • Because the tooth instance segmentation result distinguishes not only teeth from background but also different teeth from one another, performing tooth position location on it improves the accuracy of tooth position location.
  • FIG. 1 shows a schematic diagram of an application scenario of the tooth image processing method provided by an embodiment of the present disclosure;
  • FIG. 2 shows a schematic flowchart of the tooth image processing method provided by an embodiment of the present disclosure;
  • FIG. 3 shows a schematic diagram of a tooth instance segmentation result of an image to be processed, provided by an embodiment of the present disclosure;
  • FIG. 4 shows a schematic diagram of a system architecture to which the tooth image processing method of the embodiments of the present disclosure can be applied;
  • FIG. 5 shows a schematic diagram of a CBCT cross-sectional image with highlight artifacts and missing teeth, provided by an embodiment of the present disclosure;
  • FIG. 6 shows a block diagram of a tooth image processing apparatus 600 provided by an embodiment of the present disclosure;
  • FIG. 7 shows a block diagram of an electronic device 700 provided by an embodiment of the present disclosure;
  • FIG. 8 shows a block diagram of an electronic device 1900 provided by an embodiment of the present disclosure.
  • CBCT images are more and more widely used in the field of modern oral medicine, especially in dentistry.
  • doctors need accurate geometric information such as the three-dimensional shape of the teeth to make a diagnosis and determine a personalized treatment plan.
  • Automatically obtaining the patient's dental anatomy and tooth position information through an algorithm can improve doctors' reading efficiency and provide information for producing dental restoration materials; therefore, automatic tooth segmentation and tooth position determination algorithms based on CBCT images are of great clinical significance. Due to factors such as noise interference, blurred tooth boundaries, and the closeness of the tooth-root and jaw gray values in CBCT images, accurate tooth segmentation faces many difficulties, and the automatic determination of tooth positions is likewise a difficult problem.
  • Manual delineation of the 3D tooth model by a dentist requires a great deal of time and is prone to occasional errors; threshold-based methods struggle with the uneven gray distribution and blurred boundaries of teeth in CBCT images; interactive segmentation methods require manual participation and handle irregularly shaped, blurred-boundary teeth poorly; level-set-based automatic segmentation is sensitive to initialization and has difficulty adaptively evolving the segmentation boundary in different parts of the tooth; active-contour-model-based methods likewise need a good initialization curve and are ineffective for blurred tooth boundaries and low-density shadows within teeth.
  • The embodiments of the present disclosure provide a method, apparatus, electronic device, storage medium, and program for processing a tooth image: tooth instance segmentation is performed on an image to be processed to obtain a tooth instance segmentation result, and tooth position location is performed based on the tooth instance segmentation result to obtain a tooth position location result of the image to be processed. Because tooth position location is based on a tooth instance segmentation result that distinguishes not only teeth from the background but also different teeth from one another, the accuracy of tooth position location can be improved.
  • FIG. 1 shows a schematic diagram of an application scenario of a method for processing a tooth image provided by an embodiment of the present disclosure.
  • Given the image to be segmented 101 (i.e., the original data), upper and lower teeth can be segmented according to the image to be segmented, and a region of interest for the teeth in the image to be segmented is determined, namely 102.
  • the image to be segmented may be downsampled to the first resolution to obtain a first image with a low spatial resolution, namely 103 , and the first image is cropped according to the region of interest to obtain an image to be processed, namely 104 .
  • Tooth position location can be performed for unilateral teeth (for example, the right teeth); by flipping the image to be processed left-right, the tooth position location result for the teeth on the other side (such as the left teeth) can be obtained in the same manner.
  • A second image may be obtained according to the image to be segmented, namely 107, wherein the resolution of the second image is the second resolution, and the second resolution is higher than the first resolution. Then, according to the coordinates of the center pixel of any tooth instance in the tooth position location result, the image corresponding to that tooth instance, namely 108, can be cropped from the second image, and single-tooth segmentation can be performed on the image corresponding to the tooth instance to obtain and output the segmentation result of the tooth instance at the second resolution, namely 109. In this way, the segmentation result of each tooth instance can be obtained at a higher resolution.
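  • The multi-resolution flow above can be sketched numerically. Everything below (array shapes, the downsampling factors, the ROI slices, and the crop half-size) is an assumed toy value for illustration, not anything specified by the disclosure:

```python
import numpy as np

# Stand-in for the CBCT volume (the image to be segmented).
image = np.zeros((128, 128, 128), dtype=np.float32)

first = image[::4, ::4, ::4]                      # first image at the low first resolution (103)
roi = (slice(4, 28), slice(6, 30), slice(8, 32))  # region of interest for the teeth (102)
to_process = first[roi]                           # image to be processed (104)

second = image[::2, ::2, ::2]                     # second image at the higher second resolution (107)
center, half = np.array([30, 32, 34]), 8          # center pixel of one tooth instance
patch = second[center[0]-half:center[0]+half,
               center[1]-half:center[1]+half,
               center[2]-half:center[2]+half]     # image corresponding to the instance (108)
```

Single-tooth segmentation would then run on `patch`, yielding a per-tooth result at the second resolution (109).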
  • FIG. 2 shows a schematic flowchart of a method for processing a tooth image provided by an embodiment of the present disclosure.
  • the processing method of the tooth image may be performed by a terminal device or a server or other processing device.
  • The terminal device may be a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • a method for processing a tooth image may be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in FIG. 2 , the processing method of the tooth image includes steps S21 to S22.
  • step S21 the image to be processed is subjected to tooth instance segmentation to obtain a tooth instance segmentation result of the to-be-processed image.
  • One tooth instance corresponds to one tooth
  • the tooth instance segmentation result includes the information of the tooth instance to which the pixel in the image to be processed belongs.
  • the image to be processed may represent a tooth image to be processed, wherein the tooth image may represent an image containing at least part of tooth information.
  • the image to be processed may be a CBCT image.
  • CBCT images can be acquired by cone beam computed tomography equipment or other devices.
  • the image to be processed may also be a CT image or other images containing tooth information, which is not limited here.
  • the image to be processed may be a three-dimensional image or a two-dimensional image.
  • the image to be processed may be a three-dimensional CBCT image.
  • tooth instance segmentation may mean segmenting different teeth. That is, not only teeth and backgrounds, but also different teeth can be distinguished by tooth instance segmentation.
  • Performing tooth instance segmentation on the image to be processed may mean segmenting different teeth in the image to be processed to obtain a set of pixels included in each tooth in the image to be processed.
  • the information of the tooth instance to which the pixel in the image to be processed belongs may be represented by a category.
  • The tooth instance segmentation result may include 33 categories: 32 tooth instance categories and one background category. Any tooth instance category corresponds to one tooth instance, and the background category indicates that a pixel does not belong to the interior of any tooth.
  • any pixel in the image to be processed can belong to any of the 33 categories.
  • the tooth instance segmentation result may be represented in the form of data such as images, tables, and matrices, as long as the information of the tooth instance to which the pixels in the image to be processed belong can be represented.
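  • For instance, a tooth instance segmentation result could be stored as an integer label map the same size as the image to be processed. The toy encoding below is a hypothetical illustration (the disclosure equally allows tables or matrices):

```python
import numpy as np

# Toy 2D example: label 0 = background, labels 1..32 = tooth instance categories.
seg_result = np.zeros((8, 8), dtype=np.int32)
seg_result[2:4, 2:4] = 1   # pixels predicted to belong to tooth instance 1
seg_result[2:4, 5:7] = 2   # pixels predicted to belong to tooth instance 2

# The pixel set of one tooth instance can be recovered from the label map.
pixels_of_instance_1 = np.argwhere(seg_result == 1)
```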
  • FIG. 3 shows a schematic diagram of a tooth instance segmentation result of an image to be processed provided by an embodiment of the present disclosure.
  • In FIG. 3, the pixel value of pixels that do not belong to the interior of any tooth (that is, belong to the background) is 0; pixels belonging to different tooth instances have different gray values, and pixels belonging to the same tooth instance have the same value.
  • step S22 the tooth position location is performed based on the tooth instance segmentation result, and the tooth location result of the image to be processed is obtained.
  • Tooth position location may refer to determining at least one of: the tooth position to which a tooth instance belongs, and the tooth position to which a pixel in the image to be processed belongs. That is, by performing tooth position location based on the tooth instance segmentation result, it can be determined to which tooth position each tooth instance in the image to be processed belongs.
  • the tooth position location result may include at least one of the information of the tooth position to which the tooth instance in the to-be-processed image belongs, and the information of the tooth position to which the pixel in the to-be-processed image belongs.
  • The tooth position location result may be represented by the Fédération Dentaire Internationale (FDI) tooth position notation; the FDI notation is also called the International Organization for Standardization (ISO) 3950 notation.
  • The tooth position location result may also be represented by other tooth position notations, such as the Palmer notation or the Universal Numbering System (UNS).
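  • As background on the FDI (ISO 3950) notation mentioned above: a permanent tooth is identified by two digits, the first being the quadrant and the second the position from the midline. The small helper below illustrates the encoding (`fdi_label` is a hypothetical name, not from the disclosure):

```python
def fdi_label(quadrant: int, position: int) -> int:
    """FDI / ISO 3950 two-digit notation for permanent teeth: the first digit
    is the quadrant (1 = upper right, 2 = upper left, 3 = lower left,
    4 = lower right) and the second is the tooth position 1-8 from the midline."""
    if not (1 <= quadrant <= 4 and 1 <= position <= 8):
        raise ValueError("permanent-teeth quadrants are 1-4, positions 1-8")
    return 10 * quadrant + position

fdi_label(1, 6)  # upper-right first molar -> 16
```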
  • the tooth instance segmentation result of the to-be-processed image is obtained by performing tooth instance segmentation on the to-be-processed image, and the tooth position location is performed based on the tooth instance segmentation result to obtain the tooth-position location result of the to-be-processed image.
  • Tooth position location is performed based on a tooth instance segmentation result that distinguishes not only teeth from the background but also different teeth from one another, which can improve the accuracy of tooth position location.
  • Moreover, by first performing tooth instance segmentation and then performing tooth position location based on the tooth instance segmentation result, more accurate tooth position location results can be obtained in complex situations such as varying tooth shapes, missing teeth, and implants.
  • FIG. 4 shows a schematic diagram of a system architecture to which the dental image processing method according to the embodiment of the present disclosure can be applied; as shown in FIG. 4 , the system architecture includes an image acquisition terminal 401 , a network 402 and a dental image processing terminal 403 .
  • The image acquisition terminal 401 and the dental image processing terminal 403 establish a communication connection through the network 402, and the image acquisition terminal 401 reports the image to be processed to the dental image processing terminal 403 through the network 402. In response to receiving the image to be processed, the dental image processing terminal 403 uses a tooth instance segmentation model and a tooth position location model to perform tooth instance segmentation and tooth position location on the image to be processed, obtaining the tooth position location result of the image to be processed. Finally, the dental image processing terminal 403 uploads the tooth position location result of the image to be processed to the network 402, which sends it to the image acquisition terminal 401.
  • the image acquisition terminal 401 may include an image acquisition device, and the dental image processing terminal 403 may include a visual processing device or a remote server with visual information processing capability.
  • Network 402 may employ wired or wireless connections.
  • When the dental image processing terminal 403 is a visual processing device, the image acquisition terminal 401 can be connected to the visual processing device through a wired connection, for example, performing data communication through a bus; when the dental image processing terminal 403 is a remote server, the image acquisition terminal 401 can exchange data with the remote server through a wireless network.
  • the image acquisition terminal 401 may be a visual processing device with an image acquisition module, and is specifically implemented as a host with a camera.
  • the dental image processing method according to the embodiment of the present disclosure may be executed by the image acquisition terminal 401, and the above-mentioned system architecture may not include the network 402 and the dental image processing terminal 403.
  • Performing tooth instance segmentation on the image to be processed to obtain a tooth instance segmentation result of the image to be processed includes: sequentially predicting pixel sets belonging to different tooth instances from a plurality of pixels in the image to be processed, to obtain prediction results of the multiple pixel sets included in the multiple tooth instances in the image to be processed; and obtaining the tooth instance segmentation result of the image to be processed according to the prediction results of the multiple pixel sets included in the multiple tooth instances.
  • A set of pixels belonging to any tooth instance may represent the set of pixels contained in that tooth instance. Pixel sets belonging to different tooth instances can be sequentially predicted from the plurality of pixels in the image to be processed, so as to obtain the pixel sets included in the multiple tooth instances in the image to be processed. For example, the pixel set belonging to the first tooth instance is predicted first; after it has been predicted, the pixel set belonging to the second tooth instance is predicted; after that prediction is complete, the pixel set belonging to the third tooth instance is predicted, and so on. That is, in this implementation, prediction may be performed for only one tooth instance at a time.
  • the prediction result of the pixel set included in any tooth instance may include information of the predicted pixels belonging to the tooth instance, for example, may include the coordinates of the predicted pixels belonging to the tooth instance.
  • A pixel set belonging to each tooth instance may be sequentially predicted from the plurality of pixels in the image to be processed, to obtain a prediction result of the pixel set included in each tooth instance in the image to be processed; according to the prediction results of the pixel sets included in the tooth instances, the tooth instance segmentation result of the image to be processed is obtained.
  • pixel sets belonging to some tooth instances may be predicted only from a plurality of pixels of the image to be processed, without predicting pixel sets belonging to each tooth instance.
  • the prediction result of the pixel set included in any tooth instance may be represented by a prediction mask (mask) corresponding to the tooth instance.
  • the size of the prediction mask corresponding to the tooth instance may be the same as the image to be processed.
  • the predicted pixel value of the pixel belonging to the tooth instance is different from the predicted pixel value of the pixel not belonging to the tooth instance.
  • the pixel value of the predicted pixel belonging to the tooth instance is 1, and the pixel value of the predicted pixel not belonging to the tooth instance is 0.
  • data forms such as tables and matrices can also be used to represent the prediction results of the pixel sets included in any tooth instance.
  • In this way, the prediction results of the multiple pixel sets included in the multiple tooth instances in the image to be processed are obtained, and the tooth instance segmentation result of the image to be processed can be derived from them; an accurate tooth instance segmentation result can thus be obtained, which effectively copes with complex situations in CBCT images such as noise interference, blurred tooth boundaries, and the similarity of the gray values of tooth roots and the jawbone.
  • Different tooth instances can also be predicted in parallel; for example, the pixel sets belonging to all tooth instances can be predicted at the same time to obtain the prediction result of the pixel set included in each tooth instance in the image to be processed, and then the tooth instance segmentation result of the image to be processed is obtained according to those prediction results.
  • Sequentially predicting pixel sets belonging to different tooth instances from the plurality of pixels in the image to be processed, to obtain prediction results of the multiple pixel sets included in the multiple tooth instances in the image to be processed, includes: predicting the center pixel of a target tooth instance from a plurality of to-be-processed pixels of the image to be processed, wherein a to-be-processed pixel is a pixel in the image to be processed that has not been predicted to belong to any tooth instance, and the target tooth instance is the tooth instance currently being predicted; and, according to the coordinates of the center pixel of the target tooth instance, predicting the pixel set belonging to the target tooth instance from the plurality of to-be-processed pixels, to obtain the prediction result of the pixel set included in the target tooth instance.
  • Before prediction has been performed for any tooth instance, the center pixel of the target tooth instance may be predicted from all pixels of the image to be processed; that is, in this case, all pixels of the image to be processed can be regarded as to-be-processed pixels. After the prediction of the pixel set belonging to a certain tooth instance is completed, the center pixel of the next tooth instance (i.e., the new target tooth instance) can be predicted from the remaining to-be-processed pixels of the image to be processed.
  • the set of pixels predicted to belong to any tooth instance includes the predicted central pixel of that tooth instance, as well as other pixels predicted to belong to that tooth instance (ie, non-central pixels).
  • The coordinates of the center pixel of the target tooth instance can be written as ĉ.
  • By predicting the center pixel of the target tooth instance from the plurality of to-be-processed pixels of the image to be processed, and predicting, according to the coordinates of that center pixel, the pixel set belonging to the target tooth instance from the plurality of to-be-processed pixels, the prediction result of the pixel set included in the target tooth instance is obtained; in this way, the accuracy of the obtained prediction result of the pixel set included in any tooth instance can be improved.
  • Predicting the center pixel of the target tooth instance from the plurality of to-be-processed pixels in the image to be processed may include: determining, from the plurality of to-be-processed pixels, the first pixel, i.e., the pixel with the highest probability of being located at the center of a tooth instance; and, when the probability that the first pixel is located at the center of a tooth instance is greater than or equal to a first preset value, predicting the first pixel to be the center pixel of the target tooth instance.
  • The probability that pixel i of the image to be processed is located at the center of a tooth instance may be denoted as s_i.
  • the first pixel represents the pixel with the highest probability of being located at the center of the tooth instance among the plurality of pixels to be processed.
  • the first preset value may be 0.5.
  • those skilled in the art can flexibly set the first preset value according to actual application scenario requirements, which is not limited here.
  • In this way, the first pixel with the highest probability of being located at the center of a tooth instance is determined from the plurality of to-be-processed pixels in the image to be processed, and when that probability is greater than or equal to the first preset value, the first pixel is predicted as the center pixel of the target tooth instance; thus the center pixel of a tooth instance can be determined more accurately, which helps to segment the tooth instance accurately.
  • Predicting the first pixel as the center pixel of the target tooth instance may include: in the case where, among the plurality of to-be-processed pixels, the number of pixels whose probability of being located at the center of a tooth instance is greater than or equal to the first preset value is itself greater than or equal to a second preset value, and the probability that the first pixel is located at the center of a tooth instance is greater than or equal to the first preset value, predicting the first pixel to be the center pixel of the target tooth instance.
  • Here, the number of pixels whose probability of being located at the center of a tooth instance is greater than or equal to the first preset value is counted over the pixels in the image to be processed that have not been predicted to belong to any tooth instance, i.e., over the to-be-processed pixels.
  • the second preset value may be determined according to an average or empirical value of the number of pixels contained in a single tooth.
  • the second preset value may be 32.
  • those skilled in the art can also flexibly determine the second preset value according to at least one of actual application scenario requirements and experience, which is not limited here.
  • In this way, when, among the plurality of to-be-processed pixels, the number of pixels whose probability of being located at the center of a tooth instance is greater than or equal to the first preset value is greater than or equal to the second preset value, and the probability that the first pixel is located at the center of a tooth instance is greater than or equal to the first preset value, the first pixel is predicted as the center pixel of the target tooth instance and prediction continues from it; when, among the plurality of to-be-processed pixels, the number of pixels whose probability of being located at the center of a tooth instance is greater than or equal to the first preset value is smaller than the second preset value, the prediction can be stopped. This improves prediction efficiency and accuracy.
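  • The sequential procedure above (pick the most likely remaining center pixel, check the two preset thresholds, otherwise stop) can be sketched as follows. The function name, array layout, the Gaussian membership rule used to collect pixels, and the default thresholds are illustrative assumptions, not the disclosure's implementation:

```python
import numpy as np

def sequential_instance_prediction(seed, centers, sigma,
                                   t1=0.5, min_pixels=32, p_thresh=0.5):
    """Greedy sketch of sequential per-instance prediction.

    seed:    (N,) probability that each pixel lies at a tooth-instance center
    centers: (N, 3) predicted center coordinates e_i pointed to by each pixel
    sigma:   clustering parameter shared by all instances (a simplification)
    """
    labels = np.zeros(seed.shape[0], dtype=np.int32)  # 0 = not yet assigned
    instance_id = 0
    while True:
        idx = np.flatnonzero(labels == 0)             # to-be-processed pixels
        # stop when too few candidate center pixels remain (second preset value)
        if np.sum(seed[idx] >= t1) < min_pixels:
            break
        first = idx[np.argmax(seed[idx])]             # first pixel: highest center probability
        if seed[first] < t1:                          # first preset value check
            break
        instance_id += 1
        c_hat = centers[first]                        # center pixel of the target instance
        d2 = np.sum((centers[idx] - c_hat) ** 2, axis=1)
        prob = np.exp(-d2 / (2.0 * sigma ** 2))       # Gaussian center-membership probability
        labels[idx[prob > p_thresh]] = instance_id    # fourth preset value check
    return labels
```

Each iteration assigns at least the chosen center pixel itself, so the loop always terminates.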
  • Predicting a set of pixels belonging to the target tooth instance from the plurality of to-be-processed pixels according to the coordinates of the center pixel of the target tooth instance may include: determining the predicted coordinates of the center of the tooth instance pointed to by a second pixel of the plurality of to-be-processed pixels; wherein the second pixel represents any pixel among the plurality of to-be-processed pixels, and the predicted coordinates of the center of the tooth instance pointed to by the second pixel represent the coordinates, predicted based on the second pixel, of the center pixel of the tooth instance to which the second pixel belongs.
  • The predicted coordinates of the center of the tooth instance pointed to by the second pixel may be denoted as e_i.
  • the probability that the second pixel belongs to the center of the target tooth instance may be predicted according to the difference between the predicted coordinates of the center of the tooth instance pointed to by the second pixel and the coordinates of the center pixel of the target tooth instance .
  • For example, if the second pixel is pixel i, the predicted coordinates of the center of the tooth instance pointed to by the second pixel are e_i, and the coordinates of the center pixel of the target tooth instance are ĉ, then the difference between the predicted coordinates of the center of the tooth instance pointed to by the second pixel and the coordinates of the center pixel of the target tooth instance can be expressed as ‖e_i − ĉ‖.
  • The probability that the second pixel belongs to the center of the target tooth instance may be negatively correlated with the distance between the predicted coordinates of the tooth instance center pointed to by the second pixel and the coordinates of the center pixel of the target tooth instance. That is, the smaller this distance, the greater the probability that the second pixel belongs to the center of the target tooth instance; the greater this distance, the smaller that probability.
  • The greater the probability that the second pixel belongs to the center of the target tooth instance, the greater the probability that the second pixel belongs to the target tooth instance; the smaller the former probability, the smaller the latter.
  • If the probability that the second pixel belongs to the center of the target tooth instance is greater than a fourth preset value, the second pixel may be predicted to belong to the target tooth instance, that is, the pixel set predicted to belong to the target tooth instance includes the second pixel; if that probability is less than or equal to the fourth preset value, the second pixel may be predicted not to belong to the target tooth instance, that is, the pixel set predicted to belong to the target tooth instance does not include the second pixel.
  • the fourth preset value may be 0.5.
  • those skilled in the art can flexibly set the fourth preset value according to actual application scenario requirements, which is not limited here.
  • Determining the predicted coordinates of the center of the tooth instance pointed to by the second pixel of the plurality of to-be-processed pixels may include: determining the predicted offset from the second pixel to the center pixel of the tooth instance to which the second pixel belongs; and determining, according to the coordinates of the second pixel and that predicted offset, the predicted coordinates of the center of the tooth instance pointed to by the second pixel.
  • The predicted offset from the second pixel to the center pixel of the tooth instance to which the second pixel belongs may represent the predicted difference between the coordinates of the second pixel and the coordinates of the center pixel of the tooth instance to which the second pixel belongs.
  • The coordinates of the second pixel may be denoted as x_i, and the predicted offset from the second pixel to the center pixel of the tooth instance to which it belongs may be denoted as o_i.
  • In one example, the sum of the coordinates of the second pixel and the predicted offset can be determined as the predicted coordinates of the center of the tooth instance pointed to by the second pixel, i.e., e_i = x_i + o_i. In another example, the difference between the coordinates of the second pixel and the predicted offset may be determined as the predicted coordinates of the center of the tooth instance pointed to by the second pixel, i.e., e_i = x_i − o_i.
  • In this way, the predicted offset from the second pixel to the center pixel of the tooth instance to which it belongs is determined, and the predicted coordinates of the center of the tooth instance pointed to by the second pixel are determined from the coordinates of the second pixel and that predicted offset; thus more accurate predicted coordinates of the pointed-to tooth instance center can be obtained.
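  • As a minimal numeric sketch of e_i = x_i + o_i (the coordinates and offsets below are made-up example values):

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0],
              [4.0, 0.0, 2.0]])   # coordinates x_i of two pixels
o = np.array([[0.5, -1.0, 0.0],
              [-2.0, 1.0, 1.0]])  # predicted offsets o_i to the instance center
e = x + o                         # predicted center coordinates e_i pointed to by each pixel
```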
  • Predicting, according to the predicted coordinates of the center of the tooth instance pointed to by the second pixel and the coordinates of the center pixel of the target tooth instance, the probability that the second pixel belongs to the center of the target tooth instance may include: predicting the clustering parameter corresponding to the target tooth instance, wherein the clustering parameter is used to represent the degree of dispersion of the predicted coordinates of the center pixel of the target tooth instance; and predicting the probability that the second pixel belongs to the center of the target tooth instance according to the predicted coordinates of the center of the tooth instance pointed to by the second pixel, the coordinates of the center pixel of the target tooth instance, and the clustering parameter corresponding to the target tooth instance.
  • the clustering parameter corresponding to the target tooth instance may be any parameter that can represent the degree of dispersion of the predicted coordinates of the central pixel of the target tooth instance.
  • the clustering parameter corresponding to the target tooth instance may represent the standard deviation of the predicted coordinates of the central pixel of the target tooth instance.
  • In this case, the clustering parameter corresponding to the target tooth instance can be denoted as σ.
  • the clustering parameter corresponding to the target tooth instance may represent the variance of the predicted coordinates of the central pixel of the target tooth instance.
  • In this case, the clustering parameter corresponding to the target tooth instance can be denoted as σ².
  • the clustering parameter corresponding to the target tooth instance may be negatively correlated with the variance of the predicted coordinates of the central pixel of the target tooth instance.
  • For example, in this case the clustering parameter corresponding to the target tooth instance can be expressed as 1/(2σ²).
  • the clustering parameters corresponding to different tooth instances may be different, and the corresponding clustering parameters may be predicted for each tooth instance respectively.
  • For example, the probability that the second pixel belongs to the center of the target tooth instance may be exp(−‖e_i − ĉ‖² / (2σ²)), where exp(X) represents e raised to the power X.
  • In this way, the probability that the second pixel belongs to the center of the target tooth instance is predicted according to the predicted coordinates of the pointed-to tooth instance center, the coordinates of the center pixel of the target tooth instance, and the clustering parameter corresponding to the target tooth instance, thereby improving the accuracy of that predicted probability in some embodiments of the present disclosure.
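  • The Gaussian center-membership probability above can be sketched as a small helper (`center_membership` and its arguments are assumed names and example values):

```python
import numpy as np

def center_membership(e_i, c_hat, sigma):
    """exp(-||e_i - c_hat||^2 / (2 * sigma^2)): probability that a pixel whose
    predicted center is e_i belongs to the center of the instance whose center
    pixel has coordinates c_hat."""
    d2 = np.sum((np.asarray(e_i) - np.asarray(c_hat)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

Identical coordinates give probability 1; the probability decays as the distance grows or as σ shrinks, matching the negative correlation described above.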
  • The method further includes: inputting the image to be processed into a first neural network, and obtaining, via the first neural network, the predicted offset from the second pixel to the center pixel of the tooth instance to which the second pixel belongs, the clustering parameter of the tooth instance to which the second pixel belongs, and the probability that the second pixel is located at the center of a tooth instance.
  • Through the first neural network, the predicted offset from each pixel in the image to be processed to the center pixel of the tooth instance to which that pixel belongs, the clustering parameter of each tooth instance in the image to be processed, and the probability that each pixel in the image to be processed is located at the center of a tooth instance can be obtained.
  • the first neural network may also only process some pixels in the image to be processed, which is not limited here.
  • Processing the image to be processed through the first neural network can improve the accuracy of the obtained predicted offsets, clustering parameters, and probabilities that pixels are located at tooth instance centers, and can also improve the speed at which they are obtained.
  • In some embodiments, the first neural network includes a first decoder and a second decoder. Inputting the image to be processed into the first neural network and obtaining, via the first neural network, the predicted offset from the second pixel to the center pixel of the tooth instance to which the second pixel belongs, the clustering parameter of the tooth instance to which the second pixel belongs, and the probability that the second pixel is located at the center of a tooth instance includes: inputting the image to be processed into the first neural network, obtaining the predicted offset from the second pixel to the center pixel of the tooth instance to which it belongs and the clustering parameter of that tooth instance via the first decoder, and obtaining the probability that the second pixel is located at the center of a tooth instance via the second decoder.
  • In this way, the accuracy of the resulting predicted offsets, clustering parameters, and probabilities that a pixel is located at the center of a tooth instance can be improved in some embodiments of the present disclosure.
  • The method may further include: inputting a training image into the first neural network, and obtaining, via the first neural network, the predicted offset from a third pixel in the training image to the center pixel of the first tooth instance, wherein the first tooth instance represents the tooth instance to which the third pixel belongs; determining, according to the coordinates of the third pixel and the predicted offset from the third pixel to the center pixel of the first tooth instance, the predicted coordinates of the center of the tooth instance pointed to by the third pixel, wherein these predicted coordinates represent the coordinates of the center pixel of the first tooth instance predicted based on the third pixel; and determining, according to the predicted coordinates of the center of the tooth instance pointed to by the third pixel and the predicted coordinates of the centers of the tooth instance pointed to by different pixels belonging to the first tooth instance, the probability that the third pixel belongs to the center of the first tooth instance.
  • the training image may be a three-dimensional image or a two-dimensional image.
  • the training image is a three-dimensional image
  • the first tooth instance may be denoted as S_k;
  • the center pixel of the first tooth instance may be denoted as c_k;
  • where k represents the number of the tooth instance.
  • the coordinates x_i of the third pixel may include the x-axis coordinate, y-axis coordinate and z-axis coordinate of the third pixel, and the predicted offset from the third pixel to the center pixel of the first tooth instance to which the third pixel belongs may include an x-axis predicted offset, a y-axis predicted offset, and a z-axis predicted offset.
  • the predicted offset from each pixel in the training image to the center pixel of the tooth instance to which it belongs can be obtained by the first neural network, so that an offset matrix of shape (3, D, H, W) can be obtained.
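  As an illustration of how the (3, D, H, W) offset matrix yields a per-pixel center prediction, the following is a minimal sketch assuming NumPy arrays; the function name `predicted_centers` and the toy shapes are illustrative, not from the disclosure. Each pixel's predicted instance-center coordinates are simply its own coordinates plus its predicted offset.

```python
import numpy as np

def predicted_centers(offsets):
    """Given a (3, D, H, W) offset volume predicted by the first
    neural network, return a (3, D, H, W) volume of predicted
    instance-center coordinates: e_i = x_i + o_i for every voxel."""
    _, d, h, w = offsets.shape
    # voxel coordinate grid laid out like the offset volume
    coords = np.stack(
        np.meshgrid(np.arange(d), np.arange(h), np.arange(w), indexing="ij"),
        axis=0,
    )
    return coords + offsets

offsets = np.zeros((3, 4, 4, 4))
offsets[0, 1, 2, 3] = 0.5   # this voxel predicts a center 0.5 voxels away on axis 0
e = predicted_centers(offsets)
```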
  • the mean value of the predicted coordinates of the center of the tooth instance pointed to by different pixels belonging to the first tooth instance can be obtained.
  • the mean of the predicted coordinates of the center of the tooth instance pointed to by the different pixels belonging to the first tooth instance may be expressed as ē_k = (1/|S_k|) Σ_{j ∈ S_k} e_j, where e_j represents the predicted coordinates of the center of the tooth instance pointed to by pixel j belonging to the first tooth instance, and |S_k| represents the number of pixels belonging to the first tooth instance.
  • determining the probability that the third pixel belongs to the center of the first tooth instance according to the predicted coordinates of the center of the tooth instance pointed to by the third pixel and the predicted coordinates of the center of the tooth instance pointed to by different pixels belonging to the first tooth instance may include: determining the mean of the predicted coordinates of the center of the tooth instance pointed to by each pixel belonging to the first tooth instance; and determining, according to the difference between the predicted coordinates of the center of the tooth instance pointed to by the third pixel and that mean, the probability that the third pixel belongs to the center of the first tooth instance.
  • the clustering parameter corresponding to the first tooth instance may be denoted as σ_k. For example, the probability that the third pixel i belongs to the center of the first tooth instance may be φ_k(i) = exp(−‖e_i − ē_k‖² / (2σ_k²)), where e_i represents the predicted coordinates of the center of the tooth instance pointed to by the third pixel and ē_k represents the mean of the predicted center coordinates over the pixels belonging to the first tooth instance.
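  The membership probability described above can be sketched as a Gaussian of the distance between a pixel's predicted center and the instance's mean center, with the clustering parameter σ_k as the bandwidth. This is a hedged illustration of one common form of such a probability, not necessarily the exact formula of the disclosure:

```python
import numpy as np

def center_probability(e, center, sigma):
    """Gaussian membership sketch: probability that a pixel whose
    predicted tooth-instance center is `e` belongs to the center of
    the instance whose (mean) center coordinates are `center`; the
    clustering parameter `sigma` controls how fast it falls off."""
    d2 = np.sum((np.asarray(e, float) - np.asarray(center, float)) ** 2)
    return float(np.exp(-d2 / (2.0 * sigma ** 2)))

p_center = center_probability([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], sigma=1.0)
p_far = center_probability([0.0, 0.0, 0.0], [3.0, 0.0, 0.0], sigma=1.0)
```

  A pixel whose predicted center coincides with the instance center gets probability 1; the probability decays smoothly with distance, and a larger σ_k tolerates more scattered center predictions.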
  • the first neural network may be trained using a loss function such as a cross-entropy loss function.
  • where the third pixel is pixel i, the probability that the third pixel is located at the center of the tooth instance may be denoted as s_i, and the probability that the third pixel belongs to the center of the first tooth instance may be denoted as φ_k(i).
  • the loss function of the first neural network may be expressed, for example, as L = (1/N) Σ_i [𝟙(s_i ∈ S_k)·(s_i − φ_k(i))² + 𝟙(s_i ∉ S_k)·s_i²], where s_i ∈ S_k indicates that the third pixel belongs to the inside of the tooth, that is, the true value for the third pixel is that it lies inside tooth instance S_k.
  • N represents the total number of pixels in the training image.
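  Assuming a squared-error form of the training objective sketched above — pixels inside a tooth regress their center score s_i toward φ_k(i), background pixels toward 0, averaged over all N pixels — a toy implementation might look like this; the exact loss used by the disclosure may differ:

```python
def seed_loss(s, phi, inside):
    """Toy version of the training objective: each pixel's predicted
    center score s_i is regressed toward phi_k(i) if the pixel lies
    inside a tooth instance (ground truth `inside`), and toward 0
    otherwise; the squared errors are averaged over all N pixels."""
    n = len(s)
    total = 0.0
    for s_i, phi_i, in_tooth in zip(s, phi, inside):
        target = phi_i if in_tooth else 0.0
        total += (s_i - target) ** 2
    return total / n

# one in-tooth pixel predicted perfectly, one background pixel at 0
perfect = seed_loss([1.0, 0.0], [1.0, 0.5], [True, False])
```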
  • Training the first neural network through the above example enables the first neural network to learn the ability to segment different tooth instances in the tooth image.
  • Using the first neural network trained by this example to perform tooth instance segmentation can obtain stable and accurate tooth instance segmentation results in complex scenes, for example, in the case of missing teeth, abnormal morphology of teeth, low-density shadows in teeth, and the like.
  • inputting the training image into the first neural network and obtaining, via the first neural network, the predicted offset from the third pixel in the training image to the center pixel of the first tooth instance to which the third pixel belongs, the clustering parameter corresponding to the first tooth instance, and the probability that the third pixel is located at the center of the tooth instance may include: inputting the training image into the first neural network, obtaining, via the first decoder of the first neural network, the predicted offset from the third pixel in the training image to the center pixel of the first tooth instance to which the third pixel belongs and the clustering parameter corresponding to the first tooth instance, and obtaining, via the second decoder of the first neural network, the probability that the third pixel is located at the center of the tooth instance.
  • the first neural network adopts an Encoder-Decoder structure, and the specific network architecture is not limited herein.
  • performing tooth position location based on the tooth instance segmentation result to obtain the tooth position location result of the image to be processed includes: predicting the tooth position category to which the pixels included in the second tooth instance in the tooth instance segmentation result belong, wherein the second tooth instance represents any tooth instance in the tooth instance segmentation result; and determining, according to the tooth position categories to which the pixels included in the second tooth instance belong, the tooth position category to which the second tooth instance belongs.
  • a second neural network for predicting the tooth position category to which a pixel belongs may be pre-trained; the tooth instance segmentation result, or the tooth instance segmentation result together with the image to be processed, may be input into the second neural network, and the tooth position category to which the pixels included in each tooth instance in the tooth instance segmentation result belong is obtained via the second neural network, so that the tooth position category to which each tooth instance belongs can be determined according to the tooth position categories of its pixels; the second neural network may adopt a structure such as U-Net, which is not limited here.
  • a second neural network may be used to classify unilateral teeth; for example, a second neural network may be used to classify the teeth on the right side.
  • a second neural network can be used to divide the input image into 18 categories: 16 tooth position categories for the teeth on the right, one category for the teeth on the left, and one for the background. That is, the second neural network can be used to determine which of the 18 categories each pixel in the input image belongs to, so that the tooth position categories of the right teeth can be obtained; by flipping the input image left and right and inputting it into the second neural network, the tooth position categories of the left teeth can be obtained.
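  The left-right flip trick can be sketched as follows; `toy_net` is a stand-in for the second neural network (the real network is a per-voxel 18-class classifier, which is not reproduced here), and the helper name `classify_both_sides` is illustrative:

```python
import numpy as np

def classify_both_sides(net, volume):
    """Sketch of the unilateral-classifier trick: `net` labels each
    voxel with one of 18 categories (16 right tooth positions, one
    'left teeth' class, background).  Running the same net on the
    left-right mirrored volume, then mirroring the label map back,
    yields the tooth position categories of the left teeth."""
    right_labels = net(volume)
    left_labels = net(volume[..., ::-1])[..., ::-1]  # flip in, flip back out
    return right_labels, left_labels

# toy stand-in for the per-voxel classifier (elementwise threshold)
def toy_net(vol):
    return (vol > 0).astype(int)

vol = np.array([[[-1, 2], [3, -4]]])
r, l = classify_both_sides(toy_net, vol)
```

  Training one network on a single side roughly halves the number of tooth position classes it must distinguish, which is consistent with the text's remark that this reduces training difficulty.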
  • the difficulty of training the second neural network can be reduced.
  • the tooth class with the largest number of occurrences among the tooth classes to which each pixel included in the second tooth instance belongs may be used as the tooth class to which the second tooth instance belongs.
  • for example, if the second tooth instance includes 100 pixels, of which 80 pixels belong to the tooth position category of tooth position 34, 10 pixels belong to the tooth position category of tooth position 33, and 10 pixels belong to the tooth position category of tooth position 35, it can be determined that the tooth position category to which the second tooth instance belongs is tooth position 34.
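  The majority vote over per-pixel tooth position categories can be written directly with `collections.Counter`, using the 80/10/10 example from the text (the function name is illustrative):

```python
from collections import Counter

def instance_tooth_position(pixel_classes):
    """Majority vote: the tooth position category of a tooth instance
    is the category occurring most often among its pixels."""
    return Counter(pixel_classes).most_common(1)[0][0]

# the example from the text: 80 pixels vote 34, 10 vote 33, 10 vote 35
votes = [34] * 80 + [33] * 10 + [35] * 10
```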
  • by predicting the tooth position category to which the pixels included in the second tooth instance in the tooth instance segmentation result belong, and determining the tooth position category of the second tooth instance according to those per-pixel categories, the tooth position category to which the second tooth instance belongs can be accurately determined.
  • before performing tooth instance segmentation on the image to be processed, the method further includes: down-sampling the image to be segmented to a first resolution to obtain a first image; and obtaining the image to be processed according to the first image.
  • the method further includes: obtaining a second image according to the image to be segmented, wherein the resolution of the second image is a second resolution and the second resolution is higher than the first resolution; cropping, according to the coordinates of the center pixel of the third tooth instance in the tooth instance segmentation result, the image corresponding to the third tooth instance from the second image, wherein the third tooth instance represents any tooth instance in the tooth instance segmentation result; and segmenting the image corresponding to the third tooth instance to obtain the segmentation result of the third tooth instance at the second resolution.
  • the image to be segmented may represent a tooth image to be segmented.
  • the image to be segmented may be a three-dimensional image
  • the image to be segmented may be a three-dimensional CBCT image
  • the resolution of the image to be segmented may be 0.2mm × 0.2mm × 0.2mm or 0.3mm × 0.3mm × 0.3mm, etc., and the length, width and height may be (453 × 755 × 755) or (613 × 681 × 681), and so on.
  • the first resolution may be a spatial resolution.
  • the first resolution may be 0.6mm × 0.6mm × 0.6mm.
  • the image to be segmented may be a two-dimensional image.
  • the first image may be normalized to obtain the first normalized image; the first normalized image may be cropped to obtain the image to be processed.
  • the size of the image to be processed may be (112, 128, 144).
  • the pixel values of the first image may be normalized based on a preset interval to obtain the first normalized image.
  • normalizing the pixel values of the first image based on the preset interval may include: for a fourth pixel in the first image, if the pixel value of the fourth pixel is smaller than the lower boundary value of the preset interval, determining the normalized value of the fourth pixel to be 0, where the fourth pixel represents any pixel in the first image; if the pixel value of the fourth pixel is greater than or equal to the lower boundary value of the preset interval and less than or equal to the upper boundary value of the preset interval, determining the normalized value of the fourth pixel by linearly mapping the pixel value into [0, 1]; and if the pixel value of the fourth pixel is greater than the upper boundary value of the preset interval, determining the normalized value of the fourth pixel to be 1.
  • for example, suppose the preset interval is [-1000, 1500] and the pixel value of pixel i is u. If u < -1000, the normalized value of pixel i is determined to be 0; if -1000 ≤ u ≤ 1500, (u - (-1000)) / 2500 is determined as the normalized value of pixel i; and if u > 1500, the normalized value of pixel i is determined to be 1.
  • the pixel values in the obtained normalized image can be in the interval [0, 1].
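  The clamp-and-scale normalization described above, with the example preset interval [-1000, 1500], can be written as a small helper (the function name `normalize` is illustrative):

```python
def normalize(u, lo=-1000.0, hi=1500.0):
    """Clamp-and-scale normalization: values below the preset interval
    map to 0, values above it to 1, and values inside it are mapped
    linearly to [0, 1] via (u - lo) / (hi - lo)."""
    if u < lo:
        return 0.0
    if u > hi:
        return 1.0
    return (u - lo) / (hi - lo)
```

  For CBCT/CT-style data this is the usual intensity-windowing step: everything outside the window is saturated, so the network sees a bounded, comparable input range.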
  • the image to be segmented may be down-sampled to a second resolution to obtain the second image.
  • the image to be segmented may be used as the second image.
  • the resolution of the image to be segmented is the second resolution.
  • the second resolution may be 0.2mm × 0.2mm × 0.2mm.
  • the second image may be normalized to obtain the second normalized image; according to the coordinates of the center pixel of the third tooth instance in the tooth instance segmentation result , from the second image, cropping out the image corresponding to the third tooth instance, which may include: according to the coordinates of the center pixel of the third tooth instance in the tooth instance segmentation result, from the second normalized image, cropping out the third tooth instance The image corresponding to the tooth instance.
  • the position of the center pixel of the third tooth instance in the tooth instance segmentation result may be used as the geometric center, and the image corresponding to the third tooth instance is cropped from the second image. That is, in this example, the geometric center of the image corresponding to the third tooth instance may be the position of the center pixel of the third tooth instance in the tooth instance segmentation result.
  • the size of the image corresponding to the third tooth instance may be (176, 112, 96).
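  Cropping a fixed-size patch whose geometric center is the instance's center pixel, with zero padding where the patch falls outside the volume, might look like the following sketch; the function name and toy shapes are illustrative, and the disclosure's actual patch size (e.g. (176, 112, 96)) is just a parameter here:

```python
import numpy as np

def crop_around(volume, center, size):
    """Crop a patch of the given size whose geometric center is
    `center` (e.g. the center pixel of the third tooth instance),
    zero-padding where the patch extends past the volume boundary."""
    out = np.zeros(size, dtype=volume.dtype)
    src_slices, dst_slices = [], []
    for c, s, dim in zip(center, size, volume.shape):
        start = c - s // 2                      # first source index of the patch
        src_lo, src_hi = max(start, 0), min(start + s, dim)
        src_slices.append(slice(src_lo, src_hi))
        dst_slices.append(slice(src_lo - start, src_hi - start))
    out[tuple(dst_slices)] = volume[tuple(src_slices)]
    return out

vol = np.arange(27).reshape(3, 3, 3)
patch = crop_around(vol, (1, 1, 1), (3, 3, 3))    # fully inside: identity
shifted = crop_around(vol, (0, 0, 0), (3, 3, 3))  # corner: zero-padded
```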
  • the geometric center of the image corresponding to the third tooth instance may not be the location of the center pixel of the third tooth instance in the tooth instance segmentation result.
  • the image corresponding to the third tooth instance may be input into a third neural network, and the image corresponding to the third tooth instance may be segmented through the third neural network to obtain the segmentation result of the third tooth instance at the second resolution.
  • the third neural network may adopt an architecture such as U-Net.
  • tooth instance segmentation and tooth position location can be quickly performed at a lower resolution first, and a segmentation result of each tooth instance at a higher resolution can be obtained.
  • before performing tooth instance segmentation on the image to be processed, the method further includes: performing upper and lower tooth segmentation according to the image to be segmented to determine a region of interest in the image to be segmented; and cropping the image to be segmented according to the region of interest to obtain the image to be processed.
  • a third image may be obtained according to the image to be segmented; upper and lower teeth are segmented according to the third image to determine a region of interest in the image to be segmented.
  • the image to be segmented may be down-sampled to a third resolution to obtain a third image.
  • the third resolution may be 0.2mm × 0.2mm × 0.2mm.
  • the image to be segmented may be used as the third image.
  • the pixel values of the third image may be normalized to obtain a third normalized image; upper and lower teeth may be segmented on the third normalized image to determine the region of interest in the image to be segmented.
  • upper and lower teeth may be segmented on the third image to determine a region of interest in the image to be segmented.
  • a fourth neural network may be used to perform upper and lower teeth segmentation on the two-dimensional (2D) slices of the third normalized image layer by layer along the coronal axis or the sagittal axis, that is, on the transverse-plane or sagittal-plane slices, to obtain the region of interest of each layer of two-dimensional slices of the third normalized image, and the region of interest of the third normalized image is obtained according to the regions of interest of the layers of two-dimensional slices.
  • the fourth neural network may be a convolutional neural network; the tooth boundaries in the transverse and sagittal planes are clearer and easier to segment.
  • the regions of interest of the two-dimensional slices of each layer of the third normalized image may be recombined to obtain the region of interest of the third normalized image.
  • the connected domain whose size is smaller than the third preset value in the three-dimensional region of interest may be removed.
  • the region of interest of the third normalized image is thus obtained; by removing the connected domains whose size is smaller than the third preset value in the three-dimensional region of interest, the influence of image noise on the segmentation result can be reduced, thereby optimizing the segmentation result.
  • the third preset value may be 150 mm³.
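  Removing connected domains smaller than a preset value (e.g. 150 mm³ converted to a voxel count at the working resolution) can be sketched with a plain breadth-first search over 6-connected voxels; a production system would more likely use `scipy.ndimage.label`, and the function name here is illustrative:

```python
import numpy as np
from collections import deque

def remove_small_components(mask, min_voxels):
    """Drop 6-connected components of a 3-D binary mask that contain
    fewer than `min_voxels` voxels, keeping the larger ones."""
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(mask)):
        if seen[start]:
            continue
        comp, q = [start], deque([start])
        seen[start] = True
        while q:  # BFS over the component containing `start`
            z, y, x = q.popleft()
            for dz, dy, dx in nbrs:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not seen[n]:
                    seen[n] = True
                    comp.append(n)
                    q.append(n)
        if len(comp) >= min_voxels:
            for v in comp:
                out[v] = True
    return out

mask = np.zeros((5, 5, 5), dtype=bool)
mask[0:2, 0:2, 0:2] = True   # 8-voxel component: kept
mask[4, 4, 4] = True         # isolated voxel: removed
cleaned = remove_small_components(mask, min_voxels=2)
```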
  • the image to be segmented may be down-sampled to a first resolution to obtain the first image, and the first image may be cropped according to the region of interest to obtain the image to be processed.
  • the cropped image to be processed may include a region of interest.
  • the geometric center of the region of interest may be used as the geometric center of the image to be processed, and the preset size may be used as the size of the image to be processed, and the image to be processed may be obtained by cropping.
  • the preset size may be (112, 128, 144).
  • the image to be processed obtained according to this embodiment can retain most of the tooth information in the image to be segmented and remove most of the irrelevant information (for example, background information), thereby improving the efficiency and accuracy of subsequent tooth instance segmentation, tooth position location, and the like.
  • the neural network in the embodiment of the present disclosure may adopt an architecture such as U-Net, which is not limited herein.
  • the convolutional blocks of the neural network may be composed of residual modules.
  • a dual attention (Dual Attention) module may be introduced between the encoder and decoder parts of the neural network.
  • with the tooth image processing method, even in the case of missing teeth, highlight artifacts, etc. in the image, accurate tooth position location results can be obtained, thereby helping to improve doctors' image-reading efficiency, for example, helping to improve the efficiency of doctors in analyzing CBCT images of a patient's teeth.
  • it can provide assistance for dentists to read images, so as to facilitate the determination of the position of missing teeth.
  • FIG. 5 shows schematic diagrams of CBCT cross-sectional images with highlight artifacts and missing teeth provided by an embodiment of the present disclosure; a in FIG. 5 is a schematic diagram of a CBCT cross-sectional image with highlight artifacts, and b in FIG. 5 is a schematic diagram of a CBCT cross-sectional image with missing teeth.
  • the embodiments of the present disclosure can also provide accurate tooth position information for links such as the production of dental restoration implant materials.
  • the embodiments of the present disclosure may also provide at least one of the tooth instance segmentation result and the tooth position location result to equipment or software manufacturers, etc., who may perform further analysis based on at least one of them; for example, a dental arch curve can be obtained based on at least one of the tooth instance segmentation result and the tooth position location result provided by the embodiments of the present disclosure.
  • the embodiments of the present disclosure also provide a dental image processing apparatus, electronic equipment, computer-readable storage medium, and program, all of which can be used to implement any of the dental image processing methods provided by the embodiments of the present disclosure; the corresponding technical solutions and technical effects can be found in the corresponding descriptions in the method section.
  • FIG. 6 shows a block diagram of an apparatus 60 for processing a tooth image provided by an embodiment of the present disclosure.
  • the processing device 60 of the tooth image includes:
  • the tooth instance segmentation module 61 is configured to perform tooth instance segmentation on the image to be processed and obtain a tooth instance segmentation result of the to-be-processed image, wherein one tooth instance corresponds to one tooth, and the tooth instance segmentation result includes information on the tooth instance to which each pixel in the image to be processed belongs;
  • the tooth position positioning module 62 is configured to perform tooth position positioning based on the tooth instance segmentation result, and obtain the tooth position positioning result of the image to be processed.
  • the tooth instance segmentation module 61 is configured to sequentially predict the pixel sets belonging to different tooth instances from a plurality of pixels in the image to be processed, so as to obtain the prediction results of the pixel sets included in the plurality of tooth instances in the to-be-processed image; and to obtain the tooth instance segmentation result of the image to be processed according to the prediction results of the pixel sets included in the plurality of tooth instances.
  • the tooth instance segmentation module 61 is configured to predict the center pixel of the target tooth instance from a plurality of to-be-processed pixels of the to-be-processed image, wherein a to-be-processed pixel represents a pixel in the image to be processed that has not yet been predicted as belonging to any tooth instance, and the target tooth instance represents the currently predicted tooth instance; and to predict, according to the coordinates of the center pixel of the target tooth instance, the pixel set belonging to the target tooth instance from the plurality of to-be-processed pixels, so as to obtain the prediction result of the pixel set included in the target tooth instance.
  • the tooth instance segmentation module 61 is configured to determine, from a plurality of pixels to be processed in the to-be-processed image, a first pixel with the highest probability of being located at the center of the tooth instance; when the first pixel is located at the center of the tooth instance When the probability of is greater than or equal to the first preset value, the first pixel is predicted to be the center pixel of the target tooth instance.
  • the tooth instance segmentation module 61 is configured to predict the first pixel as the center pixel of the target tooth instance when, among the plurality of pixels to be processed, the number of pixels whose probability of being located at the center of a tooth instance is greater than or equal to the first preset value is greater than or equal to the second preset value, and the probability that the first pixel is located at the center of the tooth instance is greater than or equal to the first preset value.
  • the tooth instance segmentation module 61 is configured to determine the predicted coordinates of the center of the tooth instance pointed to by a second pixel among the plurality of pixels to be processed, wherein the second pixel represents one of the plurality of pixels to be processed, and the predicted coordinates of the center of the tooth instance pointed to by the second pixel represent the coordinates, predicted based on the second pixel, of the center pixel of the tooth instance to which the second pixel belongs; to predict, according to the predicted coordinates of the center of the tooth instance pointed to by the second pixel and the coordinates of the center pixel of the target tooth instance, the probability that the second pixel belongs to the center of the target tooth instance; and to predict, according to the probability that the second pixel belongs to the center of the target tooth instance, the pixel set belonging to the target tooth instance from the plurality of pixels to be processed.
  • the tooth instance segmentation module 61 is configured to determine a predicted offset from the second pixel in the plurality of pixels to be processed to the center pixel of the tooth instance to which the second pixel belongs; The coordinates, and the predicted offset from the second pixel to the center pixel of the tooth instance to which the second pixel belongs, determine the predicted coordinates of the center of the tooth instance to which the second pixel points.
  • the tooth instance segmentation module 61 is configured to predict a clustering parameter corresponding to the target tooth instance; wherein the clustering parameter is used to represent the degree of dispersion of the predicted coordinates of the central pixel of the target tooth instance; The predicted coordinates of the center of the tooth instance pointed to by the two pixels, the coordinates of the center pixel of the target tooth instance, and the clustering parameters corresponding to the target tooth instance, predict the probability that the second pixel belongs to the center of the target tooth instance.
  • the apparatus 60 further includes:
  • the first prediction module is configured to input the image to be processed into the first neural network, and obtain, via the first neural network, the predicted offset from the second pixel to the center pixel of the tooth instance to which the second pixel belongs, the clustering parameter of the tooth instance to which the second pixel belongs, and the probability that the second pixel is located at the center of the tooth instance.
  • the first neural network includes a first decoder and a second decoder
  • the first prediction module is configured to input the image to be processed into the first neural network, obtain, via the first decoder, the predicted offset from the second pixel to the center pixel of the tooth instance to which the second pixel belongs and the clustering parameter of the tooth instance to which the second pixel belongs, and obtain, via the second decoder, the probability that the second pixel is located at the center of the tooth instance.
  • the apparatus 60 further includes:
  • the second prediction module is configured to input the training image into the first neural network, and obtain the predicted offset from the third pixel in the training image to the center pixel of the first tooth instance to which the third pixel belongs through the first neural network, and the first The clustering parameter corresponding to the tooth instance, and the probability that the third pixel is located at the center of the tooth instance, wherein the third pixel represents any pixel in the training image, and the first tooth instance represents the tooth instance to which the third pixel belongs;
  • the first determination module is configured to determine the predicted coordinates of the center of the tooth instance pointed to by the third pixel according to the coordinates of the third pixel and the predicted offset from the third pixel to the center pixel of the first tooth instance, wherein the predicted coordinates of the center of the tooth instance pointed to by the third pixel represent the coordinates of the center pixel of the first tooth instance predicted based on the third pixel;
  • the second determination module is configured to determine the probability that the third pixel belongs to the center of the first tooth instance according to the predicted coordinates of the center of the tooth instance pointed to by the third pixel, the predicted coordinates of the center of the tooth instance pointed to by the different pixels belonging to the first tooth instance, and the clustering parameter corresponding to the first tooth instance;
  • the training module is configured to train the first neural network according to the probability that the third pixel is located at the center of the tooth instance, the probability that the third pixel belongs to the center of the first tooth instance, and the true value that the third pixel belongs to the inside of the tooth.
  • the tooth position location module 62 is configured to predict the tooth position category to which the pixels included in the second tooth instance in the tooth instance segmentation result belong, wherein the second tooth instance represents any tooth instance in the tooth instance segmentation result; and to determine, according to the tooth position categories to which the pixels included in the second tooth instance belong, the tooth position category to which the second tooth instance belongs.
  • the apparatus 60 further includes:
  • a downsampling module configured to downsample the image to be segmented to a first resolution to obtain the first image, and to obtain the image to be processed according to the first image;
  • a third determining module configured to obtain a second image according to the image to be segmented; wherein, the resolution of the second image is the second resolution, and the second resolution is higher than the first resolution;
  • the first cropping module is configured to crop an image corresponding to the third tooth instance from the second image according to the coordinates of the center pixel of the third tooth instance in the tooth instance segmentation result; wherein, the third tooth instance represents the tooth instance segmentation Any tooth instance in the result;
  • the first segmentation module is configured to segment the image corresponding to the third tooth instance to obtain a segmentation result of the third tooth instance at the second resolution.
  • the apparatus 60 further includes:
  • the second segmentation module is configured to perform upper and lower teeth segmentation according to the to-be-segmented image, and to determine the region of interest in the to-be-segmented image;
  • the second cropping module is configured to crop the to-be-segmented image according to the region of interest to obtain the to-be-processed image.
  • the tooth instance segmentation result of the to-be-processed image is obtained by segmenting the tooth instance of the to-be-processed image, and the tooth position location is performed based on the tooth instance segmentation result to obtain the tooth-position location result of the to-be-processed image.
  • in this way, not only can the teeth and the background be distinguished, but also the tooth instance segmentation results of different teeth can be distinguished for tooth position location, which can improve the accuracy of tooth position location.
  • the functions or modules included in the apparatus provided in the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments, and the specific implementation and technical effects may refer to the description of the above method embodiments.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
  • Embodiments of the present disclosure further provide a computer program, including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the tooth image processing method provided by any one of the above embodiments.
  • Embodiments of the present disclosure further provide another computer program product configured to store computer-readable instructions, which, when executed, cause the computer to perform operations of the dental image processing method provided by any of the foregoing embodiments.
  • Embodiments of the present disclosure further provide an electronic device, including: one or more processors; a memory configured to store executable instructions; wherein the one or more processors are configured to invoke executable instructions stored in the memory instruction to execute the above method.
  • the electronic device may be provided as a terminal, server or other form of device.
  • FIG. 7 shows a block diagram of an electronic device 700 provided by an embodiment of the present disclosure.
  • the electronic device 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, and a terminal such as a personal digital assistant.
  • the electronic device 700 may include one or more of the following components: a processing component 702, a memory 704, a power supply component 706, a multimedia component 708, an audio component 710, an Input/Output (I/O) interface 712, a sensor component 714, and a communication component 716.
  • the processing component 702 generally controls the overall operation of the electronic device 700, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 702 can include one or more processors 720 to execute instructions to perform all or some of the steps of the methods described above.
  • processing component 702 may include one or more modules to facilitate interaction between processing component 702 and other components.
  • processing component 702 may include a multimedia module to facilitate interaction between multimedia component 708 and processing component 702.
  • Memory 704 is configured to store various types of data to support operation at electronic device 700 . Examples of such data include instructions for any application or method operating on the electronic device 700, contact data, phonebook data, messages, pictures, videos, and the like.
  • the memory 704 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random-Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
  • Power supply component 706 provides power to various components of electronic device 700 .
  • Power supply components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 700 .
  • Multimedia component 708 includes a screen that provides an output interface between the electronic device 700 and the user.
  • the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 708 includes a front-facing camera and/or a rear-facing camera. When the electronic device 700 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras may have a fixed optical lens system, or may have focusing and optical zoom capability.
  • Audio component 710 is configured to output and/or input audio signals.
  • the audio component 710 includes a microphone (MIC) that is configured to receive external audio signals when the electronic device 700 is in an operating mode, such as a calling mode, a recording mode, or a voice recognition mode.
  • the received audio signal may be stored in memory 704 or transmitted via communication component 716 in some embodiments of the present disclosure.
  • the audio component 710 further includes a speaker configured to output audio signals.
  • the I/O interface 712 provides an interface between the processing component 702 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 714 includes one or more sensors configured to provide status assessments of various aspects of electronic device 700 .
  • the sensor assembly 714 can detect the open/closed state of the electronic device 700 and the relative positioning of components, such as the display and keypad of the electronic device 700; the sensor assembly 714 can also detect a change in the position of the electronic device 700 or of one of its components, the presence or absence of user contact with the electronic device 700, the orientation or acceleration/deceleration of the electronic device 700, and changes in the temperature of the electronic device 700.
  • Sensor assembly 714 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 714 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge-coupled Device (CCD) image sensor, configured for use in imaging applications.
  • the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 716 is configured to facilitate wired or wireless communication between electronic device 700 and other devices.
  • the electronic device 700 can access a wireless network based on a communication standard, such as Wireless Fidelity (Wi-Fi), the second-generation mobile communication technology (2G), the third-generation mobile communication technology (3G), the fourth-generation mobile communication technology (4G)/Long Term Evolution (LTE), the fifth-generation mobile communication technology (5G), or a combination thereof.
  • the communication component 716 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 716 also includes a Near Field Communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology, or other technologies.
  • the electronic device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, configured to perform the above method.
  • a non-volatile computer-readable storage medium such as a memory 704 comprising computer program instructions executable by the processor 720 of the electronic device 700 to perform the above method.
  • FIG. 8 shows a block diagram of an electronic device 1900 provided by an embodiment of the present disclosure.
  • the electronic device 1900 may be provided as a server.
  • an electronic device 1900 includes a processing component 1922, which in some embodiments of the present disclosure includes one or more processors, and a memory resource, represented by memory 1932, configured to store instructions executable by the processing component 1922 , such as applications.
  • An application program stored in memory 1932 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the electronic device 1900 may also include a power supply assembly 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input output (I/O) interface 1958 .
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system introduced by Apple (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
  • a non-volatile computer-readable storage medium such as memory 1932 comprising computer program instructions executable by processing component 1922 of electronic device 1900 to perform the above-described method.
  • the devices involved in the embodiments of the present disclosure may be systems, methods and/or computer program products.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Computer-readable storage media include: portable computer disks, hard disks, Random Access Memory (RAM), ROM, EPROM or flash memory, SRAM, portable Compact Disc Read-Only Memory (CD-ROM), Digital Video Discs (DVDs), memory sticks, floppy disks, mechanical encoding devices such as punched cards or raised structures in grooves on which instructions are stored, and any suitable combination of the above.
  • Computer-readable storage media, as used herein, are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through electrical wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • the computer program instructions for carrying out the operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • electronic circuits, such as programmable logic circuits, FPGAs, or Programmable Logic Arrays (PLAs), can be personalized by utilizing the state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium on which the instructions are stored comprises an article of manufacture including instructions that implement various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed on the computer, other programmable apparatus, or other equipment to produce a computer-implemented process, so that the instructions executing on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions configured to implement the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
  • the computer program product can be specifically implemented by hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK), or the like.
  • Embodiments of the present disclosure provide a tooth image processing method and apparatus, an electronic device, a storage medium, and a program, wherein the tooth image processing method includes: performing tooth instance segmentation on an image to be processed to obtain a tooth instance segmentation result of the image to be processed, where one tooth instance corresponds to one tooth and the tooth instance segmentation result includes information on the tooth instance to which each pixel in the image to be processed belongs; and performing tooth position localization based on the tooth instance segmentation result to obtain a tooth position localization result of the image to be processed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure relate to a tooth image processing method and apparatus, an electronic device, a storage medium, and a program. The method includes: performing tooth instance segmentation on an image to be processed to obtain a tooth instance segmentation result of the image to be processed, where one tooth instance corresponds to one tooth and the tooth instance segmentation result includes information on the tooth instance to which each pixel in the image belongs; and performing tooth position localization based on the tooth instance segmentation result to obtain a tooth position localization result of the image. In this way, in the embodiments of the present disclosure, tooth position localization is based on an instance segmentation result that distinguishes not only teeth from background but also different teeth from one another, which improves the accuracy of tooth position localization.

Description

Tooth image processing method and apparatus, electronic device, storage medium, and program
Cross-reference to related applications
This disclosure is based on, and claims priority to, Chinese patent application No. 202011246718.0, filed on November 10, 2020 and entitled "Tooth image processing method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of computer vision, and in particular to a tooth image processing method and apparatus, an electronic device, a storage medium, and a program.
Background
Cone Beam Computed Tomography (CBCT) is a method of obtaining three-dimensional images. Compared with conventional Computed Tomography (CT), CBCT offers a lower radiation dose, a shorter scan time, and higher spatial resolution, and is increasingly widely used in stomatology. Automated tooth position localization on CBCT images is therefore of great significance to the field of stomatology.
Summary
Embodiments of the present disclosure provide at least a tooth image processing method and apparatus, an electronic device, a storage medium, and a program.
An embodiment of the present disclosure provides a tooth image processing method, the method being executed by an electronic device and including:
performing tooth instance segmentation on an image to be processed to obtain a tooth instance segmentation result of the image to be processed, where one tooth instance corresponds to one tooth, and the tooth instance segmentation result includes information on the tooth instance to which each pixel in the image to be processed belongs;
performing tooth position localization based on the tooth instance segmentation result to obtain a tooth position localization result of the image to be processed. In this way, tooth instance segmentation is performed on the image to be processed to obtain its tooth instance segmentation result, and tooth position localization is performed based on that result to obtain the tooth position localization result; since localization is based on an instance segmentation result that distinguishes not only teeth from background but also different teeth from one another, the accuracy of tooth position localization is improved.
In some embodiments of the present disclosure, performing tooth instance segmentation on the image to be processed to obtain its tooth instance segmentation result includes: sequentially predicting, from the pixels of the image to be processed, the pixel sets belonging to different tooth instances, to obtain prediction results for the pixel sets contained in the multiple tooth instances in the image; and obtaining the tooth instance segmentation result of the image according to those prediction results. In this way, an accurate tooth instance segmentation result can be obtained, effectively handling complications in CBCT images such as noise interference, blurred tooth boundaries, and the similar gray values of tooth roots and jawbone.
In some embodiments of the present disclosure, sequentially predicting the pixel sets belonging to different tooth instances includes: predicting a center pixel of a target tooth instance from multiple pending pixels of the image to be processed, where a pending pixel is a pixel of the image not yet predicted to belong to any tooth instance, and the target tooth instance is the instance currently being predicted; and predicting, according to the coordinates of the center pixel of the target tooth instance, the pixel set belonging to the target instance from the pending pixels, to obtain a prediction result for the pixel set contained in the target instance. This improves the accuracy of the prediction result obtained for the pixel set of any tooth instance.
In some embodiments of the present disclosure, predicting the center pixel of the target tooth instance from the pending pixels includes: determining, among the pending pixels, a first pixel with the highest probability of lying at a tooth instance center; and, when the probability that the first pixel lies at a tooth instance center is greater than or equal to a first preset value, predicting the first pixel as the center pixel of the target tooth instance. The center pixel of a tooth instance can thus be determined accurately, which facilitates accurate tooth instance segmentation.
In some embodiments of the present disclosure, predicting the first pixel as the center pixel of the target tooth instance includes: doing so when, among the pending pixels, the number of pixels whose probability of lying at a tooth instance center is greater than or equal to the first preset value is itself greater than or equal to a second preset value, and the first pixel's probability is greater than or equal to the first preset value. In this way, prediction continues from the first pixel when both conditions hold; when the number of pending pixels whose center probability reaches the first preset value falls below the second preset value, prediction can stop, improving prediction efficiency and accuracy.
In some embodiments of the present disclosure, predicting the pixel set belonging to the target tooth instance according to the coordinates of its center pixel includes: determining the predicted coordinates of the tooth instance center pointed to by a second pixel among the pending pixels, where the second pixel is any pending pixel, and the pointed-to predicted coordinates are the coordinates, predicted based on the second pixel, of the center pixel of the tooth instance to which the second pixel belongs; predicting the probability that the second pixel belongs to the center of the target tooth instance according to the pointed-to predicted coordinates and the coordinates of the target instance's center pixel; and predicting, according to that probability, the pixel set belonging to the target instance from the pending pixels. Pixels belonging to the target tooth instance can thus be predicted accurately from the pending pixels.
In some embodiments of the present disclosure, determining the pointed-to predicted coordinates includes: determining a predicted offset from the second pixel to the center pixel of the tooth instance to which it belongs; and determining the pointed-to predicted coordinates according to the second pixel's coordinates and that predicted offset. More accurate pointed-to predicted coordinates can thus be obtained.
In some embodiments of the present disclosure, predicting the probability that the second pixel belongs to the center of the target tooth instance includes: predicting a clustering parameter corresponding to the target tooth instance, where the clustering parameter represents the dispersion of the predicted coordinates of the target instance's center pixel; and predicting the probability according to the pointed-to predicted coordinates, the coordinates of the target instance's center pixel, and the clustering parameter. In some embodiments of the present disclosure, this improves the accuracy of the predicted probability that the second pixel belongs to the center of the target instance.
In some embodiments of the present disclosure, the method further includes: inputting the image to be processed into a first neural network, and obtaining, via the first neural network, the predicted offset from the second pixel to the center pixel of its tooth instance, the clustering parameter of that instance, and the probability that the second pixel lies at a tooth instance center. Processing the image with the first neural network improves both the accuracy and the speed with which the predicted offsets, clustering parameters, and center probabilities are obtained.
In some embodiments of the present disclosure, the first neural network includes a first decoder and a second decoder; inputting the image to be processed into the first neural network includes: obtaining, via the first decoder, the predicted offset from the second pixel to the center pixel of its tooth instance and the clustering parameter of that instance, and obtaining, via the second decoder, the probability that the second pixel lies at a tooth instance center. In some embodiments of the present disclosure, this improves the accuracy of the obtained predicted offsets, clustering parameters, and center probabilities.
In some embodiments of the present disclosure, before inputting the image to be processed into the first neural network, the method further includes: inputting a training image into the first neural network, and obtaining, via the network, the predicted offset from a third pixel of the training image to the center pixel of the first tooth instance to which the third pixel belongs, the clustering parameter corresponding to the first tooth instance, and the probability that the third pixel lies at a tooth instance center, where the third pixel is any pixel of the training image and the first tooth instance is the instance to which the third pixel belongs; determining, from the third pixel's coordinates and its predicted offset to the center pixel of the first instance, the predicted coordinates of the tooth instance center pointed to by the third pixel, i.e., the coordinates, predicted based on the third pixel, of the first instance's center pixel; determining the probability that the third pixel belongs to the center of the first instance according to its pointed-to predicted coordinates, the pointed-to predicted coordinates of the other pixels belonging to the first instance, and the clustering parameter of the first instance; and training the first neural network according to the probability that the third pixel lies at a tooth instance center, the probability that it belongs to the center of the first instance, and the ground truth of whether the third pixel lies inside a tooth. By training the first neural network in this way, the network learns to separate the different tooth instances in a tooth image. Performing tooth instance segmentation with a network trained in this implementation yields stable, accurate instance segmentation results in complex scenes, e.g., it can handle uneven tooth gray-level distribution, blurred tooth boundaries, unusually shaped teeth, and low-density regions inside teeth in CBCT images.
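The training step above combines the predicted center-membership probability with the ground truth of whether a pixel lies inside a tooth. The patent does not fix a loss function; as a hedged sketch, a per-pixel binary cross-entropy term is one plausible choice (the function name and values are ours, not the patent's):

```python
import math

# Hedged sketch of the training signal described above: phi is the predicted
# probability that a pixel belongs to the center of its tooth instance, and
# y is the ground-truth "inside this tooth" label (1 or 0). The patent does
# not fix the loss; a binary cross-entropy term is one plausible choice.
def bce(phi: float, y: int, eps: float = 1e-7) -> float:
    phi = min(max(phi, eps), 1.0 - eps)  # clamp for numerical safety
    return -(y * math.log(phi) + (1 - y) * math.log(1.0 - phi))

print(round(bce(0.9, 1), 4))  # small loss: confident and correct
print(round(bce(0.9, 0), 4))  # large loss: confident and wrong
```

In practice such a term would be summed over pixels and combined with whatever regularization the training recipe uses.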
In some embodiments of the present disclosure, performing tooth position localization based on the tooth instance segmentation result includes: predicting the tooth position category of the pixels contained in a second tooth instance, where the second tooth instance is any instance in the segmentation result; and determining the tooth position category of the second instance according to the categories of the pixels it contains. The tooth position category of the second instance can thus be determined accurately.
In some embodiments of the present disclosure, before tooth instance segmentation, the method further includes: downsampling an image to be segmented to a first resolution to obtain a first image, and obtaining the image to be processed from the first image. After the tooth instance segmentation result is obtained, the method further includes: obtaining a second image from the image to be segmented, where the resolution of the second image is a second resolution higher than the first resolution; cropping, from the second image, the image corresponding to a third tooth instance according to the coordinates of the center pixel of the third instance in the segmentation result, where the third tooth instance is any instance in the segmentation result; and segmenting the image corresponding to the third instance to obtain its segmentation result at the second resolution. In this way, tooth instance segmentation and tooth position localization can first be performed quickly at the lower resolution, while segmentation results for each tooth instance are still obtained at the higher resolution.
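The coarse-to-fine flow above (segment at a low first resolution, then crop each tooth from the higher second resolution for refinement) can be sketched as follows; `crop_patch`, the scale factor, and the sizes are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Hedged sketch of the coarse-to-fine step above: an instance center found on
# the low-resolution image is scaled to the high-resolution image, and a patch
# is cropped around it for per-tooth refinement.
def crop_patch(volume: np.ndarray, center, size):
    """Crop a `size`-shaped patch around `center`, clamped to volume bounds."""
    starts = [min(max(int(c) - s // 2, 0), dim - s)
              for c, s, dim in zip(center, size, volume.shape)]
    return volume[tuple(slice(st, st + s) for st, s in zip(starts, size))]

vol = np.arange(8 * 8 * 8).reshape(8, 8, 8)   # stand-in "second image"
low_res_center = (2, 3, 1)                    # center found at first resolution
scale = 2                                     # second resolution / first resolution
high_res_center = tuple(c * scale for c in low_res_center)
patch = crop_patch(vol, high_res_center, (4, 4, 4))
print(patch.shape)  # (4, 4, 4)
```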
In some embodiments of the present disclosure, before tooth instance segmentation, the method further includes: performing upper/lower tooth segmentation on the image to be segmented to determine a region of interest in it; and cropping the image to be segmented according to the region of interest to obtain the image to be processed. The resulting image to be processed retains most of the tooth information in the image to be segmented while removing most irrelevant information (e.g., background information), which improves the efficiency and accuracy of subsequent tooth instance segmentation and tooth position localization.
For the effects of the following apparatus, electronic device, and the like, refer to the description of the above method.
An embodiment of the present disclosure further provides a tooth image processing apparatus, including:
a tooth instance segmentation module configured to perform tooth instance segmentation on an image to be processed to obtain a tooth instance segmentation result of the image, where one tooth instance corresponds to one tooth and the tooth instance segmentation result includes information on the tooth instance to which each pixel in the image belongs; and
a tooth position localization module configured to perform tooth position localization based on the tooth instance segmentation result to obtain a tooth position localization result of the image to be processed.
In some embodiments of the present disclosure, the tooth instance segmentation module is configured to sequentially predict, from the pixels of the image to be processed, the pixel sets belonging to different tooth instances, to obtain prediction results for the pixel sets contained in the multiple tooth instances in the image; and to obtain the tooth instance segmentation result according to those prediction results.
In some embodiments of the present disclosure, the tooth instance segmentation module is configured to predict a center pixel of a target tooth instance from multiple pending pixels of the image, where a pending pixel is a pixel not yet predicted to belong to any tooth instance and the target instance is the one currently being predicted; and to predict, according to the coordinates of that center pixel, the pixel set belonging to the target instance from the pending pixels, obtaining a prediction result for the target instance's pixel set.
In some embodiments of the present disclosure, the tooth instance segmentation module is configured to determine, among the pending pixels, a first pixel with the highest probability of lying at a tooth instance center; and, when that probability is greater than or equal to a first preset value, to predict the first pixel as the center pixel of the target instance.
In some embodiments of the present disclosure, the tooth instance segmentation module is configured to predict the first pixel as the center pixel of the target instance when, among the pending pixels, the number of pixels whose center probability is greater than or equal to the first preset value is greater than or equal to a second preset value, and the first pixel's center probability is greater than or equal to the first preset value.
In some embodiments of the present disclosure, the tooth instance segmentation module is configured to determine the predicted coordinates of the tooth instance center pointed to by a second pixel among the pending pixels, where the second pixel is any pending pixel and the pointed-to predicted coordinates are the coordinates, predicted based on the second pixel, of the center pixel of the tooth instance to which it belongs; to predict the probability that the second pixel belongs to the center of the target instance according to the pointed-to predicted coordinates and the coordinates of the target instance's center pixel; and to predict, according to that probability, the pixel set belonging to the target instance from the pending pixels.
In some embodiments of the present disclosure, the tooth instance segmentation module is configured to determine the predicted offset from the second pixel to the center pixel of its tooth instance; and to determine the pointed-to predicted coordinates according to the second pixel's coordinates and that predicted offset.
In some embodiments of the present disclosure, the tooth instance segmentation module is configured to predict the clustering parameter corresponding to the target instance, where the clustering parameter represents the dispersion of the predicted coordinates of the target instance's center pixel; and to predict the probability that the second pixel belongs to the center of the target instance according to the pointed-to predicted coordinates, the coordinates of the target instance's center pixel, and the clustering parameter.
In some embodiments of the present disclosure, the apparatus further includes:
a first prediction module configured to input the image to be processed into a first neural network and obtain, via the network, the predicted offset from the second pixel to the center pixel of its tooth instance, the clustering parameter of that instance, and the probability that the second pixel lies at a tooth instance center.
In some embodiments of the present disclosure, the first neural network includes a first decoder and a second decoder;
the first prediction module is configured to input the image into the first neural network, obtaining the predicted offset and the clustering parameter via the first decoder, and the probability that the second pixel lies at a tooth instance center via the second decoder.
In some embodiments of the present disclosure, the apparatus further includes:
a second prediction module configured to input a training image into the first neural network and obtain the predicted offset from a third pixel of the training image to the center pixel of the first tooth instance to which it belongs, the clustering parameter corresponding to the first instance, and the probability that the third pixel lies at a tooth instance center, where the third pixel is any pixel of the training image and the first tooth instance is the instance to which it belongs;
a first determination module configured to determine, from the third pixel's coordinates and its predicted offset to the first instance's center pixel, the predicted coordinates of the tooth instance center pointed to by the third pixel, i.e., the coordinates, predicted based on the third pixel, of the first instance's center pixel;
a second determination module configured to determine the probability that the third pixel belongs to the center of the first instance according to its pointed-to predicted coordinates, the pointed-to predicted coordinates of the other pixels belonging to the first instance, and the clustering parameter of the first instance; and
a training module configured to train the first neural network according to the probability that the third pixel lies at a tooth instance center, the probability that it belongs to the center of the first instance, and the ground truth of whether the third pixel lies inside a tooth.
In some embodiments of the present disclosure, the tooth position localization module is configured to predict the tooth position category of the pixels contained in a second tooth instance, where the second instance is any instance in the segmentation result; and to determine the tooth position category of the second instance according to the categories of its pixels.
In some embodiments of the present disclosure, the apparatus further includes:
a downsampling module configured to downsample an image to be segmented to a first resolution to obtain a first image, and to obtain the image to be processed from the first image;
a third determination module configured to obtain a second image from the image to be segmented, where the resolution of the second image is a second resolution higher than the first;
a first cropping module configured to crop, from the second image, the image corresponding to a third tooth instance according to the coordinates of the center pixel of the third instance in the segmentation result, where the third instance is any instance in the segmentation result; and
a first segmentation module configured to segment the image corresponding to the third instance to obtain its segmentation result at the second resolution.
In some embodiments of the present disclosure, the apparatus further includes:
a second segmentation module configured to perform upper/lower tooth segmentation on the image to be segmented and determine a region of interest in it; and
a second cropping module configured to crop the image to be segmented according to the region of interest to obtain the image to be processed.
An embodiment of the present disclosure further provides an electronic device, including: one or more processors; and a memory configured to store executable instructions; where the one or more processors are configured to invoke the executable instructions stored in the memory to execute the above method.
An embodiment of the present disclosure further provides a computer-readable storage medium on which computer program instructions are stored; when executed by a processor, the instructions implement the above method.
An embodiment of the present disclosure further provides a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor of the electronic device executes the tooth image processing method described in any of the above embodiments.
The tooth image processing method, apparatus, device, storage medium, and program provided by the embodiments of the present disclosure perform tooth instance segmentation on an image to be processed to obtain its tooth instance segmentation result, and perform tooth position localization based on that result to obtain the tooth position localization result; localizing tooth positions based on an instance segmentation result that distinguishes not only teeth from background but also different teeth from one another improves the accuracy of tooth position localization.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure.
FIG. 1 is a schematic diagram of an application scenario of the tooth image processing method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a tooth image processing method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a tooth instance segmentation result of an image to be processed provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a system architecture to which the tooth image processing method of embodiments of the present disclosure can be applied;
FIG. 5 is a schematic diagram of CBCT cross-sectional images with bright artifacts and with missing teeth, provided by an embodiment of the present disclosure;
FIG. 6 is a block diagram of a tooth image processing apparatus 600 provided by an embodiment of the present disclosure;
FIG. 7 is a block diagram of an electronic device 700 provided by an embodiment of the present disclosure;
FIG. 8 is a block diagram of an electronic device 1900 provided by an embodiment of the present disclosure.
Detailed description
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
The word "exemplary" as used herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not to be construed as superior to or better than other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the term "at least one" herein means any one of multiple items or any combination of at least two of them; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
In addition, numerous specific details are given in the following detailed description to better illustrate the present disclosure. Those skilled in the art will understand that the present disclosure can be practiced without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, in order to highlight the subject matter of the present disclosure.
In the related art, CBCT images are increasingly widely used in modern stomatology, especially dentistry. In clinical diagnosis and in the fabrication of dental restoration and implant materials, doctors need precise geometric information such as the three-dimensional shape of teeth to make diagnoses and determine individualized treatment plans. Automatically obtaining a patient's tooth anatomy and tooth position information by algorithm can improve doctors' image-reading efficiency and provide information for fabricating dental restoration materials. Automated tooth segmentation and tooth position determination algorithms based on CBCT images are therefore of great clinical significance. Accurate tooth segmentation is difficult because of factors such as noise interference, blurred tooth boundaries, and the similar gray values of tooth roots and jawbone in CBCT images; automated tooth position determination is also a hard problem because different patients present different conditions such as missing teeth, implants, or residual roots.
In addition, obtaining three-dimensional tooth models by manual delineation by dentists takes a great deal of time and is subject to random error; threshold-based methods struggle with the uneven gray-level distribution and blurred boundaries of teeth in CBCT images; interactive segmentation methods require manual involvement and have difficulty with unusually shaped teeth and blurred boundaries; level-set-based automatic segmentation is sensitive to initialization and cannot adaptively evolve the segmentation boundary at different parts of a tooth; active-contour-model methods also require a good initialization curve and perform poorly with blurred tooth boundaries and low-density regions inside teeth. Traditional algorithm designs therefore handle a limited range of situations and are difficult to run stably with good results in complex scenes. In recent years, with the rise of deep learning, deep learning methods have been applied to tooth segmentation; however, these methods mostly train and test models on samples of normal teeth, while in clinical practice patients' tooth shapes, missing teeth, restorations, and implants are complex, so existing methods do not achieve ideal results in such scenarios.
To solve technical problems such as those described above, embodiments of the present disclosure provide a tooth image processing method and apparatus, an electronic device, a storage medium, and a program: tooth instance segmentation is performed on an image to be processed to obtain its tooth instance segmentation result, and tooth position localization is performed based on that result to obtain the tooth position localization result. Localizing tooth positions based on an instance segmentation result that distinguishes not only teeth from background but also different teeth from one another improves the accuracy of tooth position localization.
The tooth image processing method of the embodiments of the present disclosure is illustrated below with a specific application scenario. FIG. 1 shows a schematic diagram of an application scenario of the method. As shown in FIG. 1, an image to be segmented 101 (i.e., the raw data) is acquired first. After acquiring the image to be segmented, upper/lower tooth segmentation can be performed on it to determine the tooth region of interest, i.e., 102. Meanwhile, the image to be segmented can be downsampled to a first resolution to obtain a first image at low spatial resolution, i.e., 103, and the first image is cropped according to the region of interest to obtain the image to be processed, i.e., 104. Next, tooth instance segmentation is performed on the image to be processed, correspondingly obtaining its tooth instance segmentation result, i.e., 105; after obtaining the segmentation result, tooth position classification (i.e., tooth position localization) can first be performed on the teeth on one side (e.g., the right side) of the image, then the image is flipped left-right and classification is performed on the teeth on the other side (e.g., the left side), yielding the tooth position localization result of the image, i.e., 106. Then, after obtaining the localization result, a second image can be obtained from the image to be segmented, i.e., 107, where the second image has a second resolution higher than the first. Finally, according to the coordinates of the center pixel of any tooth instance in the localization result, the image corresponding to that instance is cropped from the second image, i.e., 108; single-tooth segmentation is performed on that image, and the instance's segmentation result at the second resolution is obtained and output, i.e., 109, thereby obtaining segmentation results for each tooth instance at the higher resolution.
The tooth image processing method provided by the embodiments of the present disclosure is described in detail below with reference to the accompanying drawings.
FIG. 2 shows a schematic flowchart of a tooth image processing method provided by an embodiment of the present disclosure. In some embodiments of the present disclosure, the method may be executed by a terminal device, a server, or another processing device, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some embodiments, the method may be implemented by a processor invoking computer-readable instructions stored in a memory. As shown in FIG. 2, the tooth image processing method includes steps S21 and S22.
In step S21, tooth instance segmentation is performed on the image to be processed to obtain a tooth instance segmentation result of the image.
Here, one tooth instance corresponds to one tooth, and the tooth instance segmentation result includes information on the tooth instance to which each pixel in the image to be processed belongs.
In some embodiments of the present disclosure, the image to be processed is the tooth image that needs to be handled, where a tooth image is an image containing at least some tooth information. The image to be processed may be a CBCT image, which may be acquired by equipment such as a cone beam computed tomography scanner. Of course, it may also be a CT image or another image containing tooth information, which is not limited here. The image to be processed may be a three-dimensional or a two-dimensional image, e.g., a three-dimensional CBCT image.
In some embodiments of the present disclosure, tooth instance segmentation means segmenting different teeth: it distinguishes not only teeth from background but also different teeth from one another. Performing tooth instance segmentation on the image to be processed means separating the different teeth in the image to obtain the pixel set contained in each tooth.
In some embodiments of the present disclosure, in the tooth instance segmentation result, the information on the tooth instance to which a pixel belongs may be represented by categories. For example, if the image contains 32 tooth instances, the segmentation result may include 33 categories: 32 tooth instance categories and a background category, where each tooth instance category corresponds to one instance and the background category means the pixel does not lie inside a tooth. In the segmentation result, any pixel of the image may belong to any of the 33 categories.
In some embodiments of the present disclosure, the tooth instance segmentation result may be represented as an image, a table, a matrix, or another data form, as long as it can express the tooth instance to which each pixel belongs; this is not limited in the embodiments. FIG. 3 shows a schematic diagram of a tooth instance segmentation result of an image to be processed. In FIG. 3, pixels not inside a tooth (i.e., belonging to the background) have pixel value 0, pixels belonging to different tooth instances have different gray values, and pixels belonging to the same instance have the same value.
In step S22, tooth position localization is performed based on the tooth instance segmentation result to obtain a tooth position localization result of the image.
In some embodiments of the present disclosure, tooth position localization means determining at least one of: the tooth position to which a tooth instance belongs, and the tooth position to which a pixel of the image belongs. That is, by localizing tooth positions based on the instance segmentation result, the tooth position of each tooth instance in the image can be determined. The localization result may include at least one of: the tooth position information of the instances in the image, and the tooth position information of the pixels in the image.
In some embodiments of the present disclosure, the tooth position localization result may use the Fédération Dentaire Internationale (FDI) tooth notation, also known as the International Organization for Standardization (ISO) 3950 notation. In other possible implementations, the localization result may use the Palmer notation or the Universal Numbering System (UNS), among other tooth notations.
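As background to the FDI (ISO 3950) notation mentioned above, a two-digit code combines a quadrant digit with a position counted from the midline; the small helper below is an illustration we add, not part of the patent:

```python
# Illustrative helper for the FDI (ISO 3950) two-digit tooth notation: for
# permanent teeth the first digit is the quadrant (1 = upper right,
# 2 = upper left, 3 = lower left, 4 = lower right) and the second digit
# counts 1-8 from the midline. The function name is ours.
def fdi_code(quadrant: int, position: int) -> int:
    if quadrant not in (1, 2, 3, 4) or position not in range(1, 9):
        raise ValueError("quadrant must be 1-4, position 1-8")
    return quadrant * 10 + position

print(fdi_code(1, 1))  # 11: upper-right central incisor
print(fdi_code(3, 8))  # 38: lower-left third molar
```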
In some embodiments of the present disclosure, tooth instance segmentation is performed on the image to be processed to obtain its tooth instance segmentation result, and tooth position localization is performed based on that result to obtain the tooth position localization result; localization based on an instance segmentation result that distinguishes not only teeth from background but also different teeth improves the accuracy of tooth position localization. By first performing instance segmentation and then localizing tooth positions based on its result, the embodiments of the present disclosure obtain accurate tooth position localization results even in complex situations such as varying tooth shapes, missing teeth, and implants.
FIG. 4 shows a schematic diagram of a system architecture to which the tooth image processing method of the embodiments of the present disclosure can be applied. As shown in FIG. 4, the architecture includes an image acquisition terminal 401, a network 402, and a tooth image processing terminal 403. To support an exemplary application, the image acquisition terminal 401 and the processing terminal 403 establish a communication connection via the network 402; the acquisition terminal 401 reports the image to be processed to the processing terminal 403 via the network 402; in response to the image, the processing terminal 403 performs tooth instance segmentation and tooth position localization on it using a tooth instance segmentation model and a tooth position localization model, obtaining the tooth position localization result. Finally, the processing terminal 403 uploads the localization result to the network 402 and sends it to the acquisition terminal 401.
As an example, the image acquisition terminal 401 may include image acquisition equipment, and the processing terminal 403 may include a vision processing device with visual information processing capability, or a remote server. The network 402 may use a wired or wireless connection. When the processing terminal 403 is a vision processing device, the acquisition terminal 401 may communicate with it over a wired connection, e.g., performing data communication over a bus; when the processing terminal 403 is a remote server, the acquisition terminal 401 may exchange data with the remote server over a wireless network.
Alternatively, in some scenarios, the acquisition terminal 401 may itself be a vision processing device with an image acquisition module, implemented as a host with a camera. In that case, the tooth image processing method of the embodiments may be executed by the acquisition terminal 401, and the above architecture need not include the network 402 and the processing terminal 403.
In some embodiments of the present disclosure, performing tooth instance segmentation on the image to be processed to obtain its tooth instance segmentation result includes: sequentially predicting, from the pixels of the image, the pixel sets belonging to different tooth instances, obtaining prediction results for the pixel sets contained in the multiple instances in the image; and obtaining the tooth instance segmentation result according to those prediction results.
In some embodiments of the present disclosure, the pixel set belonging to a tooth instance is the set of pixels that instance contains. The pixels belonging to different instances may be predicted in turn from the pixels of the image, obtaining the pixel sets of the multiple instances. For example, the pixel set belonging to the 1st instance is predicted first; once its prediction is complete, the pixel set belonging to the 2nd instance is predicted; once that is complete, the pixel set belonging to the 3rd instance is predicted; and so on. That is, in this implementation, prediction may be performed for only one tooth instance at a time.
In some embodiments of the present disclosure, the prediction result for the pixel set of a tooth instance may include information on the pixels predicted to belong to it, e.g., the coordinates of those pixels.
In some embodiments of the present disclosure, the pixel sets belonging to each tooth instance may be predicted in turn from the pixels of the image, obtaining a prediction result for each instance's pixel set, and the tooth instance segmentation result is obtained from those results. Of course, in other examples, only the pixel sets of some instances may be predicted, without predicting the pixel set of every instance.
In some embodiments of the present disclosure, the prediction result for the pixel set of a tooth instance may be represented by a predicted mask corresponding to that instance. The predicted mask may have the same size as the image to be processed. In the mask, pixels predicted to belong to the instance have a pixel value different from those predicted not to belong; for example, pixels predicted to belong to the instance have value 1, and the others have value 0. Of course, tables, matrices, and other data forms may also be used to represent the prediction result for any instance's pixel set.
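The mask representation described above can be illustrated as follows; the label map and names are invented for the example:

```python
import numpy as np

# Illustrative sketch: turning a label map (0 = background, k = k-th tooth
# instance) into one binary mask per instance, as described above.
label_map = np.array([[0, 1, 1],
                      [2, 2, 0],
                      [2, 0, 1]])
masks = {int(k): (label_map == k).astype(np.uint8)
         for k in np.unique(label_map) if k != 0}
print(sorted(masks))        # instance ids: [1, 2]
print(int(masks[2].sum()))  # instance 2 contains 3 pixels
```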
Because CBCT images involve factors such as noise interference, blurred tooth boundaries, and the similar gray values of tooth roots and jawbone, accurate tooth segmentation presents many difficulties. In some embodiments of the present disclosure, by sequentially predicting, from the pixels of the image, the pixel sets belonging to different tooth instances, obtaining prediction results for the pixel sets of the multiple instances, and obtaining the segmentation result from those predictions, an accurate tooth instance segmentation result can be obtained, effectively handling complications in CBCT images such as noise interference, blurred tooth boundaries, and similar gray values of roots and jawbone.
In some embodiments of the present disclosure, different tooth instances may also be predicted in parallel; for example, the pixel sets belonging to each instance may be predicted simultaneously, obtaining a prediction result for each instance's pixel set, and the tooth instance segmentation result is then obtained from those results.
In some embodiments of the present disclosure, sequentially predicting, from the pixels of the image, the pixel sets belonging to different tooth instances includes: predicting a center pixel of the target tooth instance from multiple pending pixels of the image, where a pending pixel is a pixel of the image not yet predicted to belong to any instance and the target instance is the one currently being predicted; and predicting, according to the coordinates of the target instance's center pixel, the pixel set belonging to the target instance from the pending pixels, obtaining a prediction result for the target instance's pixel set.
In some embodiments of the present disclosure, if no tooth instance has yet been predicted, the center pixel of the target instance may be predicted from all pixels of the image; that is, all pixels of the image may be treated as pending pixels. After the pixel set belonging to one instance has been predicted, the center pixel of the next instance (i.e., the new target instance) may be predicted from the remaining pending pixels.
In some embodiments of the present disclosure, the pixel set predicted to belong to a tooth instance includes the predicted center pixel of that instance and the other pixels (i.e., non-center pixels) predicted to belong to it.
In some embodiments of the present disclosure, the coordinates of the center pixel of the target tooth instance may be denoted as C.
In some embodiments of the present disclosure, by predicting the center pixel of the target instance from the pending pixels and then predicting, according to the center pixel's coordinates, the pixel set belonging to the target instance from the pending pixels, obtaining a prediction result for the target instance's pixel set, the accuracy of the prediction result obtained for the pixel set of any tooth instance can be improved.
In some embodiments of the present disclosure, predicting the center pixel of the target tooth instance from the pending pixels of the image may include: determining, among the pending pixels, the first pixel with the highest probability of lying at a tooth instance center; and, when the probability that the first pixel lies at a tooth instance center is greater than or equal to a first preset value, predicting the first pixel as the center pixel of the target instance.
In some embodiments of the present disclosure, the probability that pixel i of the image lies at a tooth instance center may be denoted as s_i.
In this example, the first pixel is the pixel with the highest probability of lying at a tooth instance center among the pending pixels.
In some embodiments of the present disclosure, the first preset value may be 0.5. Of course, those skilled in the art may flexibly set the first preset value according to the needs of the actual application scenario, which is not limited here.
In this example, by determining, among the pending pixels, the first pixel with the highest probability of lying at a tooth instance center, and predicting it as the center pixel of the target instance when that probability is greater than or equal to the first preset value, the center pixel of a tooth instance can be determined relatively accurately, which facilitates accurate tooth instance segmentation.
In some embodiments of the present disclosure, predicting the first pixel as the center pixel of the target instance when its center probability is greater than or equal to the first preset value may include: doing so when, among the pending pixels, the number of pixels whose probability of lying at a tooth instance center is greater than or equal to the first preset value is itself greater than or equal to a second preset value, and the first pixel's center probability is greater than or equal to the first preset value. In this example, that count refers to the number of pixels of the image not yet predicted to belong to any instance whose center probability is at least the first preset value. In this example, the second preset value may be determined from the mean or an empirical value of the number of pixels contained in a single tooth; for example, the second preset value may be 32. Of course, those skilled in the art may flexibly determine the second preset value from at least one of the needs of the actual application scenario and experience, which is not limited here. In this example, when the count is at least the second preset value and the first pixel's center probability is at least the first preset value, the first pixel is predicted as the target instance's center pixel and prediction continues from it; when the count of pending pixels whose center probability reaches the first preset value falls below the second preset value, prediction can stop, which improves prediction efficiency and accuracy.
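The center-selection rule above (take the pending pixel with the highest center probability, gated by the first and second preset values) can be sketched as follows; `select_center` and the default thresholds are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

# Hedged sketch of selecting the center pixel of the "target tooth instance".
# seed[i] is the predicted probability s_i that pixel i lies at an instance
# center; t1 is the first preset value and t2 the second preset value.
def select_center(seed: np.ndarray, unprocessed: np.ndarray,
                  t1: float = 0.5, t2: int = 32):
    """Return the index of the next instance-center pixel, or None to stop."""
    probs = np.where(unprocessed, seed, -1.0)   # mask out assigned pixels
    if np.count_nonzero(probs >= t1) < t2:      # too few candidate seeds: stop
        return None
    idx = int(np.argmax(probs))                 # pending pixel with max s_i
    return idx if probs[idx] >= t1 else None

seed = np.array([0.1, 0.9, 0.7, 0.6])
unprocessed = np.array([True, True, True, True])
print(select_center(seed, unprocessed, t1=0.5, t2=2))  # -> 1
```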
在本公开的一些实施例中,根据目标牙实例的中心像素的坐标,从多个待处理像素中预测属于目标牙实例的像素集合,可以包括:确定多个待处理像素中的第二像素所指向的牙实例中心的预测坐标;其中,第二像素表示多个待处理像素中的任一像素,第二像素所指向的牙实例中心的预测坐标表示基于第二像素预测的第二像素所属的牙实例的中心像素的坐标;根据第二像素所指向的牙实例中心的预测坐标,以及目标牙实例的中心像素的坐标,预测第二像素属于目标牙实例的中心的概率;根据第二像素属于目标牙实例的中心的概率,从多个待处理像素中预测属于目标牙实例的像素集合。
在本公开的一些实施例中，若第二像素为像素$i$，则第二像素所指向的牙实例中心的预测坐标可以记为$e_i$。
在本公开的一些实施例中，可以根据第二像素所指向的牙实例中心的预测坐标与目标牙实例的中心像素的坐标之间的差值，预测第二像素属于目标牙实例的中心的概率。例如，第二像素为像素$i$，第二像素所指向的牙实例中心的预测坐标为$e_i$，目标牙实例的中心像素的坐标为$\hat{C}_k$，则第二像素所指向的牙实例中心的预测坐标与目标牙实例的中心像素的坐标之间的差值可以表示为$e_i-\hat{C}_k$。
在该示例中,第二像素属于目标牙实例的中心的概率,可以与第二像素所指向的牙实例中心的预测坐标与目标牙实例的中心像素的坐标之间的距离负相关。即,第二像素所指向的牙实例中心的预测坐标与目标牙实例的中心像素的坐标之间的距离越小,则第二像素属于目标牙实例的中心的概率越大;第二像素所指向的牙实例中心的预测坐标与目标牙实例的中心像素的坐标之间的距离越大,则第二像素属于目标牙实例的中心的概率越小。
在该示例中,第二像素属于目标牙实例的中心的概率越大,则第二像素属于目标牙实例的概率越大;第二像素属于目标牙实例的中心的概率越小,则第二像素属于目标牙实例的概率越小。
在本公开的一些实施例中,若第二像素属于目标牙实例的中心的概率大于第四预设值,则可以将第二像素预测为属于目标牙实例,即,可以预测属于目标牙实例的像素集合包括第二像素;若第二像素属于目标牙实例的中心的概率小于或等于第四预设值,则可以将第二像素预测为不属于目标牙实例,即,可以预测属于目标牙实例的像素集合不包括第二像素。例如,第四预设值可以为0.5。当然,本领域技术人员可以根据实际应用场景需求灵活设置第四预设值,在此不作限定。
在该示例中,通过确定多个待处理像素中的第二像素所指向的牙实例中心的预测坐标,根据第二像素所指向的牙实例中心的预测坐标,以及目标牙实例的中心像素的坐标,预测第二像素属于目标牙实例的中心的概率,根据第二像素属于目标牙实例的中心的概率,从多个待处理像素中预测属于目标牙实例的像素集合,由此能够从多个待处理像素中准确地预测属于目标牙实例的像素。
在一个示例中，确定多个待处理像素中的第二像素所指向的牙实例中心的预测坐标，可以包括：确定多个待处理像素中的第二像素到第二像素所属的牙实例的中心像素的预测偏移量；根据第二像素的坐标，以及第二像素到第二像素所属的牙实例的中心像素的预测偏移量，确定第二像素所指向的牙实例中心的预测坐标。
在该示例中，第二像素到第二像素所属的牙实例的中心像素的预测偏移量，可以表示第二像素的坐标与第二像素所属的牙实例的中心像素的坐标之间的预测坐标差值。例如，第二像素的坐标可以记为$x_i$，第二像素到第二像素所属的牙实例的中心像素的预测偏移量可以记为$o_i$。
在本公开的一些实施例中，若预测偏移量为第二像素所属的牙实例的中心像素的坐标与第二像素的坐标之间的预测坐标差值，则可以将第二像素的坐标与预测偏移量之和，确定为第二像素所指向的牙实例中心的预测坐标。例如，第二像素所指向的牙实例中心的预测坐标可以记为$e_i$，即$e_i=x_i+o_i$。
在本公开的另一些实施例中,若预测偏移量为第二像素的坐标与第二像素所属的牙实例的中心像素的坐标之间的预测坐标差值,则可以将第二像素的坐标与预测偏移量之差,确定为第二像素所指向的牙实例中心的预测坐标。
在该示例中,通过确定多个待处理像素中的第二像素到第二像素所属的牙实例的中心像素的预测偏移量,并根据第二像素的坐标,以及第二像素到第二像素所属的牙实例的中心像素的预测偏移量,确定第二像素所指向的牙实例中心的预测坐标,由此能够获得较准确的第二像素所指向的牙实例中心的预测坐标。
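以“偏移量为中心像素坐标减去像素坐标”这一约定为例，由像素坐标与预测偏移量计算其所指向的牙实例中心的预测坐标，可示意如下（纯属示意性实现）：

```python
def predicted_center(coord, offset):
    """e_i = x_i + o_i：像素坐标加上其到所属牙实例中心像素的预测偏移量。"""
    return tuple(x + o for x, o in zip(coord, offset))
```

例如，`predicted_center((10, 20, 30), (2, -1, 0))` 返回 `(12, 19, 30)`。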
在一个示例中,根据第二像素所指向的牙实例中心的预测坐标,以及目标牙实例的中心像素的坐标,预测第二像素属于目标牙实例的中心的概率,可以包括:预测目标牙实例对应的聚类参数;其中,聚类参数用于表示目标牙实例的中心像素的预测坐标的离散程度;根据第二像素所指向的牙实例中心的预测坐标,目标牙实例的中心像素的坐标,以及目标牙实例对应的聚类参数,预测第二像素属于目标牙实例的中心的概率。
在该示例中，目标牙实例对应的聚类参数可以是能够表示目标牙实例的中心像素的预测坐标的离散程度的任意参数。在本公开的一些实施例中，目标牙实例对应的聚类参数可以表示目标牙实例的中心像素的预测坐标的标准差。在这个例子中，目标牙实例对应的聚类参数可以记为$\sigma$。在本公开的另一些实施例中，目标牙实例对应的聚类参数可以表示目标牙实例的中心像素的预测坐标的方差。在这个例子中，目标牙实例对应的聚类参数可以记为$\sigma^2$。在本公开的另一些实施例中，目标牙实例对应的聚类参数可以与目标牙实例的中心像素的预测坐标的方差负相关。例如，目标牙实例对应的聚类参数可以为$s_k=\frac{1}{2\sigma_k^2}$。
在该示例中,不同的牙实例对应的聚类参数可以不同,可以针对各个牙实例分别预测对应的聚类参数。
在本公开的一些实施例中，第二像素属于目标牙实例的中心的概率可以为$\phi_k(i)=\exp\left(-s_k\left\|e_i-\hat{C}_k\right\|^2\right)$，其中，$\exp(X)$表示$e$的$X$次方。通过目标牙实例对应的聚类参数$s_k$，可以使得到的第二像素属于目标牙实例的中心的概率在$[0,1]$范围内。
在该示例中,通过预测目标牙实例对应的聚类参数,并根据第二像素所指向的牙实例中心的预测坐标,目标牙实例的中心像素的坐标,以及目标牙实例对应的聚类参数,预测第二像素属于目标牙实例的中心的概率,由此能够在本公开的一些实施例中提高所预测的第二像素属于目标牙实例的中心的概率的准确性。
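作为示意，假设第二像素属于目标牙实例的中心的概率采用高斯形式$\exp(-s_k\|e_i-\text{中心}\|^2)$（$s_k$为聚类参数；该具体形式为本示例的假设），其计算与按第四预设值（此处取0.5）的阈值判断可写为：

```python
import math

def center_probability(e_i, center, s_k):
    """phi = exp(-s_k * ||e_i - center||^2)，取值范围为 (0, 1]。"""
    d2 = sum((a - b) ** 2 for a, b in zip(e_i, center))
    return math.exp(-s_k * d2)

def belongs_to_instance(e_i, center, s_k, fourth_preset=0.5):
    """概率大于第四预设值时，预测该像素属于目标牙实例。"""
    return center_probability(e_i, center, s_k) > fourth_preset
```

像素所指向的中心与目标中心重合时概率为1；距离越大概率越小，超出阈值即判为不属于该牙实例。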
在本公开的一些实施例中,所述方法还包括:将待处理图像输入第一神经网络,经由第一神经网络得到第二像素到第二像素所属的牙实例的中心像素的预测偏移量,第二像素所属的牙实例的聚类参数,以及第二像素位于牙实例中心的概率。例如,在这个例子中,可以经由第一神经网络得到待处理图像中的各个像素到该像素所属的牙实例的中心像素的预测偏移量,待处理图像中的各个牙实例的聚类参数,以及待处理图像中的各个像素位于牙实例中心的概率。当然,第一神经网络也可以只对待处理图像中的部分像素进行处理,在此不作限定。在这个例子中,通过第一神经网络对待处理图像进行处理,能够提高所得到的预测偏移量、聚类参数和像素位于牙实例中心的概率的准确性,并能提高得到预测偏移量、聚类参数和像素位于牙实例中心的概率的速度。
在本公开的一些实施例中，第一神经网络包括第一解码器和第二解码器；将待处理图像输入第一神经网络，经由第一神经网络得到第二像素到第二像素所属的牙实例的中心像素的预测偏移量，第二像素所属的牙实例的聚类参数，以及第二像素位于牙实例中心的概率，包括：将待处理图像输入第一神经网络，经由第一解码器得到第二像素到第二像素所属的牙实例的中心像素的预测偏移量，以及第二像素所属的牙实例的聚类参数，经由第二解码器得到第二像素位于牙实例中心的概率。根据这个例子，能够在本公开的一些实施例中提高所得到的预测偏移量、聚类参数和像素位于牙实例中心的概率的准确性。
在一个示例中，在将待处理图像输入第一神经网络之前，所述方法还可以包括：将训练图像输入第一神经网络，经由第一神经网络得到训练图像中的第三像素到第三像素所属的第一牙实例的中心像素的预测偏移量，第一牙实例对应的聚类参数，以及第三像素位于牙实例中心的概率，其中，第三像素表示训练图像中的任一像素，第一牙实例表示第三像素所属的牙实例；根据第三像素的坐标，以及第三像素到第一牙实例的中心像素的预测偏移量，确定第三像素所指向的牙实例中心的预测坐标，其中，第三像素所指向的牙实例中心的预测坐标表示基于第三像素预测的第一牙实例的中心像素的坐标；根据第三像素所指向的牙实例中心的预测坐标，属于第一牙实例的不同像素所指向的牙实例中心的预测坐标，以及第一牙实例对应的聚类参数，确定第三像素属于第一牙实例的中心的概率；根据第三像素位于牙实例中心的概率，第三像素属于第一牙实例的中心的概率，以及第三像素属于牙齿内部的真值，训练第一神经网络。
在这个示例中,训练图像可以是三维图像或者二维图像。例如,训练图像为三维图像,训练图像的尺寸为(D,H,W),例如,D=112,H=128,W=144。
例如，第一牙实例可以记为$S_k$，第一牙实例的中心像素可以记为$c_k$，其中，$k$表示牙实例的编号。若第三像素为像素$i$，第三像素的坐标为$x_i$，则第三像素到第三像素所属的第一牙实例的中心像素的预测偏移量可以为$o_i=c_k-x_i$。其中，若训练图像是三维图像，则$x_i$可以包括第三像素的x轴坐标、y轴坐标和z轴坐标，第三像素到第三像素所属的第一牙实例的中心像素的预测偏移量可以包括第三像素到第三像素所属的第一牙实例的中心像素的x轴预测偏移量、y轴预测偏移量和z轴预测偏移量。
在这个示例中,可以由第一神经网络得到训练图像中的各个像素到其所属的牙实例的中心像素的预测偏移量,从而可以得到(3,D,H,W)的偏移量矩阵。
在这个示例中，根据属于第一牙实例的不同像素所指向的牙实例中心的预测坐标，可以得到属于第一牙实例的不同像素所指向的牙实例中心的预测坐标的均值。例如，属于第一牙实例的不同像素所指向的牙实例中心的预测坐标的均值可以表示为$\hat{C}_k=\frac{1}{\left|S_k\right|}\sum_{j\in S_k}e_j$，其中，$e_j$表示属于第一牙实例的像素$j$所指向的牙实例中心的预测坐标，$\left|S_k\right|$表示属于第一牙实例的像素总数。
例如,根据第三像素所指向的牙实例中心的预测坐标,以及属于第一牙实例的不同像素所指向的牙实例中心的预测坐标,确定第三像素属于第一牙实例的中心的概率,可以包括:确定属于第一牙实例的各个像素所指向的牙实例中心的预测坐标的均值;根据第三像素所指向的牙实例中心的预测坐标与均值之间的差值,确定第三像素属于第一牙实例的中心的概率。
在本公开的一些实施例中，第一牙实例对应的聚类参数可以记为$s_k$。例如，第三像素属于第一牙实例的中心的概率可以为$\phi_k(i)=\exp\left(-s_k\left\|e_i-\hat{C}_k\right\|^2\right)$。
在本公开的一些实施例中，第一神经网络可以采用交叉熵损失函数等损失函数来训练。例如，第三像素为像素$i$，第三像素位于牙实例中心的概率可以记为$s_i$，第三像素属于第一牙实例的中心的概率可以记为$\phi_k(i)$，用于训练第一神经网络的损失函数可以表示为$L=-\frac{1}{N}\sum_{i=1}^{N}\left[\mathbb{1}_{s_i\in S_k}\log\phi_k(i)+\mathbb{1}_{s_i\in bg}\log\left(1-\phi_k(i)\right)\right]$，其中，$s_i\in S_k$表示第三像素属于牙齿内部，即，第三像素属于牙齿内部的真值为第三像素属于牙齿内部；$s_i\in bg$表示第三像素不属于牙齿内部，即，第三像素属于牙齿内部的真值为第三像素不属于牙齿内部，即，第三像素属于背景部分；$N$表示训练图像中的像素总数。
通过上述示例训练第一神经网络,能够使第一神经网络学习到分割牙齿图像中不同的牙实例的能力。采用该示例训练得到的第一神经网络进行牙实例分割,能够在复杂场景中得到稳定、准确的牙实例分割结果,例如,能够应对CBCT图像中牙齿灰度分布不均匀、牙齿边界模糊、非常规形态的牙齿、牙内低密度影等情况。
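上述训练流程中“以属于同一牙实例的各像素所指向中心的均值作为中心”与交叉熵形式损失的计算，可用如下代码示意。损失的具体形式为按上下文重构的假设，并非对原专利公式的权威复现：

```python
import math

def instance_center(embeddings):
    """属于第一牙实例的各像素所指向的牙实例中心预测坐标的均值。"""
    n = len(embeddings)
    dims = len(embeddings[0])
    return tuple(sum(e[d] for e in embeddings) / n for d in range(dims))

def seed_loss(phi, is_tooth):
    """交叉熵形式（示意）：前景像素鼓励 phi 趋近 1，背景像素鼓励 phi 趋近 0。

    phi: 各像素属于牙实例中心的概率列表；
    is_tooth: 各像素属于牙齿内部的真值列表。
    """
    total = 0.0
    for p, fg in zip(phi, is_tooth):
        total += -math.log(p) if fg else -math.log(1.0 - p)
    return total / len(phi)
```

预测越接近真值，损失越小；该损失驱动前景像素的嵌入向所属牙实例中心收缩。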
在本公开的一些实施例中,将训练图像输入第一神经网络,经由第一神经网络得到训练图像中的第三像素到第三像素所属的第一牙实例的中心像素的预测偏移量,第一牙实例对应的聚类参数,以及第三像素位于牙实例中心的概率,包括:将训练图像输入第一神经网络,经由第一神经网络的第一解码器得到训练图像中的第三像素到第三像素所属的第一牙实例的中心像素的预测偏移量,以及第一牙实例对应的聚类参数,经由第一神经网络的第二解码器得到第三像素位于牙实例中心的概率。在本公开的一些实施例中,第一神经网络采用Encoder-Decoder的结构,具体网络架构在此不作限定。
在本公开的一些实施例中,基于牙实例分割结果进行牙位定位,得到待处理图像的牙位定位结果,包括:预测牙实例分割结果中的第二牙实例包含的像素所属的牙位类别;其中,第二牙实例表示牙实例分割结果中的任一牙实例;根据第二牙实例包含的像素所属的牙位类别,确定第二牙实例所属的牙位类别。
在本公开的一些实施例中,可以预先训练用于预测像素所属的牙位类别的第二神经网络,将牙实例分割结果输入第二神经网络,或者,将牙实例分割结果和待处理图像输入第二神经网络,经由第二神经网络得到牙实例分割结果中的各个牙实例包含的像素所属的牙位类别,从而根据牙实例分割结果中的各个牙实例包含的像素所属的牙位类别,确定牙实例分割结果中的各个牙实例所属的牙位类别。其中,第二神经网络可以采用U-Net等结构,在此不作限定。
在本公开的一些实施例中,第二神经网络可以用于对单侧牙进行分类,例如,第二神经网络可以用于对右侧牙进行分类。例如,第二神经网络可以用于将输入图像划分为18个类别,分别是右侧的16个牙位类别、左侧牙和背景部分。即,第二神经网络可以用于确定输入图像中的各个像素分别属于这18个类别中的哪个类别,从而能够得到右侧牙的牙位类别。通过对输入图像左右翻转后输入第二神经网络,可以得到左侧牙的牙位类别。在这个例子中,通过训练第二神经网络对单侧牙进行分类,能够降低第二神经网络的训练难度。
在本公开的一些实施例中，可以将第二牙实例包含的各个像素所属的牙位类别中出现次数最多的牙位类别，作为第二牙实例所属的牙位类别。例如，第二牙实例包含100个像素，其中，80个像素所属的牙位类别是牙位34，10个像素所属的牙位类别是牙位33，10个像素所属的牙位类别是牙位35，则可以确定第二牙实例所属的牙位类别是牙位34。
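按出现次数最多的牙位类别确定牙实例牙位的“多数表决”可示意如下：

```python
from collections import Counter

def tooth_position(pixel_classes):
    """以牙实例内出现次数最多的牙位类别作为该牙实例所属的牙位类别。"""
    return Counter(pixel_classes).most_common(1)[0][0]
```

对应上文示例：80个像素为牙位34、10个为牙位33、10个为牙位35时，结果为牙位34。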
在本公开的一些实施例中,通过预测牙实例分割结果中的第二牙实例包含的像素所属的牙位类别,并根据第二牙实例包含的像素所属的牙位类别,确定第二牙实例所属的牙位类别,由此能够准确地确定第二牙实例所属的牙位类别。
在本公开的一些实施例中,在对待处理图像进行牙实例分割之前,所述方法还包括:将待分割图像降采样至第一分辨率,得到第一图像;根据第一图像,得到待处理图像;在得到待处理图像的牙实例分割结果之后,所述方法还包括:根据待分割图像,得到第二图像,其中,第二图像的分辨率为第二分辨率,第二分辨率高于第一分辨率;根据牙实例分割结果中的第三牙实例的中心像素的坐标,从第二图像中,裁剪出第三牙实例对应的图像,其中,第三牙实例表示牙实例分割结果中的任一牙实例;对第三牙实例对应的图像进行分割,得到第三牙实例在第二分辨率下的分割结果。
在本公开的一些实施例中,待分割图像可以表示需要分割的牙齿图像。
在本公开的一些实施例中，待分割图像可以是三维图像，例如，待分割图像可以是三维CBCT图像，待分割图像的分辨率可以是0.2mm×0.2mm×0.2mm或者0.3mm×0.3mm×0.3mm等，长宽高可以是(453×755×755)或者(613×681×681)等。第一分辨率可以是空间分辨率。例如，第一分辨率可以是0.6mm×0.6mm×0.6mm。作为该实施例的另一个示例，待分割图像可以是二维图像。
在本公开的一些实施例中,可以对第一图像进行归一化,得到第一归一化图像;对第一归一化图像进行裁剪,得到待处理图像。例如,待处理图像的尺寸可以是(112,128,144)。
在本公开的一些实施例中,可以基于预设区间将第一图像的像素值进行归一化,得到第一归一化图像。其中,基于预设区间将第一图像的像素值进行归一化,可以包括:对于第一图像中的第四像素,若第四像素的像素值小于预设区间的下边界值,则确定第四像素的归一化值为0,其中,第四像素表示第一图像中的任一像素;若第四像素的像素值大于或等于预设区间的下边界值,且小于或等于预设区间的上边界值,则确定第四像素的像素值与下边界值的差值,并将差值与区间长度的比值确定为第四像素的归一化值;若第四像素的像素值大于上边界值,则确定第四像素的归一化值为1。例如,预设区间为[-1000,1500],像素i的像素值为u。若u<-1000,则确定像素i的归一化值为0;若-1000≤u≤1500,则将(u-(-1000))/2500确定为像素i的归一化值;若u大于1500,则确定像素i的归一化值为1。通过基于预设区间将第一图像的像素值进行归一化,能够使得到的归一化图像中的像素值在区间[0,1]中。
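基于预设区间的归一化可示意如下（预设区间取上文示例的[-1000, 1500]）：

```python
def normalize_pixel(value, low=-1000.0, high=1500.0):
    """将像素值基于预设区间 [low, high] 归一化到 [0, 1]。"""
    if value < low:
        return 0.0  # 小于下边界值，归一化值为0
    if value > high:
        return 1.0  # 大于上边界值，归一化值为1
    return (value - low) / (high - low)  # 与下边界值的差值除以区间长度
```

例如，`normalize_pixel(250)` 返回 0.5，即 (250-(-1000))/2500。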
在本公开的一些实施例中,可以将待分割图像降采样至第二分辨率,得到第二图像。在本公开的一些实施例中,可以将待分割图像作为第二图像。在该示例中,待分割图像的分辨率为第二分辨率。例如,第二分辨率可以是0.2mm×0.2mm×0.2mm。
在本公开的一些实施例中,在得到第二图像之后,可以对第二图像进行归一化,得到第二归一化图像;根据牙实例分割结果中的第三牙实例的中心像素的坐标,从第二图像中,裁剪出第三牙实例对应的图像,可以包括:根据牙实例分割结果中的第三牙实例的中心像素的坐标,从第二归一化图像中,裁剪出第三牙实例对应的图像。
在本公开的一些实施例中，可以以牙实例分割结果中的第三牙实例的中心像素所在位置为几何中心，从第二图像中，裁剪出第三牙实例对应的图像。即，在该示例中，第三牙实例对应的图像的几何中心可以是牙实例分割结果中的第三牙实例的中心像素所在位置。例如，第三牙实例对应的图像的尺寸可以是(176,112,96)。当然，在其他示例中，第三牙实例对应的图像的几何中心也可以不是牙实例分割结果中的第三牙实例的中心像素所在位置。
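以中心像素所在位置为几何中心裁剪固定尺寸图像时，各维度裁剪区间的计算可示意如下（假设裁剪尺寸不大于图像尺寸，且越界时向图像内部平移而非补零，纯属示意）：

```python
def crop_box(center, size, volume_shape):
    """返回各维度的 (起点, 终点) 区间，使裁剪区域以 center 为几何中心并落在图像范围内。"""
    box = []
    for c, s, dim in zip(center, size, volume_shape):
        start = max(0, min(c - s // 2, dim - s))  # 越界时向内平移
        box.append((start, start + s))
    return box
```

例如，在 (100,100,100) 的图像中以 (50,50,50) 为中心裁剪 (20,20,20)，得到各维度区间 (40,60)。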
在本公开的一些实施例中,可以将第三牙实例对应的图像输入第三神经网络,经由第三神经网络对第三牙实例对应的图像进行分割,得到第三牙实例在第二分辨率下的分割结果。例如,第三神经网络可以采用U-Net等架构。
在本公开的一些实施例中,能够先在较低的分辨率上快速进行牙实例分割和牙位定位,并能获得各个牙实例在较高的分辨率下的分割结果。
在本公开的一些实施例中,在对待处理图像进行牙实例分割之前,所述方法还包括:根据待分割图像进行上下牙分割,确定待分割图像中的感兴趣区域;根据感兴趣区域,对待分割图像进行裁剪,得到待处理图像。
在本公开的一些实施例中,可以根据待分割图像,得到第三图像;根据第三图像进行上下牙分割,确定待分割图像中的感兴趣区域。在本公开的一些实施例中,可以将待分割图像降采样至第三分辨率,得到第三图像。例如,第三分辨率可以是0.2mm×0.2mm×0.2mm。在本公开的另一些实施例中,可以将待分割图像作为第三图像。在本公开的一些实施例中,可以对第三图像的像素值进行归一化,得到第三归一化图像;对第三归一化图像进行上下牙分割,确定待分割图像中的感兴趣区域。在本公开的另一些实施例中,可以对第三图像进行上下牙分割,确定待分割图像中的感兴趣区域。
在本公开的一些实施例中，可以采用第四神经网络，沿冠状轴或矢状轴，即从横断面或者矢状面，逐层对第三归一化图像的二维(2 Dimensions,2D)切片进行上下牙分割，得到第三归一化图像的各层二维切片的感兴趣区域，并根据第三归一化图像的各层二维切片的感兴趣区域，得到第三归一化图像的感兴趣区域。例如，第四神经网络可以是卷积神经网络。其中，横断面和矢状面上的牙齿边界较清晰，易于分割。例如，可以将第三归一化图像的各层二维切片的感兴趣区域重新组合得到第三归一化图像的感兴趣区域。又如，在将第三归一化图像的各层二维切片的感兴趣区域重新组合得到感兴趣的三维区域后，可以去除该感兴趣的三维区域中尺寸小于第三预设值的连通域，得到第三归一化图像的感兴趣区域。通过去除该感兴趣的三维区域中尺寸小于第三预设值的连通域，能够减少图像噪声对分割结果的影响，从而对分割结果进行优化。例如，第三预设值可以是150mm³。
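去除尺寸小于第三预设值的连通域这一后处理步骤，可用如下6邻域洪泛填充示意。此处以体素坐标集合表示前景，阈值以体素数计（实际实现中150mm³的阈值需按体素体积换算为体素数；实现方式为示意性假设）：

```python
def remove_small_components(mask, min_voxels):
    """去除前景中体素数小于 min_voxels 的连通域（6邻域）。mask 为前景体素坐标集合。"""
    remaining = set(mask)
    kept = set()
    neighbors = ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1))
    while remaining:
        seed = remaining.pop()
        component = {seed}
        stack = [seed]
        while stack:
            x, y, z = stack.pop()
            for dx, dy, dz in neighbors:
                nb = (x + dx, y + dy, z + dz)
                if nb in remaining:
                    remaining.discard(nb)
                    component.add(nb)
                    stack.append(nb)
        if len(component) >= min_voxels:  # 仅保留尺寸达到阈值的连通域
            kept |= component
    return kept
```

实际工程中也可使用现成的连通域分析库完成同样的过滤，此处仅演示原理。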
在本公开的一些实施例中,可以将待分割图像降采样至第一分辨率,得到第一图像,根据感兴趣区域,对第一图像进行裁剪,得到待处理图像。例如,裁剪得到的待处理图像可以包括感兴趣区域。又如,可以以感兴趣区域的几何中心作为待处理图像的几何中心,以预设尺寸作为待处理图像的尺寸,裁剪得到待处理图像。例如,预设尺寸可以是(112,128,144)。
根据该实施例得到的待处理图像能够保留待分割图像中大部分的牙齿信息,且能去除待分割图像中的大部分无关信息(例如背景信息),从而有助于后续进行牙实例分割、牙位定位等的效率和准确性。
本公开实施例中的神经网络可以采用U-Net等架构,在此不作限定。在本公开的一些实施例中,神经网络的卷积块可以由残差模块构成。在本公开的一些实施例中,神经网络的编解码两部分之间可以引入双维度注意力(Dual Attention)模块。
根据本公开实施例提供的牙齿图像的处理方法,即使在图像中存在缺牙、高亮伪影等的情况下,也能获得准确的牙位定位结果,从而有助于提高医生的阅片效率,例如,有助于提高医生分析患者牙齿的CBCT图像的效率。例如,可以为牙科医生阅片提供辅助,方便判断缺牙的牙位。图5示出本公开实施例提供的存在高亮伪影和存在缺牙的CBCT横断面图像的示意图;其中,图5中的a为存在高亮伪影的CBCT横断面图像的示意图;同时图5中的b示出存在缺牙的CBCT横断面图像的示意图。
本公开实施例通过提供准确的牙位定位结果,还可以为牙齿修补种植材料的制作等环节提供准确 的牙位信息。本公开实施例还可以为设备、软件厂商等提供牙实例分割结果和牙位定位结果至少之一,设备、软件厂商等可以基于本公开实施例提供的牙实例分割结果和牙位定位结果至少之一进行一些细致的分析,例如可以基于本公开实施例提供的牙实例分割结果和牙位定位结果至少之一获取牙弓曲线等。
可以理解,本公开提及的上述各个方法实施例,在不违背原理逻辑的情况下,均可以彼此相互结合形成结合后的实施例。本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
此外,本公开实施例还提供了牙齿图像的处理装置、电子设备、计算机可读存储介质、程序,上述均可用来实现本公开实施例提供的任一种牙齿图像的处理方法,相应技术方案和技术效果可参见方法部分的相应记载。
图6示出本公开实施例提供的牙齿图像的处理装置60的框图。如图6所示,牙齿图像的处理装置60包括:
牙实例分割模块61,配置为对待处理图像进行牙实例分割,得到待处理图像的牙实例分割结果;其中,一个牙实例对应于一颗牙齿,牙实例分割结果包括待处理图像中的像素所属的牙实例的信息;
牙位定位模块62,配置为基于牙实例分割结果进行牙位定位,得到待处理图像的牙位定位结果。
在本公开的一些实施例中,牙实例分割模块61,配置为从待处理图像的多个像素中,依次预测属于不同牙实例的像素集合,得到待处理图像中的多个牙实例包含的多个像素集合的预测结果;根据多个牙实例包含的多个像素集合的预测结果,得到待处理图像的牙实例分割结果。
在本公开的一些实施例中,牙实例分割模块61,配置为从待处理图像的多个待处理像素中,预测目标牙实例的中心像素;其中,待处理像素表示待处理图像中未被预测为属于任一牙实例的像素,目标牙实例表示当前预测的牙实例;根据目标牙实例的中心像素的坐标,从多个待处理像素中预测属于目标牙实例的像素集合,得到目标牙实例包含的像素集合的预测结果。
在本公开的一些实施例中,牙实例分割模块61,配置为从待处理图像的多个待处理像素中,确定位于牙实例中心的概率最大的第一像素;在第一像素位于牙实例中心的概率大于或等于第一预设值的情况下,将第一像素预测为目标牙实例的中心像素。
在本公开的一些实施例中,牙实例分割模块61,配置为在多个待处理像素中、位于牙实例中心的概率大于或等于第一预设值的像素数大于或等于第二预设值,且第一像素位于牙实例中心的概率大于或等于第一预设值的情况下,将第一像素预测为目标牙实例的中心像素。
在本公开的一些实施例中,牙实例分割模块61,配置为确定多个待处理像素中的第二像素所指向的牙实例中心的预测坐标;其中,第二像素表示多个待处理像素中的任一像素,第二像素所指向的牙实例中心的预测坐标表示基于第二像素预测的第二像素所属的牙实例的中心像素的坐标;根据第二像素所指向的牙实例中心的预测坐标,以及目标牙实例的中心像素的坐标,预测第二像素属于目标牙实例的中心的概率;根据第二像素属于目标牙实例的中心的概率,从多个待处理像素中预测属于目标牙实例的像素集合。
在本公开的一些实施例中,牙实例分割模块61,配置为确定多个待处理像素中的第二像素到第二像素所属的牙实例的中心像素的预测偏移量;根据第二像素的坐标,以及第二像素到第二像素所属的牙实例的中心像素的预测偏移量,确定第二像素所指向的牙实例中心的预测坐标。
在本公开的一些实施例中,牙实例分割模块61,配置为预测目标牙实例对应的聚类参数;其中,聚类参数用于表示目标牙实例的中心像素的预测坐标的离散程度;根据第二像素所指向的牙实例中心的预测坐标,目标牙实例的中心像素的坐标,以及目标牙实例对应的聚类参数,预测第二像素属于目标牙实例的中心的概率。
在本公开的一些实施例中,装置60还包括:
第一预测模块,配置为将待处理图像输入第一神经网络,经由第一神经网络得到第二像素到第二像素所属的牙实例的中心像素的预测偏移量,第二像素所属的牙实例的聚类参数,以及第二像素位于牙实例中心的概率。
在本公开的一些实施例中,第一神经网络包括第一解码器和第二解码器;
第一预测模块,配置为将待处理图像输入第一神经网络,经由第一解码器得到第二像素到第二像素所属的牙实例的中心像素的预测偏移量,以及第二像素所属的牙实例的聚类参数,经由第二解码器得到第二像素位于牙实例中心的概率。
在本公开的一些实施例中,装置60还包括:
第二预测模块，配置为将训练图像输入第一神经网络，经由第一神经网络得到训练图像中的第三像素到第三像素所属的第一牙实例的中心像素的预测偏移量，第一牙实例对应的聚类参数，以及第三像素位于牙实例中心的概率，其中，第三像素表示训练图像中的任一像素，第一牙实例表示第三像素所属的牙实例；
第一确定模块,配置为根据第三像素的坐标,以及第三像素到第一牙实例的中心像素的预测偏移量,确定第三像素所指向的牙实例中心的预测坐标,其中,第三像素所指向的牙实例中心的预测坐标表示基于第三像素预测的第一牙实例的中心像素的坐标;第二确定模块,配置为根据第三像素所指向的牙实例中心的预测坐标,属于第一牙实例的不同像素所指向的牙实例中心的预测坐标,以及第一牙实例对应的聚类参数,确定第三像素属于第一牙实例的中心的概率;
训练模块,配置为根据第三像素位于牙实例中心的概率,第三像素属于第一牙实例的中心的概率,以及第三像素属于牙齿内部的真值,训练第一神经网络。
在本公开的一些实施例中,牙位定位模块62,配置为预测牙实例分割结果中的第二牙实例包含的像素所属的牙位类别;其中,第二牙实例表示牙实例分割结果中的任一牙实例;根据第二牙实例包含的像素所属的牙位类别,确定第二牙实例所属的牙位类别。
在本公开的一些实施例中,装置60还包括:
降采样模块,配置为将待分割图像降采样至第一分辨率,得到第一图像;根据第一图像,得到待处理图像;
第三确定模块,配置为根据待分割图像,得到第二图像;其中,第二图像的分辨率为第二分辨率,第二分辨率高于第一分辨率;
第一裁剪模块,配置为根据牙实例分割结果中的第三牙实例的中心像素的坐标,从第二图像中,裁剪出第三牙实例对应的图像;其中,第三牙实例表示牙实例分割结果中的任一牙实例;
第一分割模块,配置为对第三牙实例对应的图像进行分割,得到第三牙实例在第二分辨率下的分割结果。
在本公开的一些实施例中,装置60还包括:
第二分割模块,配置为根据待分割图像进行上下牙分割,确定待分割图像中的感兴趣区域;
第二裁剪模块,配置为根据感兴趣区域,对待分割图像进行裁剪,得到待处理图像。
在本公开实施例中,通过对待处理图像进行牙实例分割,得到待处理图像的牙实例分割结果,并基于牙实例分割结果进行牙位定位,得到待处理图像的牙位定位结果,由此基于不仅能区分牙齿和背景、还能区分不同牙齿的牙实例分割结果进行牙位定位,能够提高牙位定位的准确性。
在本公开的一些实施例中,本公开实施例提供的装置具有的功能或包含的模块可以配置为执行上文方法实施例描述的方法,其具体实现和技术效果可以参照上文方法实施例的描述。
本公开实施例还提供一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述方法。其中,所述计算机可读存储介质可以是非易失性计算机可读存储介质,或者可以是易失性计算机可读存储介质。
本公开实施例还提出一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行的情况下,所述电子设备中的处理器执行用于实现上述任一所述的牙齿图像的处理方法。
本公开实施例还提供了另一种计算机程序产品,配置为存储计算机可读指令,指令被执行时使得计算机执行上述任一实施例提供的牙齿图像的处理方法的操作。
本公开实施例还提供一种电子设备,包括:一个或多个处理器;配置为存储可执行指令的存储器;其中,所述一个或多个处理器被配置为调用所述存储器存储的可执行指令,以执行上述方法。
电子设备可以被提供为终端、服务器或其它形态的设备。
图7示出本公开实施例提供的一种电子设备700的框图。例如,电子设备700可以是移动电话,计算机,数字广播终端,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备以及个人数字助理等终端。
参照图7，电子设备700可以包括以下一个或多个组件：处理组件702，存储器704，电源组件706，多媒体组件708，音频组件710，输入/输出（Input/Output，I/O）接口712，传感器组件714，以及通信组件716。
处理组件702通常控制电子设备700的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件702可以包括一个或多个处理器720来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件702可以包括一个或多个模块,便于处理组件702和其他组件之间的交互。例如,处理组件702可以包括多媒体模块,以方便多媒体组件708和处理组件702之间的交互。
存储器704被配置为存储各种类型的数据以支持在电子设备700的操作。这些数据的示例包括用于 在电子设备700上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器704可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(Static Random-Access Memory,SRAM),电可擦除可编程只读存储器(Electrically Erasable Programmable Read Only Memory,EEPROM),可擦除可编程只读存储器(Electrical Programmable Read Only Memory,EPROM),可编程只读存储器(Programmable Read-Only Memory,PROM),只读存储器(Read-Only Memory,ROM),磁存储器,快闪存储器,磁盘或光盘。
电源组件706为电子设备700的各种组件提供电力。电源组件706可以包括电源管理系统,一个或多个电源,及其他与为电子设备700生成、管理和分配电力相关联的组件。
多媒体组件708包括在所述电子设备700和用户之间的提供一个输出接口的屏幕。在本公开的一些实施例中,屏幕可以包括液晶显示器(Liquid Crystal Display,LCD)和触摸面板(Touch Panel,TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在本公开的一些实施例中,多媒体组件708包括一个前置摄像头和/或后置摄像头。当电子设备700处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。
音频组件710被配置为输出和/或输入音频信号。例如,音频组件710包括一个麦克风(Microphone,MIC),当电子设备700处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被在本公开的一些实施例中存储在存储器704或经由通信组件716发送。在本公开的一些实施例中,音频组件710还包括一个扬声器,配置为输出音频信号。
I/O接口712为处理组件702和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件714包括一个或多个传感器,配置为电子设备700提供各个方面的状态评估。例如,传感器组件714可以检测到电子设备700的打开/关闭状态,组件的相对定位,例如所述组件为电子设备700的显示器和小键盘,传感器组件714还可以检测电子设备700或电子设备700一个组件的位置改变,用户与电子设备700接触的存在或不存在,电子设备700方位或加速/减速和电子设备700的温度变化。传感器组件714可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件714还可以包括光传感器,如互补金属氧化物半导体(Complementary Metal Oxide Semiconductor,CMOS)或电荷耦合装置(Charge-coupled Device,CCD)图像传感器,配置为在成像应用中使用。在本公开的一些实施例中,该传感器组件714还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件716被配置为便于电子设备700和其他设备之间有线或无线方式的通信。电子设备700可以接入基于通信标准的无线网络，如无线网络（Wi-Fi）、第二代移动通信技术（2-Generation，2G）、第三代移动通信技术（3rd-Generation，3G）、第四代移动通信技术（4-Generation，4G）/通用移动通信技术的长期演进（Long Term Evolution，LTE）、第五代移动通信技术（5-Generation，5G）或它们的组合。在一个示例性实施例中，通信组件716经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中，所述通信组件716还包括近场通信（Near Field Communication，NFC）模块，以促进短程通信。例如，NFC模块可基于射频识别（Radio Frequency Identification，RFID）技术，红外数据协会（Infrared Data Association，IrDA）技术，超宽带（Ultra Wide Band，UWB）技术，蓝牙（Bluetooth，BT）技术和其他技术来实现。
在示例性实施例中,电子设备700可以被一个或多个应用专用集成电路(Application Specific Integrated Circuit,ASIC)、数字信号处理器(Digital Signal Process,DSP)、数字信号处理设备(Digital Signal Process Device,DSPD)、可编程逻辑器件(Programmable Logic Device,PLD)、现场可编程门阵列(Field Programmable Gate Array,FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器704,上述计算机程序指令可由电子设备700的处理器720执行以完成上述方法。
图8示出本公开实施例提供的一种电子设备1900的框图。例如,电子设备1900可以被提供为一服务器。参照图8,电子设备1900包括处理组件1922,其在本公开的一些实施例中包括一个或多个处理器,以及由存储器1932所代表的存储器资源,配置为存储可由处理组件1922的执行的指令,例如应用程序。存储器1932中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外, 处理组件1922被配置为执行指令,以执行上述方法。
电子设备1900还可以包括一个电源组件1926被配置为执行电子设备1900的电源管理,一个有线或无线网络接口1950被配置为将电子设备1900连接到网络,和一个输入输出(I/O)接口1958。电子设备1900可以操作基于存储在存储器1932的操作系统,例如微软服务器操作系统(Windows ServerTM),苹果公司推出的基于图形用户界面操作系统(Mac OS XTM),多用户多进程的计算机操作系统(UnixTM),自由和开放原代码的类Unix操作系统(LinuxTM),开放原代码的类Unix操作系统(FreeBSDTM)或类似。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器1932,上述计算机程序指令可由电子设备1900的处理组件1922执行以完成上述方法。
本公开实施例中涉及的设备可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本公开的各个方面的计算机可读程序指令。
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是但不限于电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(Random Access Memory,RAM)、ROM、EPROM或闪存、SRAM、便携式压缩盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、数字多功能盘(Digital Video Disc,DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。
用于执行本公开操作的计算机程序指令可以是汇编指令、指令集架构(Industry Standard Architecture,ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++等,以及常规的过程式编程语言,诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络,包括局域网(Local Area Network,LAN)或广域网(Wide Area Network,WAN)连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在本公开的一些实施例中,通过利用计算机可读程序指令的状态信息来个性化定制电子电路,例如可编程逻辑电路、FPGA或可编程逻辑阵列(Programmable Logic Arrays,PLA),该电子电路可以执行计算机可读程序指令,从而实现本公开的各个方面。
这里参照根据本公开实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本公开的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器,从而生产出一种机器,使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时,产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上,使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤,以产生计算机实现的过程,从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。
附图中的流程图和框图显示了根据本公开的多个实施例的方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或指令的 一部分,所述模块、程序段或指令的一部分包含一个或多个配置为实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
该计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品具体体现为计算机存储介质,在另一个可选实施例中,计算机程序产品具体体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
以上已经描述了本公开的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中的技术的改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。
工业实用性
本公开实施例提供了一种牙齿图像的处理方法、装置、电子设备、存储介质及程序,其中,牙齿图像的处理方法,包括:对待处理图像进行牙实例分割,得到所述待处理图像的牙实例分割结果;其中,一个牙实例对应于一颗牙齿,所述牙实例分割结果包括所述待处理图像中的像素所属的牙实例的信息;基于所述牙实例分割结果进行牙位定位,得到所述待处理图像的牙位定位结果。

Claims (18)

  1. 一种牙齿图像的处理方法,所述方法由电子设备执行,所述方法包括:
    对待处理图像进行牙实例分割,得到所述待处理图像的牙实例分割结果;其中,一个牙实例对应于一颗牙齿,所述牙实例分割结果包括所述待处理图像中的像素所属的牙实例的信息;
    基于所述牙实例分割结果进行牙位定位,得到所述待处理图像的牙位定位结果。
  2. 根据权利要求1所述的方法,其中,所述对待处理图像进行牙实例分割,得到所述待处理图像的牙实例分割结果,包括:
    从所述待处理图像的多个像素中,依次预测属于不同牙实例的像素集合,得到所述待处理图像中的多个牙实例包含的多个像素集合的预测结果;
    根据所述多个牙实例包含的多个像素集合的预测结果,得到所述待处理图像的牙实例分割结果。
  3. 根据权利要求2所述的方法,其中,所述从所述待处理图像的多个像素中,依次预测属于不同牙实例的像素集合,得到所述待处理图像中的多个牙实例包含的多个像素集合的预测结果,包括:
    从所述待处理图像的多个待处理像素中,预测目标牙实例的中心像素;其中,所述待处理像素表示所述待处理图像中未被预测为属于任一牙实例的像素,所述目标牙实例表示当前预测的牙实例;
    根据所述目标牙实例的中心像素的坐标,从多个所述待处理像素中预测属于所述目标牙实例的像素集合,得到所述目标牙实例包含的像素集合的预测结果。
  4. 根据权利要求3所述的方法,其中,所述从所述待处理图像的多个待处理像素中,预测目标牙实例的中心像素,包括:
    从所述待处理图像的多个待处理像素中,确定位于牙实例中心的概率最大的第一像素;
    在所述第一像素位于牙实例中心的概率大于或等于第一预设值的情况下,将所述第一像素预测为所述目标牙实例的中心像素。
  5. 根据权利要求4所述的方法,其中,所述在所述第一像素位于牙实例中心的概率大于或等于第一预设值的情况下,将所述第一像素预测为所述目标牙实例的中心像素,包括:
    在多个所述待处理像素中、位于所述牙实例中心的概率大于或等于所述第一预设值的像素数大于或等于第二预设值,且所述第一像素位于所述牙实例中心的概率大于或等于所述第一预设值的情况下,将所述第一像素预测为所述目标牙实例的中心像素。
  6. 根据权利要求3至5中任意一项所述的方法,其中,所述根据所述目标牙实例的中心像素的坐标,从多个所述待处理像素中预测属于所述目标牙实例的像素集合,包括:
    确定多个所述待处理像素中的第二像素所指向的牙实例中心的预测坐标;其中,所述第二像素表示多个所述待处理像素中的任一像素,所述第二像素所指向的牙实例中心的预测坐标表示基于所述第二像素预测的所述第二像素所属的牙实例的中心像素的坐标;
    根据所述第二像素所指向的牙实例中心的预测坐标,以及所述目标牙实例的中心像素的坐标,预测所述第二像素属于所述目标牙实例的中心的概率;
    根据所述第二像素属于所述目标牙实例的中心的概率,从多个所述待处理像素中预测属于所述目标牙实例的像素集合。
  7. 根据权利要求6所述的方法,其中,所述确定多个所述待处理像素中的第二像素所指向的牙实例中心的预测坐标,包括:
    确定多个所述待处理像素中的第二像素到所述第二像素所属的牙实例的中心像素的预测偏移量;
    根据所述第二像素的坐标,以及所述第二像素到所述第二像素所属的牙实例的中心像素的预测偏移量,确定所述第二像素所指向的牙实例中心的预测坐标。
  8. 根据权利要求6或7所述的方法,其中,所述根据所述第二像素所指向的牙实例中心的预测坐标,以及所述目标牙实例的中心像素的坐标,预测所述第二像素属于所述目标牙实例的中心的概率, 包括:
    预测所述目标牙实例对应的聚类参数;其中,所述聚类参数用于表示所述目标牙实例的中心像素的预测坐标的离散程度;
    根据所述第二像素所指向的牙实例中心的预测坐标,所述目标牙实例的中心像素的坐标,以及所述目标牙实例对应的聚类参数,预测所述第二像素属于所述目标牙实例的中心的概率。
  9. 根据权利要求8所述的方法,其中,所述方法还包括:
    将所述待处理图像输入第一神经网络,经由所述第一神经网络得到所述第二像素到所述第二像素所属的牙实例的中心像素的预测偏移量,所述第二像素所属的牙实例的聚类参数,以及所述第二像素位于牙实例中心的概率。
  10. 根据权利要求9所述的方法,其中,所述第一神经网络包括第一解码器和第二解码器;
    所述将所述待处理图像输入第一神经网络,经由所述第一神经网络得到所述第二像素到所述第二像素所属的牙实例的中心像素的预测偏移量,所述第二像素所属的牙实例的聚类参数,以及所述第二像素位于牙实例中心的概率,包括:
    将所述待处理图像输入第一神经网络,经由所述第一解码器得到所述第二像素到所述第二像素所属的牙实例的中心像素的预测偏移量,以及所述第二像素所属的牙实例的聚类参数,经由所述第二解码器得到所述第二像素位于牙实例中心的概率。
  11. 根据权利要求9或10所述的方法,其中,在所述将所述待处理图像输入第一神经网络之前,所述方法还包括:
    将训练图像输入所述第一神经网络,经由所述第一神经网络得到所述训练图像中的第三像素到所述第三像素所属的第一牙实例的中心像素的预测偏移量,所述第一牙实例对应的聚类参数,以及所述第三像素位于牙实例中心的概率,其中,所述第三像素表示所述训练图像中的任一像素,所述第一牙实例表示所述第三像素所属的牙实例;
    根据所述第三像素的坐标,以及所述第三像素到所述第一牙实例的中心像素的预测偏移量,确定所述第三像素所指向的牙实例中心的预测坐标,其中,所述第三像素所指向的牙实例中心的预测坐标表示基于所述第三像素预测的所述第一牙实例的中心像素的坐标;
    根据所述第三像素所指向的牙实例中心的预测坐标,属于所述第一牙实例的不同像素所指向的牙实例中心的预测坐标,以及所述第一牙实例对应的聚类参数,确定所述第三像素属于所述第一牙实例的中心的概率;
    根据所述第三像素位于牙实例中心的概率,所述第三像素属于所述第一牙实例的中心的概率,以及所述第三像素属于牙齿内部的真值,训练所述第一神经网络。
  12. 根据权利要求1至11中任意一项所述的方法,其中,所述基于所述牙实例分割结果进行牙位定位,得到所述待处理图像的牙位定位结果,包括:
    预测所述牙实例分割结果中的第二牙实例包含的像素所属的牙位类别;其中,所述第二牙实例表示所述牙实例分割结果中的任一牙实例;
    根据所述第二牙实例包含的像素所属的牙位类别,确定所述第二牙实例所属的牙位类别。
  13. 根据权利要求1至12中任意一项所述的方法,其中,
    在所述对待处理图像进行牙实例分割之前,所述方法还包括:将待分割图像降采样至第一分辨率,得到第一图像;根据所述第一图像,得到所述待处理图像;
    在所述得到所述待处理图像的牙实例分割结果之后,所述方法还包括:根据所述待分割图像,得到第二图像;其中,所述第二图像的分辨率为第二分辨率,所述第二分辨率高于所述第一分辨率;根据所述牙实例分割结果中的第三牙实例的中心像素的坐标,从所述第二图像中,裁剪出所述第三牙实例对应的图像;其中,所述第三牙实例表示所述牙实例分割结果中的任一牙实例;对所述第三牙实例对应的图像进行分割,得到所述第三牙实例在所述第二分辨率下的分割结果。
  14. 根据权利要求1至13中任意一项所述的方法,其中,在所述对待处理图像进行牙实例分割之前,所述方法还包括:
    根据待分割图像进行上下牙分割,确定所述待分割图像中的感兴趣区域;
    根据所述感兴趣区域,对所述待分割图像进行裁剪,得到所述待处理图像。
  15. 一种牙齿图像的处理装置,包括:
    牙实例分割模块,配置为对待处理图像进行牙实例分割,得到所述待处理图像的牙实例分割结果;其中,一个牙实例对应于一颗牙齿,所述牙实例分割结果包括所述待处理图像中的像素所属的牙实例的信息;
    牙位定位模块,配置为基于所述牙实例分割结果进行牙位定位,得到所述待处理图像的牙位定位结果。
  16. 一种电子设备,包括:
    一个或多个处理器;
    配置为存储可执行指令的存储器;
    其中,所述一个或多个处理器被配置为调用所述存储器存储的可执行指令,执行权利要求1至14中任意一项所述的方法。
  17. 一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现权利要求1至14中任意一项所述的方法。
  18. 一种计算机程序,所述计算机程序包括计算机可读代码,在所述计算机可读代码在电子设备中运行的情况下,所述电子设备的处理器执行用于实现如权利要求1至14任意一项所述的牙齿图像的处理方法。
PCT/CN2021/089058 2020-11-10 2021-04-22 牙齿图像的处理方法、装置、电子设备、存储介质及程序 WO2022100005A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021576347A JP2023504957A (ja) 2020-11-10 2021-04-22 歯画像の処理方法、装置、電子機器、記憶媒体及びプログラム
KR1020227001270A KR20220012991A (ko) 2020-11-10 2021-04-22 치아 이미지 처리 방법, 장치, 전자 기기, 저장 매체 및 프로그램

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011246718.0 2020-11-10
CN202011246718.0A CN112308867B (zh) 2020-11-10 2020-11-10 牙齿图像的处理方法及装置、电子设备和存储介质

Publications (1)

Publication Number Publication Date
WO2022100005A1 true WO2022100005A1 (zh) 2022-05-19

Family

ID=74325454

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089058 WO2022100005A1 (zh) 2020-11-10 2021-04-22 牙齿图像的处理方法、装置、电子设备、存储介质及程序

Country Status (2)

Country Link
CN (1) CN112308867B (zh)
WO (1) WO2022100005A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308867B (zh) * 2020-11-10 2022-07-22 上海商汤智能科技有限公司 牙齿图像的处理方法及装置、电子设备和存储介质
CN112785609B (zh) * 2021-02-07 2022-06-03 重庆邮电大学 一种基于深度学习的cbct牙齿分割方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105741288A (zh) * 2016-01-29 2016-07-06 北京正齐口腔医疗技术有限公司 牙齿图像的分割方法和装置
CN109389129A (zh) * 2018-09-15 2019-02-26 北京市商汤科技开发有限公司 一种图像处理方法、电子设备及存储介质
CN110619646A (zh) * 2019-07-23 2019-12-27 同济大学 一种基于全景图的单牙提取方法
CN110689564A (zh) * 2019-08-22 2020-01-14 浙江工业大学 一种基于超像素聚类的牙弓线绘制方法
CN112308867A (zh) * 2020-11-10 2021-02-02 上海商汤智能科技有限公司 牙齿图像的处理方法及装置、电子设备和存储介质

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105761252B (zh) * 2016-02-02 2017-03-29 北京正齐口腔医疗技术有限公司 图像分割的方法及装置
EP3462373A1 (en) * 2017-10-02 2019-04-03 Promaton Holding B.V. Automated classification and taxonomy of 3d teeth data using deep learning methods
US11534272B2 (en) * 2018-09-14 2022-12-27 Align Technology, Inc. Machine learning scoring system and methods for tooth position assessment
CN109801307A (zh) * 2018-12-17 2019-05-24 中国科学院深圳先进技术研究院 一种全景分割方法、装置及设备
CN109949319B (zh) * 2019-03-12 2022-05-20 北京羽医甘蓝信息技术有限公司 基于深度学习的全景片恒牙识别的方法和装置
CN109978886B (zh) * 2019-04-01 2021-11-09 北京市商汤科技开发有限公司 图像处理方法及装置、电子设备和存储介质
CN110033005A (zh) * 2019-04-08 2019-07-19 北京市商汤科技开发有限公司 图像处理方法及装置、电子设备和存储介质
US10878566B2 (en) * 2019-04-23 2020-12-29 Adobe Inc. Automatic teeth whitening using teeth region detection and individual tooth location
CN110348339B (zh) * 2019-06-26 2021-11-16 西安理工大学 一种基于实例分割的手写文档文本行的提取方法
CN110516527B (zh) * 2019-07-08 2023-05-23 广东工业大学 一种基于实例分割的视觉slam回环检测改进方法
CN110569854B (zh) * 2019-09-12 2022-03-29 上海商汤智能科技有限公司 图像处理方法及装置、电子设备和存储介质
CN110969655B (zh) * 2019-10-24 2023-08-18 百度在线网络技术(北京)有限公司 用于检测车位的方法、装置、设备、存储介质以及车辆
CN110930421B (zh) * 2019-11-22 2022-03-29 电子科技大学 一种用于cbct牙齿图像的分割方法
CN110974288A (zh) * 2019-12-26 2020-04-10 北京大学口腔医学院 一种牙周病cbct纵向数据记录及分析方法
CN111709959B (zh) * 2020-06-23 2022-07-15 杭州口腔医院集团有限公司 一种口腔正畸数字化智能诊断方法

Also Published As

Publication number Publication date
CN112308867B (zh) 2022-07-22
CN112308867A (zh) 2021-02-02
