WO2023198101A1 - Artificial intelligence-based oral cavity examination method and apparatus, electronic device, and medium - Google Patents

Artificial intelligence-based oral cavity examination method and apparatus, electronic device, and medium

Info

Publication number
WO2023198101A1
WO2023198101A1 · PCT/CN2023/087798 · CN2023087798W
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
oral cavity
lesion
area
model
Prior art date
Application number
PCT/CN2023/087798
Other languages
French (fr)
Chinese (zh)
Inventor
王嘉磊
皮成祥
张健
江腾飞
Original Assignee
先临三维科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 先临三维科技股份有限公司
Publication of WO2023198101A1 publication Critical patent/WO2023198101A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30036Dental; Teeth

Definitions

  • Embodiments of the present disclosure relate to the technical field of intelligent oral medicine, and in particular to an oral detection method, device, electronic device and medium based on artificial intelligence.
  • Traditionally, dentists identify and judge oral diseases based on their own experience, which requires substantial clinical experience and considerable effort; in addition, diseased areas are sometimes misdetected or missed entirely.
  • the present disclosure provides an oral cavity detection method, device, electronic device and medium based on artificial intelligence.
  • Embodiments of the present disclosure provide an oral cavity detection method based on artificial intelligence, which method includes:
  • the two-dimensional texture map is processed based on the pre-trained oral cavity detection model to obtain the lesion area;
  • An embodiment of the present disclosure also provides an oral cavity detection device based on artificial intelligence, which device includes:
  • the image acquisition module is used to obtain the two-dimensional texture map corresponding to the three-dimensional tooth model
  • a processing module used to process the two-dimensional texture map based on a pre-trained oral cavity detection model to obtain the lesion area
  • a back-projection module is used to back-project the lesion area to the three-dimensional tooth model to obtain the lesion position of the three-dimensional tooth model.
  • An embodiment of the present disclosure also provides an electronic device.
  • the electronic device includes: a processor and a memory for storing instructions executable by the processor; the processor is configured to read the executable instructions from the memory and execute them, so as to implement the artificial intelligence-based oral cavity detection method provided by the embodiments of the present disclosure.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, the storage medium stores a computer program, and the computer program is used to execute the oral cavity detection method based on artificial intelligence as provided by the embodiments of the present disclosure.
  • the artificial intelligence-based oral detection solution of the embodiment of the present disclosure obtains the two-dimensional texture map corresponding to the three-dimensional tooth model, processes the two-dimensional texture map based on the pre-trained oral cavity detection model to obtain the lesion area, and back-projects the lesion area onto the three-dimensional tooth model to obtain the lesion location of the three-dimensional tooth model.
  • the two-dimensional texture map corresponding to the three-dimensional tooth model can be detected to completely observe the information of the entire set of teeth, avoiding the problem of the diseased area being missed due to improper selection of the observation angle of the three-dimensional tooth model.
  • identifying the lesion area based on the pre-trained oral detection model can greatly improve the recognition accuracy of the lesion area.
  • the location of the lesion can be presented in the form of a three-dimensional model, making the presentation effect more intuitive and further improving the detection efficiency and effect in the oral detection scenario.
  • Figure 1 is a schematic flowchart of an artificial intelligence-based oral cavity detection method in one or more embodiments of the present disclosure
  • Figure 2 is a schematic flow chart of another oral cavity detection method based on artificial intelligence in one or more embodiments of the present disclosure
  • Figure 3a is a schematic diagram of a three-dimensional tooth model in one or more embodiments of the present disclosure
  • Figure 3b is a schematic diagram of a two-dimensional texture map in one or more embodiments of the present disclosure
  • Figure 3c is a schematic diagram of another two-dimensional texture map in one or more embodiments of the present disclosure.
  • Figure 3d is a schematic diagram of yet another two-dimensional texture map in one or more embodiments of the present disclosure.
  • Figure 4 is a schematic diagram of another three-dimensional tooth model in one or more embodiments of the present disclosure.
  • Figure 5 is a schematic diagram of yet another three-dimensional tooth model in one or more embodiments of the present disclosure.
  • Figure 6 is a schematic structural diagram of an artificial intelligence-based oral cavity detection device in one or more embodiments of the present disclosure
  • Figure 7 is a schematic structural diagram of an electronic device in one or more embodiments of the present disclosure.
  • the present disclosure proposes an oral cavity detection method based on artificial intelligence.
  • the two-dimensional texture map is processed based on the pre-trained oral cavity detection model to obtain the lesion area, and the lesion area is back-projected onto the three-dimensional tooth model to obtain the lesion location of the three-dimensional tooth model, so that the location of the lesion can be quickly and accurately located and identified in an automated manner.
  • FIG. 1 is a schematic flowchart of an artificial intelligence-based oral cavity detection method provided by an embodiment of the present disclosure.
  • the method can be executed by an artificial intelligence-based oral cavity detection device, where the device can be implemented in software and/or hardware and can generally be integrated into electronic equipment. As shown in Figure 1, the method includes:
  • Step 101 Obtain the two-dimensional texture map corresponding to the three-dimensional tooth model.
  • the three-dimensional tooth model can be any three-dimensional tooth model.
  • the embodiment of the present disclosure does not limit the source of the three-dimensional tooth model.
  • the three-dimensional tooth model can be obtained by scanning the upper or lower row of teeth in the human mouth in real time with a scanning device; it can also be a three-dimensional tooth model corresponding to the upper or lower row of teeth obtained from a download address or sent by another device.
  • the two-dimensional texture map refers to a two-dimensional plane texture image obtained by unfolding the three-dimensional tooth model through mesh parameterization, so that the information of the entire set of teeth can be observed completely and the problem of the lesion area being missed due to an improperly chosen observation angle can be avoided.
  • in one approach, the three-dimensional tooth model is subjected to mesh parameterization processing to obtain the two-dimensional texture map corresponding to the three-dimensional tooth model.
  • in another approach, multiple three-dimensional coordinate points corresponding to the three-dimensional tooth model are obtained, the three-dimensional coordinate points are converted based on a dimension conversion model to obtain multiple two-dimensional coordinate points, and the two-dimensional texture map is constructed from those two-dimensional coordinate points.
  • the above two methods are only examples of obtaining the two-dimensional texture map corresponding to the three-dimensional tooth model.
  • the embodiments of the present disclosure do not limit the specific methods of obtaining the two-dimensional texture map corresponding to the three-dimensional tooth model.
  • the mapping relationship between the three-dimensional vertex coordinates on the three-dimensional tooth model and the two-dimensional coordinates of the parameter-domain plane is obtained, so that the three-dimensional tooth model can be flattened into a two-dimensional plane texture image.
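As an illustration of the flattening idea described above (a hedged sketch, not the patent's actual algorithm), the snippet below unfolds a single 3-D triangle into the plane while preserving its edge lengths, and therefore its angles:

```python
import numpy as np

def flatten_triangle(p0, p1, p2):
    """Map one 3-D triangle onto the 2-D plane while preserving
    its edge lengths (and therefore its interior angles)."""
    e1 = np.linalg.norm(p1 - p0)               # length of edge p0-p1
    e2 = np.linalg.norm(p2 - p0)               # length of edge p0-p2
    # interior angle at p0 between the two edges
    cos_a = np.dot(p1 - p0, p2 - p0) / (e1 * e2)
    a = np.arccos(np.clip(cos_a, -1.0, 1.0))
    # place p0 at the origin and edge p0-p1 along the x-axis
    q0 = np.array([0.0, 0.0])
    q1 = np.array([e1, 0.0])
    q2 = np.array([e2 * np.cos(a), e2 * np.sin(a)])
    return q0, q1, q2

tri = [np.array([0.0, 0.0, 0.0]),
       np.array([1.0, 0.0, 1.0]),
       np.array([0.0, 1.0, 1.0])]
q0, q1, q2 = flatten_triangle(*tri)
```

A full mesh parameterization must additionally reconcile the shared edges of neighbouring triangles, which is what the linear system discussed later in the document handles.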
  • Step 102 Process the two-dimensional texture map based on the pre-trained oral cavity detection model to obtain the lesion area.
  • the oral cavity detection model is an oral cavity detection model pre-trained for lesion area recognition.
  • the specific training process will be described in detail in subsequent embodiments and will not be described in detail here.
  • the lesion area refers to the area corresponding to dental lesions such as dental caries, dental calculus, and pigmentation on the two-dimensional texture map.
  • the two-dimensional texture map can be processed based on the pre-trained oral cavity detection model to obtain the lesion area in any of several ways.
  • in one approach, the two-dimensional texture map is detected based on the target detection network in the oral cavity detection model to obtain multiple tooth regions; each tooth region is then identified based on the semantic segmentation network in the oral cavity detection model to obtain a recognition result for each tooth region, and the lesion area is determined from those recognition results.
  • in another approach, feature extraction is performed on the two-dimensional texture map based on the oral cavity detection model to obtain multiple feature values; each feature value is compared with a preset feature threshold, and the lesion area is determined based on the comparison results.
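A minimal sketch of the threshold-comparison variant; the feature values and threshold below are invented purely for illustration:

```python
import numpy as np

# hypothetical per-pixel feature values produced by the detection model
features = np.array([[0.12, 0.80, 0.91],
                     [0.05, 0.77, 0.10],
                     [0.88, 0.95, 0.02]])
THRESHOLD = 0.75  # preset feature threshold (illustrative value)

lesion_mask = features > THRESHOLD   # True where a pixel is flagged as lesion
ys, xs = np.nonzero(lesion_mask)     # pixel coordinates of the lesion area
```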
  • the above two approaches are only examples of processing the two-dimensional texture map based on the pre-trained oral cavity detection model to obtain the lesion area; the embodiments of the present disclosure do not limit the specific method used.
  • after the two-dimensional texture map is obtained, it can be processed based on the pre-trained oral cavity detection model to obtain the lesion area.
  • Step 103 Back-project the lesion area to the three-dimensional tooth model to obtain the lesion position of the three-dimensional tooth model.
  • the lesion area refers to the area corresponding to dental lesions such as dental caries, dental calculus, and pigmentation on the two-dimensional texture map. Therefore, the lesion area needs to be back-projected to the three-dimensional tooth model to obtain the lesion location of the three-dimensional tooth model.
  • the location of the lesion refers to the location corresponding to dental lesions such as dental caries, dental calculus, and pigmentation on the three-dimensional tooth model.
  • in one approach, the two-dimensional coordinate points corresponding to the lesion area are obtained; the three-dimensional coordinate points corresponding to those two-dimensional coordinate points are then obtained from a pre-stored two-dimensional-to-three-dimensional coordinate mapping relationship table, and the lesion location of the three-dimensional tooth model is determined based on the three-dimensional coordinate points.
  • in another approach, each coordinate point of the three-dimensional tooth model projected onto the two-dimensional plane is obtained, the target coordinate points matching the lesion area are found, and the positions of those target coordinate points mapped back onto the three-dimensional tooth model give the lesion location.
  • the above two approaches are only examples of back-projecting the lesion area onto the three-dimensional tooth model to obtain the lesion position; the embodiments of the present disclosure do not limit the specific way in which this is done.
  • the lesion area can be back-projected to the three-dimensional tooth model to obtain the lesion position of the three-dimensional tooth model, and the lesion position can be presented in the form of the three-dimensional model, making the presentation effect more intuitive.
  • the artificial intelligence-based oral detection solution of the embodiment of the present disclosure obtains the two-dimensional texture map corresponding to the three-dimensional tooth model, processes the two-dimensional texture map based on the pre-trained oral cavity detection model to obtain the lesion area, and back-projects the lesion area onto the three-dimensional tooth model to obtain the lesion location of the three-dimensional tooth model.
  • the two-dimensional texture map corresponding to the three-dimensional tooth model can be detected to completely observe the information of the entire set of teeth, avoiding the problem of the diseased area being missed due to improper selection of the observation angle of the three-dimensional tooth model.
  • identifying the lesion area based on the pre-trained oral detection model can greatly improve the recognition accuracy of the lesion area.
  • the location of the lesion can be presented in the form of a three-dimensional model, making the presentation effect more intuitive and further improving the detection efficiency and effect in the oral detection scenario.
  • detection in the form of a two-dimensional texture map can completely observe the information of the entire set of teeth, avoiding the problem of missing the diseased area in the three-dimensional model due to improper selection of observation angles.
  • the lesion location can be presented in the form of a three-dimensional model, making the presentation more intuitive; finally, a non-occluded observation angle is calculated based on the lesion location, allowing dentists to quickly locate and count lesion locations and effectively avoid omissions.
  • by first detecting the tooth areas in the two-dimensional texture map and then segmenting and identifying each tooth area, the recognition accuracy for small lesion areas such as dental caries can be further improved. How to train the oral cavity detection model, and how to associate the observation angle of the screenshot with the observation angle of the three-dimensional tooth model so that the dentist can quickly locate and count lesion locations, are described in detail below in conjunction with Figure 2.
  • FIG. 2 is a schematic flow chart of another oral cavity detection method based on artificial intelligence provided by an embodiment of the present disclosure. Based on the above embodiment, this embodiment further optimizes the above oral cavity detection method based on artificial intelligence. As shown in Figure 2, the method includes:
  • Step 201 Perform mesh parameterization processing on the three-dimensional tooth model to obtain a two-dimensional texture map.
  • optionally, multiple three-dimensional vertices corresponding to the three-dimensional tooth model are obtained, and these vertices form multiple triangles. A system of linear equations is established based on the angle between two adjacent sides of each triangle and the ratio of the lengths of those sides; the system is then solved under preset initial conditions to obtain two-dimensional parameters, and the two-dimensional texture map is constructed from these parameters.
  • the three-dimensional tooth model is composed of multiple three-dimensional vertices.
  • the mapping relationship between the three-dimensional vertex coordinates on the three-dimensional tooth model and the two-dimensional coordinates of the parameter-domain plane is obtained, thereby flattening the three-dimensional tooth model into a two-dimensional texture map.
  • specifically, a system of linear equations is established from the angle between the two adjacent sides of each triangle and the ratio of those sides; once the initial conditions are given, the system can be solved quickly to obtain the parameterized two-dimensional texture map.
  • the preset initial conditions are constraints on the solution. For example, the three interior angles of each triangle should remain as unchanged as possible; as another example, the unfolded two-dimensional texture map should be as compact as possible; as yet another example, the time required to compute the two-dimensional texture map should be as short as possible. These conditions are set according to the needs of the application scenario. In the embodiment of the present disclosure, the angles of the triangular patches are kept as consistent as possible, so that the teeth on the unfolded two-dimensional texture map are not distorted in a way that would affect subsequent recognition, further improving the accuracy of oral cavity detection.
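The patent does not spell out its angle-based linear system, so the sketch below uses a Tutte-style parameterization as a hedged stand-in: the boundary vertices are pinned (playing the role of the "initial conditions") and each interior vertex is required to be the average of its neighbours, which likewise reduces unwrapping to solving a linear system:

```python
import numpy as np

# Toy mesh: vertices 0-3 form a pinned boundary square, vertex 4 is interior.
neighbours = {4: [0, 1, 2, 3]}                       # interior adjacency
pinned = {0: (0.0, 0.0), 1: (1.0, 0.0),              # boundary UV positions
          2: (1.0, 1.0), 3: (0.0, 1.0)}              # (the initial conditions)

n = 5
A = np.zeros((n, n))
bu = np.zeros(n)   # right-hand side for the u coordinate
bv = np.zeros(n)   # right-hand side for the v coordinate
for idx, (u_fix, v_fix) in pinned.items():           # boundary rows: identity
    A[idx, idx] = 1.0
    bu[idx], bv[idx] = u_fix, v_fix
for idx, nbrs in neighbours.items():                 # interior rows:
    A[idx, idx] = 1.0                                # vertex - mean(nbrs) = 0
    for w in nbrs:
        A[idx, w] = -1.0 / len(nbrs)

u = np.linalg.solve(A, bu)
v = np.linalg.solve(A, bv)
```

With the square boundary pinned, the interior vertex lands at the centroid (0.5, 0.5); on a real tooth mesh the same structure, with angle-based weights, yields the low-distortion unfolding the text describes.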
  • Figure 3a is a schematic diagram of a three-dimensional tooth model provided by an embodiment of the present disclosure.
  • Figure 3a shows the original three-dimensional tooth model.
  • the obtained two-dimensional texture map corresponding to the three-dimensional tooth model is shown in Figure 3b.
  • Figure 3b shows a schematic diagram of the obtained two-dimensional texture map.
  • Step 202 Obtain the two-dimensional texture map sample corresponding to each three-dimensional tooth model sample; wherein, the two-dimensional texture map sample includes marked tooth areas and marked lesion areas.
  • optionally, the two-dimensional texture map samples are input into the target detection network for detection to obtain the training tooth areas, and the training tooth areas are input into the semantic segmentation network for recognition to obtain the training lesion areas.
  • Step 203 Adjust the network parameters of the target detection network based on the first comparison result between the training tooth area and the marked tooth area, and adjust the network parameters of the semantic segmentation network based on the second comparison result between the training lesion area and the marked lesion area, to obtain an oral cavity detection model.
  • the annotated two-dimensional texture map constitutes a training sample.
  • the target detection network identifies the position of the teeth in the two-dimensional texture map to obtain each tooth region.
  • the semantic segmentation network classifies each pixel within each tooth region of the corresponding image to determine whether the category of each pixel belongs to the lesion area.
  • the network parameters of the target detection network are adjusted based on the first comparison result of the training tooth area and the marked tooth area, and the network parameters of the semantic segmentation network are adjusted based on the second comparison result of the training lesion area and the marked lesion area, continuously optimizing the target detection network and semantic segmentation network until the loss value is less than the preset threshold, to obtain the oral cavity detection model.
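The stopping rule above — adjust parameters from the comparison loss until it drops below a preset threshold — can be illustrated with a toy one-weight "network" standing in for the detection and segmentation networks (all values here are invented):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0])   # toy inputs
y = 2.0 * X                          # "annotations" (marked areas)
w = 0.0                              # the single network parameter
LR = 0.01                            # learning rate
LOSS_THRESHOLD = 1e-6                # preset threshold from the text

while True:
    pred = w * X
    loss = np.mean((pred - y) ** 2)          # comparison with the labels
    if loss < LOSS_THRESHOLD:                # stop once below the threshold
        break
    w -= LR * np.mean(2 * (pred - y) * X)    # gradient step on the parameter
```

In the patent's setting the same loop runs over two networks with their own losses (tooth-box comparison for detection, pixel-wise comparison for segmentation), but the convergence criterion is identical.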
  • Step 204 Detect the two-dimensional texture map based on the target detection network in the oral cavity detection model to obtain multiple tooth regions; identify each tooth region based on the semantic segmentation network in the oral cavity detection model to obtain a recognition result for each tooth region; and determine the lesion area based on the recognition results.
  • the target detection network detects a single tooth region
  • the semantic segmentation network identifies the lesion area of a single tooth region, and re-overlays the lesion area within a single tooth region to the original two-dimensional texture map.
  • Figure 3c is a schematic diagram of another two-dimensional texture map provided by an embodiment of the present disclosure.
  • Figure 3c shows the tooth areas obtained by target detection; each tooth area is then identified based on the semantic segmentation network in the oral cavity detection model, yielding, for example, the lesion area 11 shown in Figure 3d.
  • the small boxes in Figure 3c are obtained through deep-learning target detection. Using multiple boxes to select the tooth areas focuses attention on a single area at a time (that is, the subsequent segmentation network only processes the image data within a single box), which identifies small diseased areas such as dental caries more accurately than segmenting the entire unfolded image.
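The detect-then-segment flow can be sketched as follows; `segment_crop` is a placeholder for the real segmentation network (it simply flags dark pixels), and the image and box coordinates are invented:

```python
import numpy as np

def segment_crop(crop):
    """Placeholder for the semantic-segmentation network: flags dark
    pixels as 'lesion' so the pipeline shape can be demonstrated."""
    return crop < 0.3

def detect_then_segment(texture, boxes):
    """Run segmentation only inside each detected tooth box, then overlay
    the per-box masks back onto a full-image lesion mask."""
    lesion = np.zeros(texture.shape, dtype=bool)
    for x0, y0, x1, y1 in boxes:            # boxes from the detection network
        crop = texture[y0:y1, x0:x1]        # attend to a single tooth region
        lesion[y0:y1, x0:x1] |= segment_crop(crop)
    return lesion

texture = np.ones((8, 8))
texture[2, 2] = 0.1       # a "lesion" pixel inside the detected box
texture[6, 6] = 0.1       # a dark pixel outside every box: never segmented
mask = detect_then_segment(texture, boxes=[(1, 1, 4, 4)])
```

Note how the pixel at (6, 6) is ignored: segmentation only ever sees the cropped box contents, which is the attention-focusing effect the paragraph above describes.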
  • Step 205 Obtain the two-dimensional coordinate points corresponding to the lesion area, obtain the corresponding three-dimensional coordinate points based on the pre-stored two-dimensional-to-three-dimensional coordinate mapping relationship table, and determine the lesion location of the three-dimensional tooth model based on the three-dimensional coordinate points.
  • when the three-dimensional tooth model is unfolded into the two-dimensional texture map, the one-to-one correspondence between the vertices of the three-dimensional tooth model and the two-dimensional point coordinates is saved. When a point on the two-dimensional texture map is identified as a lesion, the coordinates of the corresponding three-dimensional point can therefore be queried directly.
  • the lesion area 11 is back-projected onto the three-dimensional tooth model, as shown in Figure 4, which shows the lesion location 21 of the three-dimensional tooth model.
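Because the correspondence is saved at unwrap time, back-projection reduces to a table lookup. The sketch below uses invented coordinates and a plain dictionary as the "mapping relationship table":

```python
# Hypothetical table saved during unwrapping: each 2-D texture coordinate
# maps back to its 3-D vertex (all coordinates invented for illustration).
uv_to_vertex = {
    (0.25, 0.40): (1.2, 3.4, 0.8),
    (0.26, 0.41): (1.3, 3.4, 0.9),
    (0.70, 0.10): (5.0, 1.1, 2.2),
}

# 2-D points the detection model flagged as lesion
lesion_uv = [(0.25, 0.40), (0.26, 0.41)]

# back-projection: look each flagged point up in the saved table
lesion_vertices = [uv_to_vertex[p] for p in lesion_uv]
```

In practice detected pixels rarely hit stored coordinates exactly, so a real implementation would snap to the nearest stored sample or interpolate within the containing triangle; the lookup structure is the same.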
  • Step 206 Calculate the observation angle based on the lesion location, and obtain and display the target image based on the observation angle.
  • the observation angle refers to the visual angle from which the lesion location is observed without obstruction.
  • optionally, the average normal direction of the lesion location is obtained; the positions of the two adjacent teeth are determined based on the lesion location; the target straight-line direction is determined from the positions of those two adjacent teeth; the target angle is determined from the average normal direction and the target straight-line direction; and the observation angle is determined based on the target angle and a preset angle threshold.
  • the non-occluded observation angle is calculated based on the location of the lesion, and the target image is obtained and displayed based on the observation angle.
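A hedged sketch of that occlusion test: compare the angle between the average normal and the line through the two adjacent teeth against a threshold, and fall back to a direction perpendicular to that line when the view would be blocked. The function name, threshold value, and the assumption that the tooth row is roughly horizontal are all illustrative, not from the patent:

```python
import numpy as np

def observation_direction(avg_normal, tooth_a, tooth_b, threshold_deg=30.0):
    """Pick an unoccluded viewing direction for a lesion: use the average
    surface normal unless it points too nearly along the line through the
    two neighbouring teeth, in which case view perpendicular to that line."""
    n = avg_normal / np.linalg.norm(avg_normal)
    line = tooth_b - tooth_a
    line = line / np.linalg.norm(line)
    # target angle between the normal and the tooth-row line (0..90 degrees)
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(n, line)), 0.0, 1.0)))
    if angle < threshold_deg:   # normal nearly along the row: view is occluded
        # a direction perpendicular to the tooth-row line is unobstructed
        # (cross with "up"; assumes the row is not vertical)
        perp = np.cross(line, np.array([0.0, 0.0, 1.0]))
        return perp / np.linalg.norm(perp)
    return n

# normal along the tooth row: occluded, so a perpendicular view is chosen
d1 = observation_direction(np.array([1.0, 0.0, 0.0]),
                           np.array([0.0, 0.0, 0.0]),
                           np.array([3.0, 0.0, 0.0]))
# normal perpendicular to the row: unoccluded, the normal itself is used
d2 = observation_direction(np.array([0.0, 0.0, 1.0]),
                           np.array([0.0, 0.0, 0.0]),
                           np.array([3.0, 0.0, 0.0]))
```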
  • a target image of the specific lesion location is provided, which makes it convenient for the dentist to quickly locate and count lesion locations and effectively avoid omissions.
  • Step 207 Correlate the observation perspective with the perspective of the three-dimensional tooth model.
  • Figure 5 is a schematic diagram of yet another three-dimensional tooth model provided by an embodiment of the present disclosure.
  • the tooth number of each tooth can be obtained through automated or manual marking, such as the numbers 1-16 marked in Figure 5. Assuming the lesion location is position A in Figure 5, the average normal direction of the triangular patches in the observation area of position A is the direction of arrow 1 (it can be seen that, observed from this direction, position A would be blocked by tooth No. 5).
  • when the formed angle θ is less than the preset threshold, it indicates that there is occlusion, and the observation angle is forcibly changed to the direction perpendicular to the line of the two tooth numbers (the direction of arrow 3 in Figure 5), thereby avoiding the observation-angle occlusion problem.
  • a non-occluded observation angle is calculated based on the lesion location, and the target image is automatically obtained and displayed.
  • the observation angle of the target image is associated with the observation angle of the three-dimensional tooth model, and the observation area of the specific lesion location is provided, so that dentists can quickly locate and count lesion locations and effectively avoid omissions.
  • the artificial intelligence-based oral cavity detection solution provided by the embodiment of the present disclosure performs mesh parameterization processing on the three-dimensional tooth model to obtain the two-dimensional texture map, and obtains the two-dimensional texture map sample corresponding to each three-dimensional tooth model sample, where each sample includes a marked tooth area and a marked lesion area.
  • the network parameters of the target detection network are adjusted based on the first comparison result between the training tooth area and the marked tooth area, and the network parameters of the semantic segmentation network are adjusted based on the second comparison result between the training lesion area and the marked lesion area, to obtain the oral cavity detection model.
  • the two-dimensional texture map is detected based on the target detection network in the oral cavity detection model to obtain multiple tooth regions; each tooth region is identified based on the semantic segmentation network to obtain a recognition result, and the lesion area is determined based on the recognition result of each tooth region.
  • the lesion position of the three-dimensional tooth model is determined based on the three-dimensional coordinate points; the observation area is obtained based on the lesion position and its average normal direction is computed; the positions of the two adjacent teeth are determined based on the lesion location and used to determine the target straight-line direction; the target angle is determined from the average normal direction and the target straight-line direction; the observation angle is determined based on the target angle and the preset angle threshold; and the observation angle is associated with the angle of view of the three-dimensional tooth model.
  • the two-dimensional texture map corresponding to the three-dimensional tooth model can be detected to completely observe the information of the entire set of teeth, avoiding the problem of the diseased area being missed due to improper selection of the observation angle of the three-dimensional tooth model.
  • by detecting the tooth areas in the two-dimensional texture map and then segmenting and identifying lesions within each tooth area, the recognition accuracy for small lesion areas such as dental caries can be further improved; and by training the oral cavity detection model and associating the observation angle of the screenshot with the observation angle of the three-dimensional tooth model, the dentist can more easily locate and count lesion locations quickly.
  • the location of the lesion can be presented in the form of a three-dimensional model, making the presentation more intuitive.
  • a target picture corresponding to the specific lesion location is provided, which makes it convenient for dentists to quickly locate and count lesion locations, effectively avoiding omissions and further improving detection efficiency and effectiveness in oral examination scenarios.
  • Figure 6 is a schematic structural diagram of an oral cavity detection device based on artificial intelligence provided by an embodiment of the present disclosure.
  • the device can be implemented by software and/or hardware, and can generally be integrated in electronic equipment. As shown in Figure 6, the device includes:
  • the picture acquisition module 301 is used to obtain the two-dimensional texture map corresponding to the three-dimensional tooth model
  • the processing module 302 is used to process the two-dimensional texture map based on the pre-trained oral cavity detection model to obtain the lesion area;
  • the back-projection module 303 is used to back-project the lesion area to the three-dimensional tooth model to obtain the lesion position of the three-dimensional tooth model.
  • the picture acquisition module 301 is specifically used to:
  • processing module 302 is specifically used to:
  • the diseased area is determined based on the identification result of each of the tooth areas.
  • the back-projection module 303 is specifically used for:
  • the lesion location of the three-dimensional tooth model is determined based on the three-dimensional coordinate points.
  • the device also includes:
  • calculation module 304, used to calculate the observation angle based on the lesion location;
  • the acquisition and display module 305 is used to acquire and display a target picture based on the observation perspective.
  • calculation module 304 is specifically used to:
  • the observation angle is determined based on the target angle and a preset angle threshold.
  • the device also includes:
  • a sample acquisition module is used to obtain a two-dimensional texture map sample corresponding to each three-dimensional tooth model sample; wherein the two-dimensional texture map sample includes a marked tooth area and a marked lesion area;
  • a detection module used to input the two-dimensional texture map sample into the target detection network for detection, to obtain a training tooth area;
  • a recognition module used to input the training tooth area into the semantic segmentation network for recognition, to obtain a training lesion area;
  • a training module configured to adjust the network parameters of the target detection network based on a first comparison result between the training tooth area and the marked tooth area, and to adjust the network parameters of the semantic segmentation network based on a second comparison result between the training lesion area and the marked lesion area, to obtain the oral cavity detection model.
  • the device also includes an association module for:
  • the observation angle is associated with the angle of view of the three-dimensional tooth model.
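The calculation module above derives an observation angle from the lesion location. The source does not give the formula, so the sketch below uses one plausible, hypothetical scheme: average the surface normals of the lesion-area vertices and use the unit mean normal as the direction from which to view the lesion. The function name and averaging choice are illustrative assumptions, not the patented method.

```python
import numpy as np

def observation_direction(lesion_normals):
    """Return a unit viewing direction for a lesion region.

    lesion_normals: (n, 3) array of unit surface normals sampled on the
    lesion-area vertices of the three-dimensional tooth model. The mean
    normal points "outward" from the lesion, so looking along it shows
    the lesion roughly face-on.
    """
    mean = np.asarray(lesion_normals, dtype=float).mean(axis=0)
    return mean / np.linalg.norm(mean)

# Two normals tilted between +y and +z: the viewing direction bisects them.
normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0]])
d = observation_direction(normals)
print(d)  # unit vector midway between +y and +z
```

A real implementation would additionally compare this direction against the preset angle threshold mentioned in the text before accepting it as the observation angle.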
  • the artificial intelligence-based oral cavity detection device provided by the embodiments of the present disclosure can execute the artificial intelligence-based oral cavity detection method provided by any embodiment of the present disclosure, and has functional modules corresponding to the execution of the method, as well as its beneficial effects.
  • Embodiments of the present disclosure also provide a computer program product, which includes a computer program/instruction. When executed by a processor, the computer program/instruction implements the artificial intelligence-based oral cavity detection method provided by any embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 7 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403.
  • the RAM 403 also stores various programs and data required for the operation of the electronic device 400.
  • the processing device 401, ROM 402 and RAM 403 are connected to each other via a bus 404.
  • An input/output (I/O) interface 405 is also connected to bus 404.
  • the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; storage devices 408 including, for example, magnetic tapes, hard disks, etc.; and a communication device 409.
  • the communication device 409 may allow the electronic device 400 to communicate wirelessly or wiredly with other devices to exchange data.
  • while FIG. 7 illustrates the electronic device 400 with various means, it should be understood that it is not required to implement or provide all of the illustrated means; more or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the methods illustrated in the flowcharts.
  • the computer program may be downloaded and installed from a network via the communication device 409, or installed from the storage device 408, or installed from the ROM 402.
  • when the computer program is executed by the processing device 401, the above-described functions defined in the artificial intelligence-based oral cavity detection method of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to: wire, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
  • the client and server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs. When the one or more programs are executed by the electronic device, the electronic device: receives the user's information display triggering operation during playback of a video; obtains at least two pieces of target information associated with the video; displays the first target information among the at least two pieces of target information in an information display area of the play page of the video, wherein the size of the information display area is smaller than the size of the play page; and receives the user's first switching triggering operation, and switches the first target information displayed in the information display area to the second target information among the at least two pieces of target information.
  • computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logic functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by combinations of special-purpose hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure can be implemented in software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
  • exemplary types of hardware logic components that can be used include, without limitation: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • more specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the present disclosure provides an electronic device, including:
  • a memory for storing instructions executable by the processor;
  • the processor, configured to read the executable instructions from the memory and execute the instructions to implement any of the artificial intelligence-based oral cavity detection methods provided by the present disclosure.
  • the present disclosure provides a computer-readable storage medium storing a computer program, the computer program being used to execute any of the artificial intelligence-based oral cavity detection methods provided by the present disclosure.
  • the artificial intelligence-based oral cavity detection solution provided by the present disclosure avoids the problem of lesion areas being missed due to an improper choice of observation angle for the three-dimensional tooth model, and identifies lesion areas based on a pre-trained oral cavity detection model, which can greatly improve the identification accuracy of lesion areas.
  • the location of the lesion can be presented in the form of a three-dimensional model, making the presentation effect more intuitive and further improving the detection efficiency and effect in oral detection scenarios.


Abstract

The present disclosure relates to an artificial intelligence-based oral cavity examination method and apparatus, an electronic device, and a medium. The method comprises: obtaining a two-dimensional texture map corresponding to a three-dimensional tooth model; processing the two-dimensional texture map on the basis of a pre-trained oral cavity examination model to obtain a lesion area; and back-projecting the lesion area onto the three-dimensional tooth model to obtain a lesion position of the three-dimensional tooth model. By means of the described technical solution, detection is performed on a two-dimensional texture map, which avoids the problem of a lesion area being missed; the lesion area is recognized on the basis of a pre-trained oral cavity examination model, which improves the precision of lesion-area recognition; and the lesion position is presented in the form of a three-dimensional model, which makes the presentation more intuitive and further improves examination efficiency and effectiveness in oral cavity examination scenarios.

Description

Artificial intelligence-based oral cavity detection method, apparatus, electronic device, and medium
This disclosure claims priority to the Chinese patent application No. 2022103836043, filed with the China Patent Office on April 12, 2022 and entitled "Artificial Intelligence-Based Oral Cavity Detection Method, Apparatus, Electronic Device, and Medium", the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present disclosure relate to the technical field of intelligent oral medicine, and in particular to an artificial intelligence-based oral cavity detection method, apparatus, electronic device, and medium.
Background
With the continuous development and progress of society and the continuous improvement of living standards, people are paying more and more attention to the condition of their teeth and oral cavity.
In the related art, oral diseases are identified and judged by dentists based on their own experience, which requires the dentist to have a certain amount of clinical experience and consumes a great deal of the dentist's energy; in addition, diseased areas may be misdetected or missed.
Summary
(1) Technical problems to be solved
In order to solve the above technical problems, or at least partially solve them, the present disclosure provides an artificial intelligence-based oral cavity detection method, apparatus, electronic device, and medium.
(2) Technical solutions
Embodiments of the present disclosure provide an artificial intelligence-based oral cavity detection method, the method including:
obtaining a two-dimensional texture map corresponding to a three-dimensional tooth model;
processing the two-dimensional texture map based on a pre-trained oral cavity detection model to obtain a lesion area; and
back-projecting the lesion area onto the three-dimensional tooth model to obtain a lesion position of the three-dimensional tooth model.
Embodiments of the present disclosure also provide an artificial intelligence-based oral cavity detection apparatus, the apparatus including:
a picture acquisition module, used to obtain a two-dimensional texture map corresponding to a three-dimensional tooth model;
a processing module, used to process the two-dimensional texture map based on a pre-trained oral cavity detection model to obtain a lesion area; and
a back-projection module, used to back-project the lesion area onto the three-dimensional tooth model to obtain a lesion position of the three-dimensional tooth model.
Embodiments of the present disclosure also provide an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is used to read the executable instructions from the memory and execute the instructions to implement the artificial intelligence-based oral cavity detection method provided by the embodiments of the present disclosure.
Embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program, the computer program being used to execute the artificial intelligence-based oral cavity detection method provided by the embodiments of the present disclosure.
(3) Beneficial effects
Compared with the prior art, the above technical solutions provided by the embodiments of the present disclosure have the following advantages:
The artificial intelligence-based oral cavity detection solution provided by the embodiments of the present disclosure obtains a two-dimensional texture map corresponding to a three-dimensional tooth model, processes the two-dimensional texture map based on a pre-trained oral cavity detection model to obtain a lesion area, and back-projects the lesion area onto the three-dimensional tooth model to obtain the lesion position of the three-dimensional tooth model. With this technical solution, performing detection on the two-dimensional texture map corresponding to the three-dimensional tooth model during oral examination makes it possible to observe the information of the entire set of teeth completely, avoiding the problem of lesion areas being missed due to an improper choice of observation angle for the three-dimensional tooth model. Identifying the lesion area based on a pre-trained oral cavity detection model can greatly improve the identification accuracy of the lesion area. In addition, the lesion position can be presented in the form of a three-dimensional model, making the presentation more intuitive and further improving detection efficiency and effectiveness in oral examination scenarios.
It should be understood that the foregoing general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1 is a schematic flowchart of an artificial intelligence-based oral cavity detection method in one or more embodiments of the present disclosure;
Figure 2 is a schematic flowchart of another artificial intelligence-based oral cavity detection method in one or more embodiments of the present disclosure;
Figure 3a is a schematic diagram of a three-dimensional tooth model in one or more embodiments of the present disclosure;
Figure 3b is a schematic diagram of a two-dimensional texture map in one or more embodiments of the present disclosure;
Figure 3c is a schematic diagram of another two-dimensional texture map in one or more embodiments of the present disclosure;
Figure 3d is a schematic diagram of yet another two-dimensional texture map in one or more embodiments of the present disclosure;
Figure 4 is a schematic diagram of another three-dimensional tooth model in one or more embodiments of the present disclosure;
Figure 5 is a schematic diagram of yet another three-dimensional tooth model in one or more embodiments of the present disclosure;
Figure 6 is a schematic structural diagram of an artificial intelligence-based oral cavity detection device in one or more embodiments of the present disclosure;
Figure 7 is a schematic structural diagram of an electronic device in one or more embodiments of the present disclosure.
Detailed description
In order to make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present disclosure.
In practical applications, detecting and identifying common oral diseases such as dental caries, dental calculus, and pigmentation requires a dentist to identify the diseased area, which requires the dentist to have a certain amount of clinical experience and consumes a great deal of the dentist's energy. In addition, an improper choice of observation angle during identification may lead to diseased areas being misdetected or missed.
In view of the above problems, the present disclosure proposes an artificial intelligence-based oral cavity detection method: a two-dimensional texture map corresponding to a three-dimensional tooth model is obtained, the two-dimensional texture map is processed based on a pre-trained oral cavity detection model to obtain a lesion area, and the lesion area is back-projected onto the three-dimensional tooth model to obtain the lesion position of the three-dimensional tooth model, so that the lesion position can be located and identified quickly and accurately in an automated manner.
Specifically, FIG. 1 is a schematic flowchart of an artificial intelligence-based oral cavity detection method provided by an embodiment of the present disclosure. The method can be executed by an artificial intelligence-based oral cavity detection apparatus, which can be implemented in software and/or hardware and can generally be integrated in an electronic device. As shown in FIG. 1, the method includes:
Step 101: obtain a two-dimensional texture map corresponding to a three-dimensional tooth model.
The three-dimensional tooth model can be any three-dimensional tooth model; the embodiments of the present disclosure do not limit its source. For example, the three-dimensional tooth model can be obtained by a scanning device scanning the upper or lower row of teeth in a patient's mouth in real time, or it can be a three-dimensional tooth model of the upper or lower row of teeth obtained from a download address or sent by another device. The two-dimensional texture map refers to a two-dimensional planar texture picture obtained by unfolding the three-dimensional tooth model through mesh parameterization, so that the information of the entire set of teeth can be observed completely, avoiding the problem of lesion areas being missed due to an improper choice of observation angle.
In the embodiments of the present disclosure, there are many ways to obtain the two-dimensional texture map corresponding to the three-dimensional tooth model. In some implementations, the three-dimensional tooth model is subjected to mesh parameterization to obtain the two-dimensional texture map: for example, the three-dimensional vertices of the three-dimensional tooth model are obtained; triangles are formed from these vertices; a system of linear equations is established based on the angle between two adjacent edges of each triangle and the ratio of the two edges; the system is solved under preset initial conditions to obtain two-dimensional parameters; and the two-dimensional texture map is constructed from the two-dimensional parameters.
In other implementations, a plurality of three-dimensional coordinate points of the three-dimensional tooth model are obtained, the three-dimensional coordinate points are converted based on a dimension conversion model to obtain a plurality of two-dimensional coordinate points, and the two-dimensional texture map is constructed from the two-dimensional coordinate points. The above two approaches are only examples; the embodiments of the present disclosure do not limit the specific way in which the two-dimensional texture map corresponding to the three-dimensional tooth model is obtained.
Specifically, after the three-dimensional tooth model, which is composed of a plurality of three-dimensional vertices, is obtained, the mapping from the three-dimensional vertex coordinates on the model to two-dimensional coordinates in the parameter-domain plane is computed, thereby flattening the three-dimensional tooth model into a two-dimensional planar texture picture.
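The parameterization step above amounts to solving a linear system that assigns each mesh vertex a two-dimensional coordinate. As a minimal sketch, the example below uses a Tutte-style embedding (boundary vertices pinned to a convex polygon, each interior vertex solved as the average of its neighbours) as a simplified stand-in for the angle/edge-ratio linear system described in the text; the mesh data and function name are hypothetical.

```python
import numpy as np

def tutte_parameterize(n_vertices, edges, boundary_uv):
    """Return an (n_vertices, 2) array of UV coordinates.

    edges: list of (i, j) undirected mesh edges.
    boundary_uv: dict {vertex_index: (u, v)} pinning boundary vertices
    to a convex polygon in the parameter plane.
    """
    # build vertex adjacency lists
    nbrs = [[] for _ in range(n_vertices)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)

    interior = [v for v in range(n_vertices) if v not in boundary_uv]
    idx = {v: k for k, v in enumerate(interior)}

    # linear system: deg(v) * uv[v] - sum(interior neighbours) = sum(pinned neighbours)
    A = np.zeros((len(interior), len(interior)))
    b = np.zeros((len(interior), 2))
    for v in interior:
        A[idx[v], idx[v]] = len(nbrs[v])
        for w in nbrs[v]:
            if w in boundary_uv:
                b[idx[v]] += boundary_uv[w]
            else:
                A[idx[v], idx[w]] -= 1.0

    uv = np.zeros((n_vertices, 2))
    for v, p in boundary_uv.items():
        uv[v] = p
    if interior:
        uv[interior] = np.linalg.solve(A, b)
    return uv

# Tiny example: a square boundary with one interior vertex (index 4).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
boundary = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
uv = tutte_parameterize(5, edges, boundary)
print(uv[4])  # interior vertex lands at the centroid of its pinned neighbours
```

The resulting per-vertex UV coordinates are exactly the mapping table that the back-projection step (Step 103) later inverts.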
Step 102: process the two-dimensional texture map based on a pre-trained oral cavity detection model to obtain a lesion area.
The oral cavity detection model is a model pre-trained for lesion area recognition; the training process is described in detail in subsequent embodiments and is not repeated here. The lesion area refers to the area of the two-dimensional texture map corresponding to dental lesions such as dental caries, dental calculus, and pigmentation.
In the embodiments of the present disclosure, there are many ways to process the two-dimensional texture map based on the pre-trained oral cavity detection model to obtain the lesion area. In some implementations, the two-dimensional texture map is detected by a target detection network in the oral cavity detection model to obtain a plurality of tooth areas; each tooth area is then identified by a semantic segmentation network in the model to obtain a recognition result for each tooth area; and the lesion area is determined based on those recognition results.
In other implementations, feature extraction is performed on the two-dimensional texture map based on the oral cavity detection model to obtain a plurality of feature values, each feature value is compared with a preset feature threshold, and the lesion area is determined based on the comparison results. The above two approaches are only examples; the embodiments of the present disclosure do not limit the specific way in which the two-dimensional texture map is processed to obtain the lesion area.
In the embodiments of the present disclosure, after the two-dimensional texture map is obtained, it can be processed based on the pre-trained oral cavity detection model to obtain the lesion area.
Step 103: Back-project the lesion area onto the three-dimensional tooth model to obtain the lesion position on the three-dimensional tooth model.
As noted above, the lesion area refers to the area on the two-dimensional texture map corresponding to dental lesions such as caries, dental calculus, and pigmentation; the lesion area therefore needs to be back-projected onto the three-dimensional tooth model to obtain the lesion position. The lesion position refers to the position on the three-dimensional tooth model corresponding to such dental lesions.
In the embodiments of the present disclosure, there are many ways to back-project the lesion area onto the three-dimensional tooth model to obtain the lesion position. In some embodiments, the two-dimensional coordinate points corresponding to the lesion area are obtained, the three-dimensional coordinate points corresponding to those two-dimensional coordinate points are looked up in a pre-stored dimensional coordinate mapping table, and the lesion position on the three-dimensional tooth model is determined based on the three-dimensional coordinate points.
In other embodiments, the coordinate points of the three-dimensional tooth model projected onto a two-dimensional plane are obtained, the target coordinate points matching the lesion area are identified, and the positions obtained by mapping the target coordinate points back onto the three-dimensional tooth model are taken as the lesion position. The above two methods are merely examples; the embodiments of the present disclosure do not limit the specific manner in which the lesion area is back-projected onto the three-dimensional tooth model to obtain the lesion position.
Specifically, after the lesion area is obtained, it can be back-projected onto the three-dimensional tooth model to obtain the lesion position, so that the lesion position can be presented on the three-dimensional model, making the presentation more intuitive.
The artificial-intelligence-based oral cavity detection solution provided by the embodiments of the present disclosure obtains the two-dimensional texture map corresponding to the three-dimensional tooth model, processes the two-dimensional texture map based on the pre-trained oral cavity detection model to obtain the lesion area, and back-projects the lesion area onto the three-dimensional tooth model to obtain the lesion position. With this technical solution, detecting on the two-dimensional texture map corresponding to the three-dimensional tooth model during oral examination allows the information of the entire set of teeth to be observed completely, avoiding the problem of lesion areas being missed because of a poorly chosen observation angle on the three-dimensional model. Identifying the lesion area with a pre-trained oral cavity detection model greatly improves recognition accuracy; in addition, the lesion position can be presented on the three-dimensional model, making the presentation more intuitive and further improving detection efficiency and effectiveness in oral examination scenarios.
Based on the description of the above embodiments, detection on a two-dimensional texture map obtained by mesh parameterization allows the information of the entire set of teeth to be observed completely, avoiding the problem of lesion areas being missed because of a poorly chosen observation angle on the three-dimensional model; deep learning technology identifies lesions efficiently and accurately; back-projecting the recognition results on the parameterized two-dimensional texture map onto the three-dimensional mesh presents the lesion position on the three-dimensional model, making the presentation more intuitive; and finally an unoccluded observation angle is computed from the lesion position, enabling dentists to quickly locate and tally lesion positions and effectively avoid omissions.
It can be understood that a two-stage deep learning network with detection and segmentation stages can be used: the tooth regions are first detected in the two-dimensional texture map, and lesion segmentation is then performed within each tooth region, which further improves recognition accuracy for small lesion areas such as caries. How to train the oral cavity detection model, and how to associate the observation angle of a screenshot with the observation angle of the three-dimensional tooth model so that dentists can locate and tally lesion positions even more quickly, are described in detail below with reference to Figure 2.
Specifically, Figure 2 is a schematic flowchart of another artificial-intelligence-based oral cavity detection method provided by an embodiment of the present disclosure. On the basis of the above embodiments, this embodiment further optimizes the method. As shown in Figure 2, the method includes:
Step 201: Perform mesh parameterization on the three-dimensional tooth model to obtain a two-dimensional texture map.
Specifically, multiple three-dimensional vertices of the three-dimensional tooth model are obtained; multiple triangles are formed from these vertices; a system of linear equations is established from the included angle between two adjacent edges of each triangle and the ratio of those two edges; the system is solved under preset initial conditions to obtain two-dimensional parameters; and the two-dimensional texture map is constructed from the two-dimensional parameters.
Specifically, the three-dimensional tooth model is composed of multiple three-dimensional vertices. A mapping is obtained from the three-dimensional vertex coordinates on the tooth model to two-dimensional plane coordinates in the parameter domain, thereby flattening the three-dimensional tooth model into a two-dimensional texture map.
To solve quickly and to reduce the deformation and distortion that parameterization causes to the triangles of the three-dimensional tooth model, a system of linear equations is established from the included angle between two adjacent edges of each triangle and the ratio of those two edges; once initial conditions are given, the system can be solved quickly to obtain the parameterized two-dimensional texture map.
The preset initial conditions are constraints on the solution: for example, the three included angles of each triangle should remain as unchanged as possible, the unfolded two-dimensional texture map should be as compact as possible, or the unfolding should take as little time as possible; they are set according to the needs of the application scenario. In the embodiments of the present disclosure, the angles of the triangular faces are kept as consistent as possible, so that the teeth on the unfolded two-dimensional texture map are not distorted in a way that would affect subsequent recognition, further improving the accuracy of oral cavity detection.
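The per-triangle constraint described above can be illustrated with a small sketch. Treating 2D points in the parameter plane as complex numbers, fixing the included angle at one vertex and the ratio of the two adjacent edge lengths determines the third vertex from the other two; the full method assembles one such relation per triangle into the linear system mentioned above. The function below is a simplified, hypothetical illustration of a single constraint, not the patent's actual solver.

```python
import cmath
import math

def place_third_vertex(p1: complex, p2: complex, angle: float, ratio: float) -> complex:
    """Place a triangle's third vertex in the 2D parameter plane.

    p1, p2 -- already-parameterized vertices (2D points as complex numbers)
    angle  -- included angle at p1 between edges p1->p2 and p1->p3, in radians,
              measured on the original 3D triangle
    ratio  -- |p1 - p3| / |p1 - p2|, the edge-length ratio on the 3D triangle

    Requiring the flattened triangle to reproduce this angle and ratio is the
    kind of shape-preserving relation from which the linear system is built.
    """
    return p1 + (p2 - p1) * ratio * cmath.exp(1j * angle)

# Example: an equilateral 3D triangle (60-degree angle, edge ratio 1) flattens
# to an equilateral 2D triangle over the pinned edge (0,0)-(1,0).
p3 = place_third_vertex(0 + 0j, 1 + 0j, math.pi / 3, 1.0)
```

Because angle and ratio are preserved exactly, the flattened triangle is similar to the 3D one; the real parameterization can only satisfy all such constraints in a least-squares sense.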
Illustratively, Figure 3a is a schematic diagram of a three-dimensional tooth model provided by an embodiment of the present disclosure and shows the original three-dimensional tooth model; the two-dimensional texture map obtained for it is shown in the schematic diagram of Figure 3b.
Step 202: Obtain the two-dimensional texture map sample corresponding to each three-dimensional tooth model sample, where each two-dimensional texture map sample includes marked tooth areas and marked lesion areas; input the two-dimensional texture map samples into the target detection network for detection to obtain training tooth areas; and input the training tooth areas into the semantic segmentation network for identification to obtain training lesion areas.
Step 203: Adjust the network parameters of the target detection network based on a first comparison result between the training tooth areas and the marked tooth areas, and adjust the network parameters of the semantic segmentation network based on a second comparison result between the training lesion areas and the marked lesion areas, to obtain the oral cavity detection model.
In the embodiments of the present disclosure, the annotated two-dimensional texture maps constitute the training samples. The target detection network identifies the positions of the teeth in the two-dimensional texture map to obtain each tooth region; the semantic segmentation network classifies every pixel in the image corresponding to a tooth region and determines whether that pixel belongs to a lesion area.
In the embodiments of the present disclosure, the network parameters of the target detection network are adjusted based on the first comparison result between the training tooth areas and the marked tooth areas, and the network parameters of the semantic segmentation network are adjusted based on the second comparison result between the training lesion areas and the marked lesion areas; the two networks are continuously optimized until the loss value falls below a preset threshold, yielding the oral cavity detection model.
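The optimize-until-the-loss-is-below-threshold training described above can be sketched framework-agnostically. Here `det_step` and `seg_step` stand in for one parameter-update pass of the target detection network and the semantic segmentation network respectively; the names and the toy quadratic losses are illustrative assumptions, not the patent's actual networks or loss functions.

```python
def train_two_stage(det_step, seg_step, threshold, max_iters=10000):
    """Alternate updates of the two networks until the combined loss is small.

    det_step() -- updates the detector against the marked tooth areas and
                  returns the resulting detection loss (first comparison)
    seg_step() -- updates the segmenter against the marked lesion areas and
                  returns the resulting segmentation loss (second comparison)
    """
    loss = float("inf")
    for _ in range(max_iters):
        loss = det_step() + seg_step()
        if loss < threshold:
            break
    return loss

# Toy stand-ins: each "network" is a single scalar parameter driven toward
# zero by gradient descent on a quadratic loss.
state = {"det": 5.0, "seg": -3.0}

def det_step():
    state["det"] -= 0.2 * state["det"]      # gradient step on loss = det**2
    return state["det"] ** 2

def seg_step():
    state["seg"] -= 0.2 * state["seg"]      # gradient step on loss = seg**2
    return state["seg"] ** 2

final_loss = train_two_stage(det_step, seg_step, threshold=1e-8)
```

The stopping rule mirrors the text: training ends once the combined loss drops below the preset threshold.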
Step 204: Detect the two-dimensional texture map based on the target detection network in the oral cavity detection model to obtain multiple tooth regions; identify each tooth region based on the semantic segmentation network in the oral cavity detection model to obtain an identification result for each tooth region; and determine the lesion area based on the identification result of each tooth region.
Specifically, the target detection network detects each individual tooth region, the semantic segmentation network identifies the lesion area within that tooth region, and the lesion areas within the individual tooth regions are overlaid back onto the original two-dimensional texture map.
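This detect, crop, segment, and overlay flow can be sketched as follows. `segment_crop` stands in for the semantic segmentation network and `boxes` for the detector's output; the function names, the (x0, y0, x1, y1) box format, and the nested-list "image" are illustrative assumptions, not the patent's actual interfaces.

```python
def detect_then_segment(image, boxes, segment_crop):
    """Two-stage pipeline sketch: segmentation runs only inside each detected
    tooth box, and the per-box lesion masks are pasted back into a full-size
    mask at the box's offset in the original texture map.
    """
    h, w = len(image), len(image[0])
    full_mask = [[0] * w for _ in range(h)]
    for (x0, y0, x1, y1) in boxes:
        crop = [row[x0:x1] for row in image[y0:y1]]
        crop_mask = segment_crop(crop)          # per-pixel lesion labels
        for dy, row in enumerate(crop_mask):    # overlay at the box offset
            for dx, label in enumerate(row):
                if label:
                    full_mask[y0 + dy][x0 + dx] = label
    return full_mask

# Toy example: one 2x2 box on a 4x4 "texture map"; the stand-in segmenter
# marks any pixel brighter than 5 as lesion.
image = [[0, 0, 0, 0],
         [0, 9, 0, 0],
         [0, 0, 9, 0],
         [0, 0, 0, 0]]
mask = detect_then_segment(
    image, [(1, 1, 3, 3)],
    lambda crop: [[1 if v > 5 else 0 for v in row] for row in crop])
```

Running the segmenter only on the cropped boxes is what lets the second stage focus on a single tooth region, as the surrounding text explains.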
Illustratively, taking the two-dimensional texture map in Figure 3b as an example, Figure 3c is a schematic diagram of another two-dimensional texture map provided by an embodiment of the present disclosure and shows the tooth regions obtained by target detection; each tooth region is then identified based on the semantic segmentation network in the oral cavity detection model, for example the lesion area 11 shown in Figure 3d.
Specifically, the small boxes in Figure 3c are obtained through deep-learning-based target detection. The boxes select the tooth regions so that attention can be focused on a single region (that is, the subsequent segmentation network processes only the image data inside a single box); compared with segmenting the entire unfolded map, this identifies small lesion areas such as caries more precisely.
In addition, once the deep learning network has detected an individual tooth region, identification is performed only within that region. Compared with identifying on the entire image, smaller lesion areas can be recognized, much as with human vision: with a fixed observation window, magnifying a local region makes it easier to observe the small objects within it, further improving the precision of oral cavity detection.
Step 205: Obtain the two-dimensional coordinate points corresponding to the lesion area, look up the corresponding three-dimensional coordinate points in a pre-stored dimensional coordinate mapping table, and determine the lesion position on the three-dimensional tooth model based on the three-dimensional coordinate points.
Specifically, when the three-dimensional tooth model is unfolded into the two-dimensional texture map, the one-to-one correspondence between the vertices of the three-dimensional tooth model and the coordinates of the two-dimensional points is saved; when a point on the two-dimensional texture map is identified as a lesion, the coordinates of the corresponding three-dimensional point can be looked up.
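A minimal sketch of this stored correspondence: during flattening, each 3D vertex's 2D (texture) coordinate is recorded in a table, so any point flagged as a lesion on the texture map can be mapped straight back to a point on the mesh. All coordinate values and names below are hypothetical illustrations.

```python
# Hypothetical mapping table saved at flattening time:
# 2D texture coordinate -> corresponding 3D mesh vertex.
uv_to_vertex = {
    (0.25, 0.40): (12.1, -3.0, 5.7),
    (0.26, 0.41): (12.3, -3.1, 5.8),
}

def back_project(lesion_uvs):
    """Return the 3D points of every lesion point that has a stored mapping."""
    return [uv_to_vertex[uv] for uv in lesion_uvs if uv in uv_to_vertex]

# One lesion point has a stored correspondence, the other does not.
lesion_points = back_project([(0.25, 0.40), (0.90, 0.90)])
```

In practice a lesion pixel may fall inside a triangle rather than exactly on a recorded vertex, in which case the nearest or interpolated vertex correspondence would be used; the dictionary lookup here only illustrates the table query.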
Illustratively, taking the two-dimensional texture map in Figure 3d as an example, the lesion area 11 is back-projected onto the three-dimensional tooth model; as shown in Figure 4, the lesion position 21 is displayed on the three-dimensional tooth model.
Step 206: Calculate an observation angle based on the lesion position, and obtain and display a target picture based on the observation angle.
The observation angle refers to a viewing angle from which the lesion position can be observed without occlusion.
In the embodiments of the present disclosure, there are many ways to calculate the observation angle based on the lesion position. In some specific embodiments, the average normal of the lesion position is obtained; two adjacent tooth positions are determined based on the lesion position, and a target straight-line direction is determined from those two adjacent tooth positions; a target included angle is determined from the average normal and the target straight-line direction; and the observation angle is determined from the target included angle and a preset angle threshold.
Specifically, an unoccluded observation angle is calculated from the lesion position, and a target picture of the specific lesion position is obtained and displayed based on that observation angle, enabling dentists to quickly locate and tally lesion positions and effectively avoid omissions.
Step 207: Associate the observation angle with the viewing angle of the three-dimensional tooth model.
Illustratively, Figure 5 is a schematic diagram of yet another three-dimensional tooth model provided by an embodiment of the present disclosure. The tooth position number of each tooth can be obtained by automated or manual marking, for example numbers 1-16 in Figure 5. Assume the lesion position is position A in Figure 5; the average normal of the triangular faces in the observation region at position A is the direction of arrow 1 (it can be seen that observing position A from this direction would be blocked by tooth No. 5).
Therefore, based on the angle between the average normal of the triangular faces in the observation region (arrow 1) and the z-axis direction, it is first decided whether to observe from the top (the z-axis direction in Figure 5) or from the side (the x-y plane directions in Figure 5); there is no occlusion from the top. The two tooth positions adjacent to position A are obtained (tooth positions No. 3 and No. 4 in Figure 5), and the included angle α between the average normal of the observation region (arrow 1) and the direction of the straight line connecting the two tooth positions (arrow 2) is calculated. When the angle α is smaller than a preset threshold, occlusion exists, and the observation angle is forced to the direction of the perpendicular bisector of the two tooth positions (the direction of arrow 3 in Figure 5), thereby avoiding occlusion of the observation angle.
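The viewing-angle logic of this example can be sketched as follows. The specific angle thresholds and the exact fallback direction are illustrative assumptions (the text only states that a preset threshold is used): `avg_normal` corresponds to arrow 1, the inter-tooth line to arrow 2, and the horizontal perpendicular to arrow 3.

```python
import math

def _norm(v):
    return math.sqrt(sum(c * c for c in v))

def _angle(u, v):
    cosang = sum(a * b for a, b in zip(u, v)) / (_norm(u) * _norm(v))
    return math.acos(max(-1.0, min(1.0, cosang)))  # clamp for float safety

def viewing_direction(avg_normal, tooth_a, tooth_b,
                      top_threshold=math.radians(30),
                      occlusion_threshold=math.radians(45)):
    """Pick an unoccluded viewing direction for a lesion.

    avg_normal       -- average normal of the observed patch (arrow 1)
    tooth_a, tooth_b -- positions of the two teeth adjacent to the lesion
    Thresholds are illustrative, not values from the patent.
    """
    z_axis = (0.0, 0.0, 1.0)
    if _angle(avg_normal, z_axis) < top_threshold:
        return z_axis                    # view from the top: never occluded
    line = tuple(b - a for a, b in zip(tooth_a, tooth_b))   # arrow 2
    if _angle(avg_normal, line) < occlusion_threshold:
        # Occluded by a neighbouring tooth: fall back to the horizontal
        # direction perpendicular to the inter-tooth line (arrow 3).
        return (-line[1], line[0], 0.0)
    return avg_normal                    # the normal direction is already clear

# Normal nearly parallel to the line joining teeth at (0,0,0) and (1,0,0):
# the occlusion branch fires and the perpendicular fallback is chosen.
view = viewing_direction((1.0, 0.1, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

A normal pointing mostly along z instead takes the top-view branch, matching the "no occlusion from the top" observation in the text.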
In this way, an unoccluded observation angle is calculated from the lesion position and displayed by automatically obtaining a target picture; the observation angle of the target picture is associated with the observation angle of the three-dimensional tooth model, and the observation region of the specific lesion position is provided, enabling dentists to quickly locate and tally lesion positions and effectively avoid omissions.
The artificial-intelligence-based oral cavity detection solution provided by the embodiments of the present disclosure performs mesh parameterization on the three-dimensional tooth model to obtain the two-dimensional texture map; obtains the two-dimensional texture map sample corresponding to each three-dimensional tooth model sample, where each sample includes marked tooth areas and marked lesion areas; inputs the samples into the target detection network for detection to obtain training tooth areas, and inputs the training tooth areas into the semantic segmentation network for identification to obtain training lesion areas; adjusts the network parameters of the target detection network based on the first comparison result between the training tooth areas and the marked tooth areas, and the network parameters of the semantic segmentation network based on the second comparison result between the training lesion areas and the marked lesion areas, to obtain the oral cavity detection model; detects the two-dimensional texture map with the target detection network to obtain multiple tooth regions, identifies each tooth region with the semantic segmentation network to obtain an identification result for each, and determines the lesion area from those results; obtains the two-dimensional coordinate points corresponding to the lesion area, looks up the corresponding three-dimensional coordinate points in a pre-stored dimensional coordinate mapping table, and determines the lesion position on the three-dimensional tooth model; obtains the observation region and its average normal from the lesion position, determines two adjacent tooth positions and, from them, the target straight-line direction, determines the target included angle from the average normal and the target straight-line direction, determines the observation angle from the target included angle and the preset angle threshold, and associates the observation angle with the viewing angle of the three-dimensional tooth model.
With this technical solution, detecting on the two-dimensional texture map corresponding to the three-dimensional tooth model during oral examination allows the information of the entire set of teeth to be observed completely, avoiding the problem of lesion areas being missed because of a poorly chosen observation angle on the three-dimensional model.
Detecting the tooth regions in the two-dimensional texture map and then segmenting lesions within those regions further improves recognition accuracy for small lesion areas such as caries; associating the observation angle of the screenshot with that of the three-dimensional tooth model further helps dentists quickly locate and tally lesion positions; the lesion position can be presented on the three-dimensional model, making the presentation more intuitive; and finally a target picture corresponding to the specific lesion position is provided, enabling dentists to quickly locate and tally lesion positions, effectively avoid omissions, and further improve detection efficiency and effectiveness in oral examination scenarios.
Figure 6 is a schematic structural diagram of an artificial-intelligence-based oral cavity detection apparatus provided by an embodiment of the present disclosure. The apparatus can be implemented in software and/or hardware and can generally be integrated in an electronic device. As shown in Figure 6, the apparatus includes:
a picture acquisition module 301, configured to obtain the two-dimensional texture map corresponding to the three-dimensional tooth model;
a processing module 302, configured to process the two-dimensional texture map based on the pre-trained oral cavity detection model to obtain the lesion area;
a back-projection module 303, configured to back-project the lesion area onto the three-dimensional tooth model to obtain the lesion position on the three-dimensional tooth model.
Optionally, the picture acquisition module 301 is specifically configured to:
perform mesh parameterization on the three-dimensional tooth model to obtain the two-dimensional texture map.
Optionally, the processing module 302 is specifically configured to:
detect the two-dimensional texture map based on the target detection network in the oral cavity detection model to obtain multiple tooth regions;
identify each tooth region based on the semantic segmentation network in the oral cavity detection model to obtain an identification result for each tooth region;
determine the lesion area based on the identification result of each tooth region.
Optionally, the back-projection module 303 is specifically configured to:
obtain the two-dimensional coordinate points corresponding to the lesion area;
obtain the three-dimensional coordinate points corresponding to the two-dimensional coordinate points based on the pre-stored dimensional coordinate mapping table;
determine the lesion position on the three-dimensional tooth model based on the three-dimensional coordinate points.
Optionally, the apparatus further includes:
a calculation module 304, configured to calculate the observation angle based on the lesion position;
an acquisition and display module 305, configured to obtain and display a target picture based on the observation angle.
Optionally, the calculation module 304 is specifically configured to:
obtain the observation region based on the lesion position, and obtain the average normal of the observation region;
determine two adjacent tooth positions based on the lesion position, and determine the target straight-line direction based on the two adjacent tooth positions;
determine the target included angle based on the average normal and the target straight-line direction;
determine the observation angle based on the target included angle and the preset angle threshold.
Optionally, the apparatus further includes:
a sample acquisition module, configured to obtain the two-dimensional texture map sample corresponding to each three-dimensional tooth model sample, where each two-dimensional texture map sample includes marked tooth areas and marked lesion areas;
a detection module, configured to input the two-dimensional texture map samples into the target detection network for detection to obtain training tooth areas;
an identification module, configured to input the training tooth areas into the semantic segmentation network for identification to obtain training lesion areas;
a training module, configured to adjust the network parameters of the target detection network based on the first comparison result between the training tooth areas and the marked tooth areas, and to adjust the network parameters of the semantic segmentation network based on the second comparison result between the training lesion areas and the marked lesion areas, to obtain the oral cavity detection model.
Optionally, the apparatus further includes an association module, configured to:
associate the observation angle with the viewing angle of the three-dimensional tooth model.
The artificial-intelligence-based oral cavity detection apparatus provided by the embodiments of the present disclosure can execute the artificial-intelligence-based oral cavity detection method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the executed method.
Embodiments of the present disclosure also provide a computer program product, including a computer program/instructions which, when executed by a processor, implement the artificial-intelligence-based oral cavity detection method provided by any embodiment of the present disclosure.
Figure 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. Referring to Figure 7, it shows a schematic structural diagram of an electronic device 400 suitable for implementing an embodiment of the present disclosure. The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in Figure 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
如图7所示,电子设备400可以包括处理装置(例如中央处理器、图形处理器等)401,其可以根据存储在只读存储器(ROM)402中的程序或者从存储装置408加载到随机访问存储器(RAM)403中的程序而执行各种适当的动作和处理。在RAM 403中,还存储有电子设备400操作所需的各种程序和数据。处理装置401、ROM 402以及RAM 403通过总线404彼此相连。输入/输出(I/O)接口405也连接至总线404。As shown in FIG. 7 , the electronic device 400 may include a processing device (eg, central processing unit, graphics processor, etc.) 401 , which may be loaded into a random access device according to a program stored in a read-only memory (ROM) 402 or from a storage device 408 . The program in the memory (RAM) 403 executes various appropriate actions and processes. In the RAM 403, various programs and data required for the operation of the electronic device 400 are also stored. The processing device 401, ROM 402 and RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
通常，以下装置可以连接至I/O接口405：包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置406；包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置407；包括例如磁带、硬盘等的存储装置408；以及通信装置409。通信装置409可以允许电子设备400与其他设备进行无线或有线通信以交换数据。虽然图7示出了具有各种装置的电子设备400，但是应理解的是，并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 7 illustrates the electronic device 400 with various devices, it should be understood that it is not required to implement or provide all of the illustrated devices. More or fewer devices may alternatively be implemented or provided.
特别地，根据本公开的实施例，上文参考流程图描述的过程可以被实现为计算机软件程序。例如，本公开的实施例包括一种计算机程序产品，其包括承载在非暂态计算机可读介质上的计算机程序，该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中，该计算机程序可以通过通信装置409从网络上被下载和安装，或者从存储装置408被安装，或者从ROM 402被安装。在该计算机程序被处理装置401执行时，执行本公开实施例的基于人工智能的口腔检测方法中限定的上述功能。In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the methods illustrated in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via the communication device 409, or installed from the storage device 408, or installed from the ROM 402. When the computer program is executed by the processing device 401, the above-described functions defined in the artificial intelligence-based oral cavity detection method of the embodiments of the present disclosure are performed.
需要说明的是，本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件，或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于：具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中，计算机可读存储介质可以是任何包含或存储程序的有形介质，该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中，计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质，该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输，包括但不限于：电线、光缆、RF(射频)等等，或者上述的任意合适的组合。It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted using any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
在一些实施方式中，客户端、服务器可以利用诸如HTTP(HyperText Transfer Protocol，超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信，并且可以与任意形式或介质的数字数据通信(例如，通信网络)互连。通信网络的示例包括局域网(“LAN”)、广域网(“WAN”)、网际网(例如，互联网)以及端对端网络(例如，ad hoc端对端网络)，以及任何当前已知或未来研发的网络。In some embodiments, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。The above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
上述计算机可读介质承载有一个或者多个程序，当上述一个或者多个程序被该电子设备执行时，使得该电子设备：在视频的播放过程中，接收用户的信息展示触发操作；获取所述视频关联的至少两个目标信息；在所述视频的播放页面的信息展示区域中展示所述至少两个目标信息中的第一目标信息，其中，所述信息展示区域的尺寸小于所述播放页面的尺寸；接收用户的第一切换触发操作，将所述信息展示区域中展示的所述第一目标信息切换为所述至少两个目标信息中的第二目标信息。The above computer-readable medium carries one or more programs. When the one or more programs are executed by the electronic device, the electronic device is caused to: receive a user's information display triggering operation during the playback of a video; obtain at least two pieces of target information associated with the video; display first target information among the at least two pieces of target information in an information display area of a playback page of the video, where the size of the information display area is smaller than the size of the playback page; and receive a user's first switching triggering operation, and switch the first target information displayed in the information display area to second target information among the at least two pieces of target information.
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码，上述程序设计语言包括但不限于面向对象的程序设计语言——诸如Java、Smalltalk、C++，还包括常规的过程式程序设计语言——诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中，远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)——连接到用户计算机，或者，可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。The computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
附图中的流程图和框图，图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上，流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分，该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意，在有些作为替换的实现中，方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如，两个接连地表示的方框实际上可以基本并行地执行，它们有时也可以按相反的顺序执行，这依所涉及的功能而定。也要注意的是，框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合，可以用执行规定的功能或操作的专用的基于硬件的系统来实现，或者可以用专用硬件与计算机指令的组合来实现。The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现，也可以通过硬件的方式来实现。其中，单元的名称在某种情况下并不构成对该单元本身的限定。The units described in the embodiments of the present disclosure may be implemented in software or in hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如，非限制性地，可以使用的示范类型的硬件逻辑部件包括：现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上系统(SOC)、复杂可编程逻辑设备(CPLD)等等。The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
在本公开的上下文中，机器可读介质可以是有形的介质，其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备，或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
根据本公开的一个或多个实施例,本公开提供了一种电子设备,包括:According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device, including:
处理器;processor;
用于存储所述处理器可执行指令的存储器;memory for storing instructions executable by the processor;
所述处理器,用于从所述存储器中读取所述可执行指令,并执行所述指令以实现如本公开提供的任一所述的基于人工智能的口腔检测方法。The processor is configured to read the executable instructions from the memory and execute the instructions to implement any of the artificial intelligence-based oral cavity detection methods provided by this disclosure.
根据本公开的一个或多个实施例，本公开提供了一种计算机可读存储介质，所述存储介质存储有计算机程序，所述计算机程序用于执行如本公开提供的任一所述的基于人工智能的口腔检测方法。According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium, the storage medium storing a computer program, where the computer program is used to perform any of the artificial intelligence-based oral cavity detection methods provided by the present disclosure.
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解，本公开中所涉及的公开范围，并不限于上述技术特征的特定组合而成的技术方案，同时也应涵盖在不脱离上述公开构思的情况下，由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。The above description is merely a description of the preferred embodiments of the present disclosure and of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。Furthermore, although operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or performed in a sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.
工业实用性Industrial applicability
本公开提供的基于人工智能的口腔检测方案，避免三维牙齿模型由于观测视角选择不当导致病变区域被遗漏的问题，并且基于预先训练的口腔检测模型识别病变区域，能够大大提高病变区域的识别精度，另外能够以三维模型的方式呈现病变位置，使呈现效果更加直观，进一步提高口腔检测场景下的检测效率和效果。The artificial intelligence-based oral cavity detection solution provided by the present disclosure avoids the problem of a lesion area of the three-dimensional tooth model being missed due to an improperly chosen observation angle, and identifies lesion areas based on a pre-trained oral cavity detection model, which can greatly improve the identification accuracy of lesion areas. In addition, the lesion location can be presented in the form of a three-dimensional model, making the presentation more intuitive and further improving the detection efficiency and effect in oral cavity detection scenarios.
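The pipeline summarized above (unwrap the 3D tooth model to a 2D texture map, detect lesion regions in 2D, then back-project the hits onto the 3D model) can be sketched numerically. Everything below is an illustrative assumption: the toy "unwrapping" and the mock detector merely stand in for the mesh parameterization and the trained oral cavity detection model described in the disclosure, and all names are hypothetical.

```python
# Hypothetical sketch of the disclosed pipeline, not the actual implementation.

def build_uv_mapping(vertices_3d):
    """Toy UV parameterization: project each 3D vertex to 2D and keep a
    2D -> 3D lookup table (the 'dimension coordinate mapping relationship
    table' of the disclosure) for later back-projection."""
    uv_to_xyz = {}
    for x, y, z in vertices_3d:
        uv = (round(x, 3), round(y, 3))  # drop depth as a stand-in for unwrapping
        uv_to_xyz[uv] = (x, y, z)
    return uv_to_xyz

def detect_lesions(uv_points, model):
    """Run a (mock) detection model over 2D points; keep positives."""
    return [uv for uv in uv_points if model(uv)]

def back_project(lesion_uv, uv_to_xyz):
    """Map detected 2D lesion points back onto the 3D model."""
    return [uv_to_xyz[uv] for uv in lesion_uv if uv in uv_to_xyz]

if __name__ == "__main__":
    teeth = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.2), (2.0, 0.5, 0.9)]
    mapping = build_uv_mapping(teeth)
    mock_model = lambda uv: uv[0] >= 1.0   # pretend x >= 1 is "lesion"
    lesions_2d = detect_lesions(mapping.keys(), mock_model)
    print(back_project(lesions_2d, mapping))
```

Because the 2D-to-3D lookup table is built during unwrapping, back-projection is a plain dictionary lookup rather than a geometric search, which is what makes the final localization step cheap.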

Claims (11)

  1. 一种基于人工智能的口腔检测方法，其特征在于，包括：An artificial intelligence-based oral cavity detection method, characterized by comprising:
    获取三维牙齿模型对应的二维纹理图;Obtain the two-dimensional texture map corresponding to the three-dimensional tooth model;
    基于预先训练的口腔检测模型对所述二维纹理图进行处理,得到病变区域;The two-dimensional texture map is processed based on the pre-trained oral cavity detection model to obtain the lesion area;
    将所述病变区域反投影至所述三维牙齿模型,得到所述三维牙齿模型的病变位置。The lesion area is back-projected to the three-dimensional tooth model to obtain the lesion position of the three-dimensional tooth model.
  2. 根据权利要求1所述的基于人工智能的口腔检测方法,其特征在于,所述获取三维牙齿模型对应的二维纹理图,包括:The oral cavity detection method based on artificial intelligence according to claim 1, characterized in that said obtaining the two-dimensional texture map corresponding to the three-dimensional tooth model includes:
    对所述三维牙齿模型进行网络参数化处理,得到所述二维纹理图。Perform network parameterization processing on the three-dimensional tooth model to obtain the two-dimensional texture map.
  3. 根据权利要求1所述的基于人工智能的口腔检测方法,其特征在于,所述基于预先训练的口腔检测模型对所述二维纹理图进行检测,得到病变区域,包括:The oral cavity detection method based on artificial intelligence according to claim 1, characterized in that the pre-trained oral cavity detection model detects the two-dimensional texture map to obtain the lesion area, including:
    基于所述口腔检测模型中的目标检测网络对所述二维纹理图进行检测,得到多个牙齿区域;Detect the two-dimensional texture map based on the target detection network in the oral cavity detection model to obtain multiple tooth regions;
    基于所述口腔检测模型中的语义分割网络对每个所述牙齿区域进行识别,得到每个所述牙齿区域的识别结果;Identify each tooth area based on the semantic segmentation network in the oral cavity detection model, and obtain the identification result of each tooth area;
    基于每个所述牙齿区域的识别结果确定所述病变区域。The diseased area is determined based on the identification result of each of the tooth areas.
  4. 根据权利要求1所述的基于人工智能的口腔检测方法,其特征在于,所述将所述病变区域反投影至所述三维牙齿模型,得到所述三维牙齿模型的病变位置,包括:The oral cavity detection method based on artificial intelligence according to claim 1, characterized in that said back-projecting the lesion area to the three-dimensional tooth model to obtain the lesion position of the three-dimensional tooth model includes:
    获取所述病变区域对应的二维坐标点;Obtain the two-dimensional coordinate points corresponding to the lesion area;
    基于预先存储的维度坐标映射关系表获取与所述二维坐标点对应的三维坐标点;Obtain the three-dimensional coordinate point corresponding to the two-dimensional coordinate point based on the pre-stored dimensional coordinate mapping relationship table;
    基于所述三维坐标点确定所述三维牙齿模型的病变位置。 The lesion location of the three-dimensional tooth model is determined based on the three-dimensional coordinate points.
  5. 根据权利要求1所述的基于人工智能的口腔检测方法,其特征在于,还包括:The oral cavity detection method based on artificial intelligence according to claim 1, further comprising:
    基于所述病变位置计算观测视角,并基于所述观测视角获取目标图片并显示。An observation angle is calculated based on the lesion location, and a target image is obtained and displayed based on the observation angle.
  6. 根据权利要求5所述的基于人工智能的口腔检测方法,其特征在于,基于所述病变位置计算观测视角,包括:The oral cavity detection method based on artificial intelligence according to claim 5, characterized in that calculating the observation angle based on the lesion position includes:
    获取所述病变位置的平均法向;Obtain the average normal direction of the lesion location;
    基于所述病变位置确定两个相邻牙齿位置,并基于所述两个相邻牙齿位置确定目标直线方向;Determine the positions of two adjacent teeth based on the location of the lesion, and determine the target straight line direction based on the positions of the two adjacent teeth;
    基于所述平均法向和所述目标直线方向确定目标夹角;Determine the target angle based on the average normal direction and the target straight line direction;
    基于所述目标夹角和预设的角度阈值确定所述观测视角。The observation angle is determined based on the target angle and a preset angle threshold.
  7. 根据权利要求1所述的基于人工智能的口腔检测方法,其特征在于,在所述基于预先训练的口腔检测模型对所述二维纹理图进行检测,得到病变区域之前,还包括:The oral cavity detection method based on artificial intelligence according to claim 1, characterized in that, before the oral cavity detection model based on pre-training detects the two-dimensional texture map to obtain the lesion area, it also includes:
    获取各个三维牙齿模型样本对应的二维纹理图样本;其中,所述二维纹理图样本中包括标记牙齿区域和标记病变区域;Obtain the two-dimensional texture map sample corresponding to each three-dimensional tooth model sample; wherein the two-dimensional texture map sample includes a marked tooth area and a marked lesion area;
    将所述二维纹理图样本输入目标检测网络进行检测,得到训练牙齿区域;Input the two-dimensional texture map sample into the target detection network for detection to obtain the training tooth area;
    将所述训练牙齿区域输入语义分割网络进行识别,得到训练病变区域;Input the training tooth area into the semantic segmentation network for recognition, and obtain the training lesion area;
    基于所述训练牙齿区域和所述标记牙齿区域的第一对比结果调整所述目标检测网络的网络参数、以及所述训练病变区域和所述标记病变区域的第二对比结果调整所述语义分割网络的网络参数，得到所述口腔检测模型。Adjusting the network parameters of the target detection network based on a first comparison result between the training tooth area and the marked tooth area, and adjusting the network parameters of the semantic segmentation network based on a second comparison result between the training lesion area and the marked lesion area, to obtain the oral cavity detection model.
  8. 根据权利要求1所述的基于人工智能的口腔检测方法,其特征在于,还包括: The oral cavity detection method based on artificial intelligence according to claim 1, further comprising:
    将所述观测视角与所述三维牙齿模型的视角进行关联处理。The observation angle is associated with the angle of view of the three-dimensional tooth model.
  9. 一种基于人工智能的口腔检测装置，其特征在于，包括：An artificial intelligence-based oral cavity detection apparatus, characterized by comprising:
    获取图片模块,用于获取三维牙齿模型对应的二维纹理图;The image acquisition module is used to obtain the two-dimensional texture map corresponding to the three-dimensional tooth model;
    处理模块,用于基于预先训练的口腔检测模型对所述二维纹理图进行处理,得到病变区域;A processing module, used to process the two-dimensional texture map based on a pre-trained oral cavity detection model to obtain the lesion area;
    反投影模块,用于将所述病变区域反投影至所述三维牙齿模型,得到所述三维牙齿模型的病变位置。A back-projection module is used to back-project the lesion area to the three-dimensional tooth model to obtain the lesion position of the three-dimensional tooth model.
  10. 一种电子设备,其特征在于,所述电子设备包括:An electronic device, characterized in that the electronic device includes:
    处理器;processor;
    用于存储所述处理器可执行指令的存储器;memory for storing instructions executable by the processor;
    所述处理器,用于从所述存储器中读取所述可执行指令,并执行所述指令以实现上述权利要求1-8中任一所述的基于人工智能的口腔检测方法。The processor is configured to read the executable instructions from the memory and execute the instructions to implement the artificial intelligence-based oral cavity detection method described in any one of claims 1-8.
  11. 一种计算机可读存储介质,其特征在于,所述存储介质存储有计算机程序,所述计算机程序用于执行上述权利要求1-8中任一所述的基于人工智能的口腔检测方法。 A computer-readable storage medium, characterized in that the storage medium stores a computer program, and the computer program is used to execute the oral cavity detection method based on artificial intelligence according to any one of the above claims 1-8.
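As a numerical illustration of the observation-angle step recited in claim 6, the angle can be derived from the averaged lesion normal and the straight line through the two adjacent tooth positions. The sketch below uses plain vector math; the thresholding rule and the default threshold value are assumptions, since the claim leaves both unspecified.

```python
# Hedged sketch of claim 6's geometry: average normal, inter-tooth axis,
# angle between them, and a preset angle threshold. Illustrative only.
import math

def average_normal(normals):
    """Average a list of 3D unit normals and renormalize the result."""
    n = len(normals)
    avg = tuple(sum(v[i] for v in normals) / n for i in range(3))
    length = math.sqrt(sum(c * c for c in avg)) or 1.0
    return tuple(c / length for c in avg)

def angle_between(u, v):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    cos = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.degrees(math.acos(cos))

def observation_angle(lesion_normals, tooth_a, tooth_b, threshold_deg=30.0):
    """Return the averaged lesion normal and a target angle clamped to the
    preset threshold (an assumed rule; the claim does not fix the formula)."""
    normal = average_normal(lesion_normals)
    axis = tuple(b - a for a, b in zip(tooth_a, tooth_b))
    theta = angle_between(normal, axis)
    return normal, min(theta, threshold_deg)
```

Clamping against the threshold keeps the rendered view from swinging to a grazing angle at which the lesion would be foreshortened; the actual rule used in the disclosure may differ.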
PCT/CN2023/087798 2022-04-12 2023-04-12 Artificial intelligence-based oral cavity examination method and apparatus, electronic device, and medium WO2023198101A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210383604.3 2022-04-12
CN202210383604.3A CN114782345A (en) 2022-04-12 2022-04-12 Oral cavity detection method and device based on artificial intelligence, electronic equipment and medium

Publications (1)

Publication Number Publication Date
WO2023198101A1 true WO2023198101A1 (en) 2023-10-19

Family

ID=82428259

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/087798 WO2023198101A1 (en) 2022-04-12 2023-04-12 Artificial intelligence-based oral cavity examination method and apparatus, electronic device, and medium

Country Status (2)

Country Link
CN (1) CN114782345A (en)
WO (1) WO2023198101A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782345A (en) * 2022-04-12 2022-07-22 先临三维科技股份有限公司 Oral cavity detection method and device based on artificial intelligence, electronic equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080014558A1 (en) * 2006-07-14 2008-01-17 Align Technology, Inc. System and method for automatic detection of dental features
CN111414809A (en) * 2020-02-28 2020-07-14 上海牙典软件科技有限公司 Three-dimensional graph recognition method, device, equipment and storage medium
CN112515787A (en) * 2020-11-05 2021-03-19 上海牙典软件科技有限公司 Three-dimensional dental data analysis method
CN113262070A (en) * 2021-05-18 2021-08-17 苏州苏穗绿梦生物技术有限公司 Dental surgery equipment positioning method and system based on image recognition and storage medium
CN113425440A (en) * 2021-06-24 2021-09-24 广州华视光学科技有限公司 System and method for detecting caries and position thereof based on artificial intelligence
CN114782345A (en) * 2022-04-12 2022-07-22 先临三维科技股份有限公司 Oral cavity detection method and device based on artificial intelligence, electronic equipment and medium

Also Published As

Publication number Publication date
CN114782345A (en) 2022-07-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23787744

Country of ref document: EP

Kind code of ref document: A1