WO2020063986A1 - Three-dimensional model generation method, apparatus, device, and storage medium - Google Patents
- Publication number
- WO2020063986A1 · PCT/CN2019/109202 · CN2019109202W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- region
- interest
- image
- category
- mask
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/529—Depth or shape recovery from texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/653—Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30036—Dental; Teeth
Definitions
- The present application relates to the field of three-dimensional scanning technology, and in particular to a three-dimensional model generation method, apparatus, device, and storage medium.
- A three-dimensional model represents the three-dimensional structure and shape of a real object. Typically, a depth image of the real object is scanned, and the depth image is then processed by a three-dimensional modeling tool to construct a three-dimensional model of the object. 3D models have broad application prospects in medicine, architecture, and video games.
- A method for generating a three-dimensional model includes: acquiring a scanned texture image and a corresponding depth image; processing the texture image through a pre-trained mask region convolutional neural network to determine regions of interest on the texture image, together with category information and mask information for each region of interest; updating the depth image according to the category information and mask information of the regions of interest; and constructing a corresponding three-dimensional model from the updated depth image.
- The category information of the region of interest includes a category value of the region of interest corresponding to each preset category.
- The mask information of the region of interest includes a mask image of the region of interest corresponding to each preset category.
- The step of updating the depth image according to the category information and mask information of the region of interest includes: determining a region category of the region of interest from the category information of the region of interest; when the region category is the positive sample category, obtaining the mask image corresponding to the region category from the mask information of the region of interest and determining it as the mask image of the region of interest; and updating the depth image according to the mask image of the region of interest.
- The step of updating the depth image according to the category information and mask information of the region of interest further includes: when the region category is the negative sample category, clearing the depth information corresponding to the region of interest in the depth image.
- The step of updating the depth image according to the category information and mask information of the region of interest further includes: obtaining the remaining image region of the texture image other than the regions of interest; and clearing the depth information corresponding to the remaining image region in the depth image.
- Before the step of acquiring the scanned texture image and the corresponding depth image, the method further includes: acquiring a collected sample image set, and labeling region categories on the sample images in the sample image set to obtain image regions of the preset categories in the sample images; inputting the sample images into the mask region convolutional neural network to determine sample regions of interest on the sample images, together with category information and mask information for each sample region of interest; and training the mask region convolutional neural network according to the image regions of the preset categories in the sample images and the category information and mask information of the sample regions of interest.
- The step of determining the sample regions of interest on the sample image and the category information and mask information of each sample region of interest includes: extracting a feature map of the sample image; determining candidate regions on the feature map and selecting the sample regions of interest from the candidate regions; and processing the sample regions of interest through a preset region feature aggregation method and a preset fully connected convolutional neural network to generate the category information and mask information of the sample regions of interest.
- A three-dimensional model generating apparatus includes an image acquisition module configured to acquire a scanned texture image and a corresponding depth image, and a texture image processing module configured to process the texture image through a pre-trained mask region convolutional neural network to determine regions of interest on the texture image, together with category information and mask information for each region of interest;
- a depth image update module configured to update the depth image according to the category information and mask information; and
- a model building module configured to construct a corresponding three-dimensional model based on the updated depth image.
- a computer device includes a memory and a processor.
- the memory stores a computer program
- the processor implements the following steps when executing the computer program:
- a computer-readable storage medium stores a computer program thereon.
- the following steps are performed: acquiring a scanned texture image and a corresponding depth image; processing the texture image through a pre-trained mask region convolutional neural network to determine regions of interest on the texture image, together with category information and mask information for each region of interest; updating the depth image according to the category information and mask information of the regions of interest; and constructing a corresponding three-dimensional model from the updated depth image.
- The above three-dimensional model generation method, apparatus, device, and storage medium extract regions of interest from the texture image through a trained mask region convolutional neural network, update the depth image corresponding to the texture image according to the category information and mask information of each region of interest, and construct a corresponding three-dimensional model from the updated depth image, thereby improving the removal of noise data from the depth image and improving the accuracy of the three-dimensional model.
- FIG. 1 is a schematic flowchart of a three-dimensional model generation method according to an embodiment
- FIG. 2 is a schematic flowchart of a training process of a masked region convolutional neural network in a three-dimensional model generation method according to an embodiment
- FIG. 3 is a structural block diagram of a three-dimensional model generating apparatus according to an embodiment.
- FIG. 4 is an internal structure diagram of a computer device in one embodiment.
- A method for generating a three-dimensional model includes the following steps:
- Step 102 Obtain the scanned texture image and the corresponding depth image.
- a texture image scanned by a three-dimensional scanning device and a depth image corresponding to the texture image are acquired.
- the texture image records the texture information of the scan target
- the depth image records the depth information corresponding to each pixel point on the texture image.
- step 104 the texture image is processed by a pre-trained mask region convolutional neural network to determine a region of interest on the texture image, and category information and mask information of each region of interest.
- The mask region convolutional neural network (Mask R-CNN) is an evolution of the region convolutional neural network (R-CNN) and is an image object detection and segmentation algorithm.
- R-CNN: region convolutional neural network
- ROI: region of interest
- A mask region convolutional neural network is trained in advance; the texture image is input into the network, and the regions of interest on the texture image, together with the category information and mask information corresponding to each region of interest, are output.
- the category information corresponding to the region of interest includes a category value of the region of interest relative to each preset category, and whether the region of interest belongs to the preset category may be determined based on the category value of the region of interest relative to the preset category.
- the mask information of the region of interest includes a mask image of the region of interest relative to each preset category, and the mask image of the region of interest relative to each preset category is a binary mask image.
- the preset categories are divided into a positive sample category and a negative sample category.
- the region of interest belonging to the positive sample category contains data useful for building the three-dimensional model
- the region of interest belonging to the negative sample category contains noise data that easily interferes with the three-dimensional model, so the accuracy of the three-dimensional model is improved by processing regions of interest of the different preset categories accordingly.
- Step 106 Update the depth image according to the category information and mask information of the region of interest.
- the preset category to which the region of interest belongs can be determined according to the category value of the region of interest relative to each preset category.
- the preset category to which the region of interest belongs is the region category of the region of interest.
- the category value of the region of interest corresponding to each preset category is 0 or 1.
- When the category value of the region of interest corresponding to any preset category is 0, the region of interest is considered not to belong to that preset category.
- When the category value of the region of interest corresponding to any preset category is 1, the region of interest is considered to belong to that preset category, so that the region category of the region of interest is accurately determined.
- The mask image of the region of interest corresponding to the region category is obtained from the mask information of the region of interest and is determined as the mask image of the region of interest.
- The depth information corresponding to the region of interest on the depth image is then updated according to the region category and the mask image of the region of interest, removing the depth information of regions of interest belonging to the negative sample category and retaining the depth information of regions of interest belonging to the positive sample category.
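The update described in this step can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation; the ROI representation with `box`, `category`, and `mask` fields is an assumption made here for clarity:

```python
import numpy as np

def update_depth(depth, rois, positive_categories):
    """Update a depth image from classified regions of interest.

    Each ROI is a dict with (illustrative) fields:
      "box"      -- (y0, y1, x0, x1) bounds of the region in image coordinates
      "category" -- the region category selected from the category values
      "mask"     -- binary mask image for that category, sized to the box
    """
    out = depth.copy()
    for roi in rois:
        y0, y1, x0, x1 = roi["box"]
        if roi["category"] in positive_categories:
            # Positive sample: mask operation -- multiply mask values
            # with the depth values of the corresponding region.
            out[y0:y1, x0:x1] *= roi["mask"]
        else:
            # Negative sample: clear the depth information for this region.
            out[y0:y1, x0:x1] = 0.0
    return out
```

Depth outside all regions of interest is left untouched here; the embodiment clears it in a separate step.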
- Step 108 Construct a corresponding three-dimensional model according to the updated depth image.
- A 3D model is constructed from the updated depth image through a preset 3D reconstruction algorithm.
- The choice of 3D reconstruction algorithm is not specifically limited here.
- The texture image is processed by the trained mask region convolutional neural network to determine the regions of interest on the texture image, together with the category information and mask information of each region of interest, from which the region category and mask image of each region of interest are determined.
- The depth image is then processed according to the region category and mask image of each region of interest, improving the removal of noise data and the retention of valid data in the depth image, and improving the accuracy of 3D model reconstruction.
- When the region category of the region of interest is a positive sample category, a mask operation is performed on the mask image of the region of interest and the depth image to obtain an updated depth image,
- thereby effectively retaining the depth information corresponding to the positive sample category in the depth image.
- the masking operation may be a multiplication of a mask value in a mask image and a depth value of a corresponding region of the depth image.
- When the region category is a negative sample category, the depth information corresponding to the region of interest in the depth image is cleared, thereby effectively removing the depth information corresponding to the negative sample category from the depth image.
- the depth image region corresponding to the region of interest in the depth image may be determined first, and then the depth value of the depth image region may be removed.
- the mask value in the mask image of the region of interest can also be set to zero, and then the updated mask image and the depth image are masked.
- The remaining image region of the texture image other than all the regions of interest is obtained, and the depth information corresponding to that remaining region in the depth image is cleared, thereby effectively preventing the corresponding depth information from interfering with the construction of the 3D model.
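A minimal sketch of clearing the depth information outside all regions of interest, assuming axis-aligned ROI boxes and NumPy arrays (names are illustrative, not from the patent):

```python
import numpy as np

def clear_outside_rois(depth, roi_boxes):
    """Zero the depth image everywhere except inside the regions of interest.

    roi_boxes: iterable of (y0, y1, x0, x1) bounds in image coordinates.
    """
    keep = np.zeros(depth.shape, dtype=bool)
    for y0, y1, x0, x1 in roi_boxes:
        keep[y0:y1, x0:x1] = True   # union of all ROI areas
    # Depth is kept where any ROI covers the pixel, cleared elsewhere.
    return np.where(keep, depth, 0.0)
```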
- the scanned texture image and depth image are a tooth texture image and a tooth depth image, respectively.
- the positive sample category includes the gum category and the tooth category
- the negative sample category includes the tongue category and the buccal side category, which makes it easier to remove the tongue and buccal side image data that interfere with the 3D model construction process and improves the accuracy of the 3D model.
- The regions of interest on the tooth texture image, together with the category information and mask information corresponding to each region of interest, are obtained.
- the category information corresponding to the region of interest includes the category values of the region of interest relative to the gum category, tooth category, tongue category, and buccal side category
- the mask information corresponding to the region of interest includes the mask images of the region of interest relative to the gum category, tooth category, tongue category, and buccal side category, respectively
- According to the category values of the region of interest relative to the gum category, tooth category, tongue category, and buccal side category, the region category to which the region of interest belongs is determined, and the mask image of the region of interest relative to that region category is set as the mask image of the region of interest, so that the category of the region of interest is determined more accurately.
- When the region category to which the region of interest belongs is the gum category, the mask image of the region of interest relative to the gum category is set as the mask image of the region of interest.
- Since the tongue category and the buccal side category belong to the negative sample category, the depth information corresponding to regions of interest of those categories in the depth image is cleared, while for regions of interest of the positive sample categories the mask image of the region of interest and the depth image are masked together, thereby effectively retaining the depth information corresponding to the positive sample categories in the depth image and effectively removing the depth information corresponding to the negative sample categories.
- a training process of a masked region convolutional neural network in a three-dimensional model generation method including the following steps:
- Step 202 Acquire the collected sample image set, mark the sample images in the sample image set with an area type, and obtain an image region of a preset category in the sample image.
- the sample images in the sample image set are texture images that belong to the same object as the scan target.
- the sample images in the sample image set may be area-labeled to obtain image regions of a preset category in the sample image.
- the labelme image annotation tool can be used to label regions of the sample images.
- the preset categories are divided into a positive sample category and a negative sample category, thereby improving the training effect of the mask region convolutional neural network.
- Dental texture images of people of different ages may be collected; for example, the age range of 0-80 years is divided into 8 segments of 10 years each, and within each age segment texture images are collected with a 1:1 male-to-female ratio.
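The collection scheme above can be expressed as a small quota plan. This is an illustrative sketch; `total_per_segment` is an assumed parameter, not from the patent:

```python
def sampling_plan(total_per_segment=100):
    """Eight 10-year age segments over 0-80, each split 1:1 male/female."""
    plan = []
    for lo in range(0, 80, 10):
        plan.append({
            "age_range": (lo, lo + 10),
            "male": total_per_segment // 2,
            "female": total_per_segment // 2,
        })
    return plan
```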
- Step 204 Input the sample image into the mask region convolutional neural network to determine the sample regions of interest on the sample image, together with the category information and mask information of each sample region of interest.
- the sample image is processed by a masked region convolutional neural network to obtain a sample region of interest on the sample image, and category information and mask information of each sample region of interest.
- Step 206 Train the masked area convolutional neural network according to the image area of the preset category in the sample image, and the category information and mask information of the sample area of interest.
- According to the category value of the sample region of interest relative to each preset category, the preset category to which the sample region of interest belongs can be determined.
- The sample region of interest can be compared with the image region of the preset category in the sample image to obtain the error of the mask region convolutional neural network training process,
- and the network parameters of the mask region convolutional neural network are adjusted according to that error.
- The network parameters are adjusted multiple times in this way to achieve supervised training of the mask region convolutional neural network.
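One way to compute such a training error is a per-pixel binary cross-entropy between the predicted mask and the labeled region. This is a hedged sketch; the patent does not specify the loss function, so the choice of binary cross-entropy here is an assumption:

```python
import numpy as np

def mask_training_error(pred_mask, labeled_region, eps=1e-7):
    """Per-pixel binary cross-entropy between the predicted mask for a
    sample region of interest (probabilities in [0, 1]) and the labeled
    binary image region of the preset category."""
    p = np.clip(pred_mask, eps, 1 - eps)   # avoid log(0)
    y = labeled_region.astype(float)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))
```

Lower error means the predicted mask agrees better with the labeled region; the network parameters would be adjusted to reduce this error over many iterations.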
- An image processing operation is performed on the sample image; the operation includes brightness consistency processing and de-averaging (mean subtraction) processing to improve the training effect of the mask region convolutional neural network.
- A feature map of the sample image is extracted through the deep residual neural network (ResNet) in the mask region convolutional neural network. Candidate regions of preset sizes are set at each feature point of the feature map, and the candidate regions are input to the region proposal network (RPN) in the mask region convolutional neural network, where binary classification and bounding-box regression are performed to filter the candidate regions and obtain the sample regions of interest of the sample image.
- the region of interest is processed by a preset region feature aggregation method to determine the category information of the region of interest, and the fully connected convolutional neural network operation in the mask region convolutional neural network is performed to generate mask information of the region of interest.
- the region feature aggregation method is the ROIAlign method of the mask region convolutional neural network.
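ROIAlign replaces the quantized pooling of earlier R-CNN variants with bilinear sampling at continuous coordinates. A minimal single-channel sketch follows; it is illustrative only (real implementations average several sample points per output cell, and `box` here is an assumed (y0, x0, y1, x1) tuple in feature-map coordinates):

```python
import numpy as np

def roi_align(feature, box, out_size=2):
    """Bilinearly sample a feature map at evenly spaced points inside a
    continuous ROI box, with no coordinate quantization."""
    y0, x0, y1, x1 = box
    out = np.zeros((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            # Sample at the center of each output cell.
            y = y0 + (i + 0.5) * (y1 - y0) / out_size
            x = x0 + (j + 0.5) * (x1 - x0) / out_size
            yi, xi = int(np.floor(y)), int(np.floor(x))
            dy, dx = y - yi, x - xi
            yi2 = min(yi + 1, feature.shape[0] - 1)
            xi2 = min(xi + 1, feature.shape[1] - 1)
            # Bilinear interpolation of the four neighboring feature values.
            out[i, j] = (feature[yi, xi] * (1 - dy) * (1 - dx)
                         + feature[yi2, xi] * dy * (1 - dx)
                         + feature[yi, xi2] * (1 - dy) * dx
                         + feature[yi2, xi2] * dy * dx)
    return out
```

Because no rounding occurs, the pooled features stay aligned with the ROI, which is what makes per-pixel mask prediction accurate.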
- Although the steps in the flowcharts of FIGS. 1-2 are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 1-2 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their execution order is likewise not necessarily sequential, and they may be performed in turn or alternately with at least part of another step or of another step's sub-steps or stages.
- a three-dimensional model generating device 300 including: an image acquisition module 302, a texture image processing module 304, a depth image update module 306, and a model construction module 308, where:
- the image acquisition module 302 is configured to acquire a scanned texture image and a corresponding depth image.
- a texture image scanned by a three-dimensional scanning device and a depth image corresponding to the texture image are acquired.
- the texture image records the texture information of the scan target
- the depth image records the depth information corresponding to each pixel point on the texture image.
- the texture image processing module 304 is configured to process the texture image through a pre-trained mask region convolutional neural network to determine a region of interest on the texture image, and category information and mask information of each region of interest.
- The mask region convolutional neural network (Mask R-CNN) is an evolution of the region convolutional neural network (R-CNN) and is an image object detection and segmentation algorithm.
- R-CNN: region convolutional neural network
- ROI: region of interest
- A mask region convolutional neural network is trained in advance; the texture image is input into the network, and the regions of interest on the texture image, together with the category information and mask information corresponding to each region of interest, are output.
- the category information corresponding to the region of interest includes a category value of the region of interest relative to each preset category, and whether the region of interest belongs to the preset category may be determined based on the category value of the region of interest relative to the preset category.
- the mask information of the region of interest includes a mask image of the region of interest relative to each preset category, and the mask image of the region of interest relative to each preset category is a binary mask image.
- the preset categories are divided into a positive sample category and a negative sample category.
- the region of interest belonging to the positive sample category contains data useful for building the three-dimensional model
- the region of interest belonging to the negative sample category contains noise data that easily interferes with the three-dimensional model, so the accuracy of the three-dimensional model is improved by processing regions of interest of the different preset categories accordingly.
- the depth image update module 306 is configured to update the depth image according to the category information and mask information of the region of interest.
- the preset category to which the region of interest belongs can be determined according to the category value of the region of interest relative to each preset category.
- the preset category to which the region of interest belongs is the region category of the region of interest.
- the category value of the region of interest corresponding to each preset category is 0 or 1.
- When the category value of the region of interest corresponding to any preset category is 0, the region of interest is considered not to belong to that preset category.
- When the category value of the region of interest corresponding to any preset category is 1, the region of interest is considered to belong to that preset category, so that the region category of the region of interest is accurately determined.
- The mask image of the region of interest corresponding to the region category is obtained from the mask information of the region of interest and is determined as the mask image of the region of interest.
- The depth information corresponding to the region of interest on the depth image is then updated according to the region category and the mask image of the region of interest, removing the depth information of regions of interest belonging to the negative sample category and retaining the depth information of regions of interest belonging to the positive sample category.
- the model construction module 308 is configured to construct a corresponding three-dimensional model according to the updated depth image.
- A 3D model is constructed from the updated depth image through a preset 3D reconstruction algorithm.
- The choice of 3D reconstruction algorithm is not specifically limited here.
- The texture image is processed by the trained mask region convolutional neural network to determine the regions of interest on the texture image, together with the category information and mask information of each region of interest, from which the region category and mask image of each region of interest are determined.
- The depth image is then processed according to the region category and mask image of each region of interest, improving the removal of noise data and the retention of valid data in the depth image, and improving the accuracy of 3D model reconstruction.
- When the region category of the region of interest is a positive sample category, a mask operation is performed on the mask image of the region of interest and the depth image to obtain an updated depth image,
- thereby effectively retaining the depth information corresponding to the positive sample category in the depth image.
- the masking operation may be a multiplication of a mask value in a mask image and a depth value of a corresponding region of the depth image.
- When the region category is a negative sample category, the depth information corresponding to the region of interest in the depth image is cleared, thereby effectively removing the depth information corresponding to the negative sample category from the depth image.
- the depth image region corresponding to the region of interest in the depth image may be determined first, and then the depth value of the depth image region may be removed.
- the mask value in the mask image of the region of interest can also be set to zero, and then the updated mask image and the depth image are masked.
- The remaining image region of the texture image other than all the regions of interest is obtained, and the depth information corresponding to that remaining region in the depth image is cleared, thereby effectively preventing the corresponding depth information from interfering with the construction of the 3D model.
- the scanned texture image and depth image are a tooth texture image and a tooth depth image, respectively.
- the positive sample category includes the gum category and the tooth category
- the negative sample category includes the tongue category and the buccal side category, which makes it easier to remove the tongue and buccal side image data that interfere with the 3D model construction process and improves the accuracy of the 3D model.
- The regions of interest on the tooth texture image, together with the category information and mask information corresponding to each region of interest, are obtained.
- the category information corresponding to the region of interest includes the category values of the region of interest relative to the gum category, tooth category, tongue category, and buccal side category
- the mask information corresponding to the region of interest includes the mask images of the region of interest relative to the gum category, tooth category, tongue category, and buccal side category, respectively
- According to the category values of the region of interest relative to the gum category, tooth category, tongue category, and buccal side category, the region category to which the region of interest belongs is determined, and the mask image of the region of interest relative to that region category is set as the mask image of the region of interest, so that the category of the region of interest is determined more accurately.
- When the region category to which the region of interest belongs is the gum category, the mask image of the region of interest relative to the gum category is set as the mask image of the region of interest.
- Since the tongue category and the buccal side category belong to the negative sample category, the depth information corresponding to regions of interest of those categories in the depth image is cleared, while for regions of interest of the positive sample categories the mask image of the region of interest and the depth image are masked together, thereby effectively retaining the depth information corresponding to the positive sample categories in the depth image and effectively removing the depth information corresponding to the negative sample categories.
- the collected sample image set is acquired, and the sample images in the sample image set are labeled with region types to obtain the image regions of the preset categories in each sample image; the sample images are input to the mask region convolutional neural network to determine the sample regions of interest on each sample image, together with the category information and mask information of each sample region of interest; the mask region convolutional neural network is then trained according to the image regions of the preset categories in the sample images and the category information and mask information of the sample regions of interest, which improves the training effect by training the network in a supervised manner.
- the sample images in the sample image set are texture images of the same type of object as the scan target.
- the sample images in the sample image set may be area-labeled to obtain image regions of a preset category in the sample image.
- the sample regions of interest can be compared with the image regions of the preset categories in the sample image to obtain the error of the mask region convolutional neural network during training, and the network parameters of the mask region convolutional neural network are adjusted according to this error; repeating this adjustment many times achieves supervised training of the mask region convolutional neural network.
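One plausible form of the training error mentioned above is an overlap measure between a predicted sample-region mask and the labeled region of the preset category; the IoU-based error below is a sketch under that assumption, not the patent's actual loss function:

```python
def mask_error(predicted_mask, labeled_region):
    """1 - IoU between a predicted sample-ROI mask and the labeled region.

    Both arguments are same-shape 2-D binary lists; 0.0 means a perfect
    match, 1.0 means no overlap at all.
    """
    inter = union = 0
    for prow, lrow in zip(predicted_mask, labeled_region):
        for p, l in zip(prow, lrow):
            inter += 1 if (p and l) else 0
            union += 1 if (p or l) else 0
    # empty union (both masks all zero) counts as a perfect match
    return 1.0 - (inter / union if union else 1.0)
```

An identical prediction gives an error of 0.0; a completely disjoint one gives 1.0, and the network parameters would be adjusted to drive this error down.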
- an image processing operation is performed on the sample image, where the image processing operation includes brightness-consistency processing and de-averaging processing, so as to improve the training effect of the mask region convolutional neural network.
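A minimal sketch of these two preprocessing steps for one grayscale image; the target mean of 128 and the scaling rule are assumptions of this sketch, not values from the disclosure:

```python
def preprocess(image, target_mean=128.0):
    """Brightness-consistency then de-averaging for one grayscale image.

    Scales the image so its mean brightness equals `target_mean` (so all
    samples share one brightness level), then subtracts that mean so the
    result is zero-centred. Values are assumed to be 0-255 grayscale.
    """
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    scale = target_mean / mean if mean else 1.0
    # after scaling, the mean equals target_mean, so subtracting it zero-centres
    return [[p * scale - target_mean for p in row] for row in image]
```

An image `[[64, 192]]` already has mean 128, so only de-averaging applies and the result is `[[-64.0, 64.0]]`.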
- a feature map of the sample image is extracted through the deep residual neural network in the mask region convolutional neural network, and candidate regions of preset sizes are set at each point of the feature map; the candidate regions are input to the region proposal network in the mask region convolutional neural network, which performs binary classification and bounding-box regression to filter the candidate regions and obtain the sample regions of interest in the sample image.
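The enumeration of preset-size candidate regions at every feature-map point can be sketched as follows; the sizes are illustrative placeholders, and the subsequent binary classification and border regression (the region proposal network proper) are omitted:

```python
def candidate_regions(feat_h, feat_w, sizes=((8, 8), (16, 16))):
    """Enumerate candidate boxes of preset sizes centred on every feature point.

    Returns (cx, cy, w, h) tuples. The preset `sizes` here are illustrative,
    not the patent's actual anchor configuration; a region proposal network
    would then score each box (object / background) and regress its border.
    """
    return [(x, y, w, h)
            for y in range(feat_h)
            for x in range(feat_w)
            for (w, h) in sizes]
```

A 2x3 feature map with two preset sizes yields 2 x 3 x 2 = 12 candidate boxes before filtering.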
- the region of interest is processed by a preset region feature aggregation method to determine the category information of the region of interest, and a fully connected convolutional neural network operation in the mask region convolutional neural network is performed to generate the mask information of the region of interest.
- the region feature aggregation method is the ROIAlign method of the mask region convolutional neural network.
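ROIAlign's key ingredient is bilinear interpolation of the feature map at fractional coordinates, instead of rounding region coordinates to grid cells as ROI pooling does, so small regions keep their sub-pixel alignment. A simplified single-point sketch (row-major 2-D list, x across columns, both assumptions of this sketch):

```python
def bilinear_sample(feature, x, y):
    """Sample a 2-D feature map at a fractional (x, y) position."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(feature[0]) - 1)
    y1 = min(y0 + 1, len(feature) - 1)
    dx, dy = x - x0, y - y0
    # interpolate along x on the two bracketing rows, then along y
    top = feature[y0][x0] * (1 - dx) + feature[y0][x1] * dx
    bottom = feature[y1][x0] * (1 - dx) + feature[y1][x1] * dx
    return top * (1 - dy) + bottom * dy
```

Sampling the 2x2 map `[[0, 2], [4, 6]]` at (0.5, 0.5) yields 3.0, the average of the four corners.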
- dental texture images of people of different ages may be collected; for example, the 0-80 age range is divided into 8 segments of 10 years each, and texture images are collected for each age segment with a 1:1 male-to-female ratio.
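The ten-year age segmentation described above can be sketched as a simple bucketing function (the function name is an illustrative assumption):

```python
def age_segment(age):
    """Map an age in [0, 80) to one of 8 ten-year segments (0..7)."""
    if not 0 <= age < 80:
        raise ValueError("age outside the collected 0-80 range")
    return age // 10  # integer division picks the decade bucket
```

Sampling an equal number of male and female images per bucket then yields the balanced data set described above.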
- Each module in the above-mentioned three-dimensional model generating device may be implemented in whole or in part by software, hardware, or a combination thereof.
- the above modules may be embedded, in hardware form, in or independently of the processor in the computer device, or may be stored in software form in the memory of the computer device, so that the processor can call them and perform the operations corresponding to the above modules.
- a computer device is provided.
- the computer device may be a server, and its internal structure diagram may be as shown in FIG. 4.
- the computer device includes a processor, a memory, a network interface, and a database connected through a system bus.
- the processor of the computer device is configured to provide computing and control capabilities.
- the memory of the computer device includes a non-volatile storage medium and an internal memory.
- the non-volatile storage medium stores an operating system, a computer program, and a database.
- the internal memory provides an environment for running an operating system and computer programs in a non-volatile storage medium.
- the database of the computer device is configured to store the sample image set used to train the mask region convolutional neural network.
- the network interface of the computer device is configured to communicate with an external terminal through a network connection.
- the computer program is executed by a processor to implement a three-dimensional model generation method.
- FIG. 4 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; the specific computer device may include more or fewer components than shown in the figure, combine some components, or have a different arrangement of components.
- a computer device including a memory and a processor.
- the memory stores a computer program
- the processor implements the following steps when the computer program is executed:
- the texture image is processed through a pre-trained mask region convolutional neural network to determine the regions of interest on the texture image, together with the category information and mask information of each region of interest; the depth image is updated according to the category information and mask information of the regions of interest; and a corresponding three-dimensional model is constructed according to the updated depth image.
- the following steps are further implemented: determining the region category of the region of interest from the category information of the region of interest; when the region category is a positive sample category, acquiring, from the mask information of the region of interest, the mask image of the region category corresponding to the region of interest, and determining that mask image as the mask image of the region of interest; and updating the depth image according to the mask image of the region of interest.
- when the processor executes the computer program, the following steps are further implemented: when the region category is a negative sample category, the depth information corresponding to the region of interest in the depth image is cleared.
- when the processor executes the computer program, the following steps are further implemented: acquiring the remaining image regions in the texture image other than the regions of interest; and clearing the depth information corresponding to the remaining image regions in the depth image.
- when the processor executes the computer program, the following steps are further implemented: acquiring the collected sample image set, and labeling the sample images in the sample image set with region types to obtain the image regions of the preset categories in the sample images; inputting the sample images to the mask region convolutional neural network to determine the sample regions of interest on the sample images, together with the category information and mask information of each sample region of interest; and training the mask region convolutional neural network according to the image regions of the preset categories in the sample images and the category information and mask information of the sample regions of interest.
- when the processor executes the computer program, the following steps are further implemented: extracting a feature map of the sample image; determining candidate regions on the feature map and filtering out the sample regions of interest from the candidate regions; and processing the sample regions of interest through a preset region feature aggregation method and a preset fully connected convolutional neural network to generate the category information and mask information of the sample regions of interest.
- a computer-readable storage medium on which a computer program is stored.
- when the computer program is executed by a processor, the following steps are implemented: obtaining a scanned texture image and a corresponding depth image; processing the texture image through the mask region convolutional neural network to determine the regions of interest on the texture image, together with the category information and mask information of each region of interest; updating the depth image based on the category information and mask information of the regions of interest; and building a corresponding three-dimensional model based on the updated depth image.
- the following steps are further implemented: determining the region category of the region of interest from the category information of the region of interest; when the region category is a positive sample category, acquiring, from the mask information of the region of interest, the mask image of the region category corresponding to the region of interest, and determining that mask image as the mask image of the region of interest; and updating the depth image according to the mask image of the region of interest.
- the following steps are further implemented: when the region category is a negative sample category, the depth information corresponding to the region of interest in the depth image is cleared.
- the following steps are further implemented: acquiring the remaining image regions in the texture image except the region of interest; and clearing the depth information corresponding to the remaining image regions in the depth image.
- the following steps are further implemented: acquiring the collected sample image set, and labeling the sample images in the sample image set with region types to obtain the image regions of the preset categories in the sample images; inputting the sample images to the mask region convolutional neural network to determine the sample regions of interest on the sample images, together with the category information and mask information of each sample region of interest; and training the mask region convolutional neural network according to the image regions of the preset categories in the sample images and the category information and mask information of the sample regions of interest.
- the following steps are further implemented: extracting a feature map of the sample image; determining candidate regions on the feature map and filtering out the sample regions of interest from the candidate regions; and processing the sample regions of interest through a preset region feature aggregation method and a preset fully connected convolutional neural network to generate the category information and mask information of the sample regions of interest.
- Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory can include random access memory (RAM) or external cache memory.
- RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
- the solution provided by the embodiment of the present invention can be applied to a three-dimensional scanning process.
- the embodiment of the present invention solves the technical problem of low accuracy of the three-dimensional model, improves the effect of removing noise data in the depth image, and improves the accuracy of the three-dimensional model.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Medical Informatics (AREA)
- Geometry (AREA)
- Databases & Information Systems (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Image Generation (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (10)
- A three-dimensional model generation method, comprising: acquiring a scanned texture image and a corresponding depth image; processing the texture image through a pre-trained mask region convolutional neural network to determine regions of interest on the texture image, together with the category information and mask information of each of the regions of interest; updating the depth image according to the category information and mask information of the regions of interest; and constructing a corresponding three-dimensional model according to the updated depth image.
- The method according to claim 1, wherein the category information of the region of interest comprises category values of the region of interest corresponding to each preset category, the mask information of the region of interest comprises mask images of the region of interest corresponding to each of the preset categories, and the preset categories comprise a positive sample category and a negative sample category.
- The method according to claim 2, wherein the step of updating the depth image according to the category information and mask information of the region of interest comprises: determining the region category of the region of interest from the category information of the region of interest; when the region category is the positive sample category, acquiring, from the mask information of the region of interest, the mask image of the region of interest corresponding to the region category, and determining that mask image as the mask image of the region of interest; and updating the depth image according to the mask image of the region of interest.
- The method according to claim 3, wherein the step of updating the depth image according to the category information and mask information of the region of interest further comprises: when the region category is the negative sample category, clearing the depth information corresponding to the region of interest in the depth image.
- The method according to claim 3, wherein the step of updating the depth image according to the category information and mask information of the region of interest further comprises: acquiring the remaining image regions in the texture image other than the regions of interest; and clearing the depth information corresponding to the remaining image regions in the depth image.
- The method according to claim 2, wherein before the step of acquiring the scanned texture image and the corresponding depth image, the method further comprises: acquiring a collected sample image set, and labeling the sample images in the sample image set with region types to obtain image regions of the preset categories in the sample images; inputting the sample images into the mask region convolutional neural network to determine sample regions of interest on the sample images, together with the category information and mask information of each of the sample regions of interest; and training the mask region convolutional neural network according to the image regions of the preset categories in the sample images and the category information and mask information of the sample regions of interest.
- The method according to claim 6, wherein the step of determining the sample regions of interest on the sample images, together with the category information and mask information of each of the sample regions of interest, comprises: extracting a feature map of the sample image; determining candidate regions on the feature map, and filtering out the sample regions of interest from the candidate regions; and processing the sample regions of interest through a preset region feature aggregation method and a preset fully connected convolutional neural network to generate the category information and mask information of the sample regions of interest.
- A three-dimensional model generation apparatus, comprising: an image acquisition module configured to acquire a scanned texture image and a corresponding depth image; a texture image processing module configured to process the texture image through a pre-trained mask region convolutional neural network to determine regions of interest on the texture image, together with the category information and mask information of each of the regions of interest; a depth image update module configured to update the depth image according to the category information and mask information of the regions of interest; and a model construction module configured to construct a corresponding three-dimensional model according to the updated depth image.
- A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
- A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the three-dimensional model generation method according to any one of claims 1 to 7.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19864127.6A EP3859685A4 (en) | 2018-09-30 | 2019-09-29 | METHOD AND DEVICE FOR CREATING A THREE-DIMENSIONAL MODEL, DEVICE AND STORAGE MEDIUM |
US17/280,934 US11978157B2 (en) | 2018-09-30 | 2019-09-29 | Method and apparatus for generating three-dimensional model, device, and storage medium |
AU2019345828A AU2019345828B2 (en) | 2018-09-30 | 2019-09-29 | Method and apparatus for generating three-dimensional model, device, and storage medium |
JP2021517414A JP2022501737A (ja) | 2018-09-30 | 2019-09-29 | Method, apparatus, device, and storage medium for generating a three-dimensional model. This application claims priority to Chinese patent application No. 201811160166.4, entitled "Method, apparatus, device, and storage medium for generating a three-dimensional model", filed with the China Patent Office on September 30, 2018, the entire contents of which are incorporated herein by reference. |
KR1020217012553A KR20210068077A (ko) | 2018-09-30 | 2019-09-29 | 3d모델 생성 방법, 장치, 기기 및 저장매체 |
CA3114650A CA3114650C (en) | 2018-09-30 | 2019-09-29 | Method and apparatus for generating three-dimensional model, device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811160166.4A CN109410318B (zh) | 2018-09-30 | 2018-09-30 | 三维模型生成方法、装置、设备和存储介质 |
CN201811160166.4 | 2018-09-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020063986A1 true WO2020063986A1 (zh) | 2020-04-02 |
Family
ID=65466672
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/109202 WO2020063986A1 (zh) | 2018-09-30 | 2019-09-29 | 三维模型生成方法、装置、设备和存储介质 |
Country Status (8)
Country | Link |
---|---|
US (1) | US11978157B2 (zh) |
EP (1) | EP3859685A4 (zh) |
JP (1) | JP2022501737A (zh) |
KR (1) | KR20210068077A (zh) |
CN (1) | CN109410318B (zh) |
AU (1) | AU2019345828B2 (zh) |
CA (1) | CA3114650C (zh) |
WO (1) | WO2020063986A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN112907726A (zh) * | 2021-01-25 | 2021-06-04 | 重庆金山医疗器械有限公司 | Image processing method, apparatus, device, and computer-readable storage medium |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN109410318B (zh) * | 2018-09-30 | 2020-09-08 | 先临三维科技股份有限公司 | Method, apparatus, device, and storage medium for generating a three-dimensional model |
US11238586B2 (en) | 2019-05-02 | 2022-02-01 | Align Technology, Inc. | Excess material removal using machine learning |
US20210118132A1 (en) * | 2019-10-18 | 2021-04-22 | Retrace Labs | Artificial Intelligence System For Orthodontic Measurement, Treatment Planning, And Risk Assessment |
- CN110251004B (zh) * | 2019-07-16 | 2022-03-11 | 深圳市杉川机器人有限公司 | Sweeping robot, cleaning method thereof, and computer-readable storage medium |
- CN110874851A (zh) * | 2019-10-25 | 2020-03-10 | 深圳奥比中光科技有限公司 | Method, apparatus, system, and readable storage medium for reconstructing a three-dimensional human body model |
- CN112069907A (zh) * | 2020-08-11 | 2020-12-11 | 盛视科技股份有限公司 | X-ray machine image recognition method, apparatus, and system based on instance segmentation |
- CN113723310B (zh) * | 2021-08-31 | 2023-09-05 | 平安科技(深圳)有限公司 | Neural-network-based image recognition method and related apparatus |
EP4266257A1 (en) * | 2022-04-21 | 2023-10-25 | Dassault Systèmes | 3d reconstruction from images |
- KR102547323B1 (ko) * | 2023-03-16 | 2023-06-22 | 이채영 | Apparatus and method for segmenting an object formed through 3D scanning |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN101388115A (zh) * | 2008-10-24 | 2009-03-18 | 北京航空航天大学 | Automatic depth image registration method combining texture information |
- CN102945565A (zh) * | 2012-10-18 | 2013-02-27 | 深圳大学 | Photorealistic three-dimensional reconstruction method and system for an object, and electronic device |
- CN107301662A (zh) * | 2017-06-30 | 2017-10-27 | 深圳大学 | Depth image compression and restoration method, apparatus, device, and storage medium |
- CN107358648A (zh) * | 2017-07-17 | 2017-11-17 | 中国科学技术大学 | Real-time, fully automatic, high-quality three-dimensional face reconstruction method based on a single face image |
- CN108154550A (zh) * | 2017-11-29 | 2018-06-12 | 深圳奥比中光科技有限公司 | Real-time three-dimensional face reconstruction method based on an RGBD camera |
- WO2018140596A2 (en) * | 2017-01-27 | 2018-08-02 | Arterys Inc. | Automated segmentation utilizing fully convolutional networks |
- CN108447082A (zh) * | 2018-03-15 | 2018-08-24 | 深圳市唯特视科技有限公司 | Three-dimensional target matching method based on jointly learned keypoint detectors |
- CN109410318A (zh) * | 2018-09-30 | 2019-03-01 | 先临三维科技股份有限公司 | Method, apparatus, device, and storage medium for generating a three-dimensional model |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- US20040104935A1 (en) * | 2001-01-26 | 2004-06-03 | Todd Williamson | Virtual reality immersion system |
- CN101686407A (zh) * | 2008-09-28 | 2010-03-31 | 新奥特(北京)视频技术有限公司 | Method and apparatus for acquiring sampling point information |
- US9282321B2 (en) * | 2011-02-17 | 2016-03-08 | Legend3D, Inc. | 3D model multi-reviewer system |
- US20140378810A1 (en) * | 2013-04-18 | 2014-12-25 | Digimarc Corporation | Physiologic data acquisition and analysis |
- DE102013110445B3 (de) | 2013-06-26 | 2014-12-18 | Grammer Ag | Vehicle seat and utility vehicle with a vehicle seat |
- US11337612B2 (en) | 2013-12-03 | 2022-05-24 | Children's National Medical Center | Method and system for wound assessment and management |
- CN106056071B (zh) * | 2016-05-30 | 2019-05-10 | 北京智芯原动科技有限公司 | Method and apparatus for detecting driver phone-use behavior |
- CN107491459A (zh) * | 2016-06-13 | 2017-12-19 | 阿里巴巴集团控股有限公司 | Method and apparatus for retrieving three-dimensional stereoscopic images |
- CN106096561B (zh) * | 2016-06-16 | 2020-02-07 | 重庆邮电大学 | Infrared pedestrian detection method based on deep-learning features of image patches |
- KR101840563B1 (ko) * | 2016-07-04 | 2018-03-20 | 한양대학교 에리카산학협력단 | Method and apparatus for three-dimensional face reconstruction using a neural network |
- US11529056B2 (en) * | 2016-10-18 | 2022-12-20 | Dentlytec G.P.L. Ltd. | Crosstalk reduction for intra-oral scanning using patterned light |
- CN106780512B (zh) * | 2016-11-30 | 2020-01-17 | 厦门美图之家科技有限公司 | Method, application, and computing device for image segmentation |
- CN106802138B (zh) * | 2017-02-24 | 2019-09-24 | 先临三维科技股份有限公司 | Three-dimensional scanning system and scanning method thereof |
- CN107644454B (zh) | 2017-08-25 | 2020-02-18 | 北京奇禹科技有限公司 | Image processing method and apparatus |
- WO2019061202A1 (en) * | 2017-09-28 | 2019-04-04 | Shenzhen United Imaging Healthcare Co., Ltd. | SYSTEM AND METHOD FOR PROCESSING COLON IMAGE DATA |
- CN108269300B (zh) * | 2017-10-31 | 2019-07-09 | 先临三维科技股份有限公司 | Method, apparatus, and system for reconstructing three-dimensional tooth data |
-
2018
- 2018-09-30 CN CN201811160166.4A patent/CN109410318B/zh active Active
-
2019
- 2019-09-29 AU AU2019345828A patent/AU2019345828B2/en active Active
- 2019-09-29 EP EP19864127.6A patent/EP3859685A4/en active Pending
- 2019-09-29 WO PCT/CN2019/109202 patent/WO2020063986A1/zh active Application Filing
- 2019-09-29 US US17/280,934 patent/US11978157B2/en active Active
- 2019-09-29 CA CA3114650A patent/CA3114650C/en active Active
- 2019-09-29 JP JP2021517414A patent/JP2022501737A/ja active Pending
- 2019-09-29 KR KR1020217012553A patent/KR20210068077A/ko not_active Application Discontinuation
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN101388115A (zh) * | 2008-10-24 | 2009-03-18 | 北京航空航天大学 | Automatic depth image registration method combining texture information |
- CN102945565A (zh) * | 2012-10-18 | 2013-02-27 | 深圳大学 | Photorealistic three-dimensional reconstruction method and system for an object, and electronic device |
- WO2018140596A2 (en) * | 2017-01-27 | 2018-08-02 | Arterys Inc. | Automated segmentation utilizing fully convolutional networks |
- CN107301662A (zh) * | 2017-06-30 | 2017-10-27 | 深圳大学 | Depth image compression and restoration method, apparatus, device, and storage medium |
- CN107358648A (zh) * | 2017-07-17 | 2017-11-17 | 中国科学技术大学 | Real-time, fully automatic, high-quality three-dimensional face reconstruction method based on a single face image |
- CN108154550A (zh) * | 2017-11-29 | 2018-06-12 | 深圳奥比中光科技有限公司 | Real-time three-dimensional face reconstruction method based on an RGBD camera |
- CN108447082A (zh) * | 2018-03-15 | 2018-08-24 | 深圳市唯特视科技有限公司 | Three-dimensional target matching method based on jointly learned keypoint detectors |
- CN109410318A (zh) * | 2018-09-30 | 2019-03-01 | 先临三维科技股份有限公司 | Method, apparatus, device, and storage medium for generating a three-dimensional model |
Non-Patent Citations (1)
Title |
---|
HE, KAIMING ET AL.: "Mask R-CNN", 25 December 2017 (2017-12-25), pages 2980 - 2988, XP055756845, ISSN: 2380-7504 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN112907726A (zh) * | 2021-01-25 | 2021-06-04 | 重庆金山医疗器械有限公司 | Image processing method, apparatus, device, and computer-readable storage medium |
- CN112907726B (zh) * | 2021-01-25 | 2022-09-20 | 重庆金山医疗技术研究院有限公司 | Image processing method, apparatus, device, and computer-readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109410318B (zh) | 2020-09-08 |
CA3114650A1 (en) | 2020-04-02 |
US11978157B2 (en) | 2024-05-07 |
CN109410318A (zh) | 2019-03-01 |
JP2022501737A (ja) | 2022-01-06 |
US20210375043A1 (en) | 2021-12-02 |
AU2019345828A1 (en) | 2021-05-13 |
EP3859685A1 (en) | 2021-08-04 |
AU2019345828B2 (en) | 2022-10-13 |
CA3114650C (en) | 2023-08-22 |
EP3859685A4 (en) | 2021-12-08 |
KR20210068077A (ko) | 2021-06-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2020063986A1 (zh) | Method, apparatus, device, and storage medium for generating a three-dimensional model | |
US11972572B2 (en) | Intraoral scanning system with excess material removal based on machine learning | |
US20240087097A1 (en) | Domain specific image quality assessment | |
US11995839B2 (en) | Automated detection, generation and/or correction of dental features in digital models | |
US20220218449A1 (en) | Dental cad automation using deep learning | |
- DE102019106666A1 (de) | Dental CAD automation using deep learning | |
- BR112020012292A2 (pt) | Automated 3D root shape prediction using deep learning methods | |
- CN108205806B (zh) | Automatic analysis method for the three-dimensional craniofacial structure in cone-beam CT images | |
- CN107909622A (zh) | Model generation method, scan planning method for medical imaging, and medical imaging system | |
US20230196570A1 (en) | Computer-implemented method and system for predicting orthodontic results based on landmark detection | |
- CN114424246A (zh) | Method, system, and computer-readable storage medium for registering intraoral measurements | |
- CN115641323A (zh) | Method and apparatus for automatic annotation of medical images | |
Goutham et al. | Automatic localization of landmarks in cephalometric images via modified U-Net | |
- CN110795623B (zh) | Image enhancement training method, system therefor, and computer-readable storage medium | |
Chen et al. | Detection of Various Dental Conditions on Dental Panoramic Radiography Using Faster R-CNN | |
- CN113313722B (zh) | Interactive annotation method for tooth root images | |
- CN114708237A (zh) | Detection algorithm for hair health condition | |
- CN115439409A (zh) | Method and apparatus for identifying tooth types | |
- CN114972026A (zh) | Image processing method and storage medium | |
- CN116385474B (zh) | Deep-learning-based tooth scan model segmentation method, apparatus, and electronic device | |
- CN113610956B (zh) | Method, apparatus, and related device for feature-matching implants in intraoral scanning | |
Kahurke | Artificial Intelligence Algorithms and Techniques for Dentistry | |
- CN117197348A (zh) | Method and system for building a three-dimensional oral model based on CBCT fused with intraoral scan data | |
- CN117274601A (zh) | Method, apparatus, and system for processing three-dimensional tooth images | |
- CN115661172A (zh) | Tooth image segmentation method, storage medium, and electronic device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19864127 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021517414 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 3114650 Country of ref document: CA |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20217012553 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2019864127 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2019345828 Country of ref document: AU Date of ref document: 20190929 Kind code of ref document: A |