CN111598901B - Method for estimating processing progress of dental restoration product based on depth image - Google Patents

Method for estimating processing progress of dental restoration product based on depth image

Info

Publication number
CN111598901B
Authority
CN
China
Prior art keywords
dental restoration
product
depth
depth image
dental
Prior art date
Legal status
Active
Application number
CN202010425917.1A
Other languages
Chinese (zh)
Other versions
CN111598901A
Inventor
刘大鹏
Current Assignee
Shanghai Weiyun Industrial Group Co ltd
Original Assignee
Shanghai Weiyun Industrial Group Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Weiyun Industrial Group Co ltd filed Critical Shanghai Weiyun Industrial Group Co ltd
Priority to CN202010425917.1A
Publication of CN111598901A
Application granted
Publication of CN111598901B

Classifications

    • G06T7/11 Region-based segmentation
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/0012 Biomedical image inspection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/50 Depth or shape recovery
    • G06T7/60 Analysis of geometric attributes
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20024 Filtering details
    • G06T2207/30036 Dental; Teeth
    • Y02P90/30 Computing systems specially adapted for manufacturing

Abstract

The method for estimating the processing progress of a dental restoration product based on a depth image comprises the following steps. S101: collect a depth image of the processed model of the dental restoration product; S102: segment the depth image of the target area; S103: filter the depth image of the target area; S104: generate point cloud data from the filtered depth image of the target area; S105: obtain the depth information of the processed model of the dental restoration product; S201: read the machining template file of the dental product; S202: convert the machining template file into point cloud data; S203: downsample the point cloud data; S204: count the depth values of the machining template file; S301: calculate the depth value ratio. The method judges similarity through the ratio of the depth values of the depth image of the dental restoration product to those of the depth image of the template file, so as to reflect whether the dental restoration product needs further machining and to avoid manual measurement.

Description

Method for estimating processing progress of dental restoration product based on depth image
Technical Field
The invention relates to the field of dental product manufacturing methods, in particular to a method for estimating the processing progress of a dental restoration product based on a depth image.
Background
With the vigorous development of digital technology, new directions have emerged in the field of dental restoration, and dental restoration products are shifting from manual fabrication to machining. At present, however, the subsequent manufacturing process usually continues directly after machining, and only afterwards is it judged manually, with the help of instruments, whether the product meets the processing requirements; a product that does not meet the requirements must be machined further. Because the product cannot be measured automatically once machining is finished, resources are wasted.
The ToF depth camera is non-contact and highly precise, and is currently widely used in many directions of three-dimensional measurement. For three-dimensional measurement in different scenes and at different distances, errors can be reduced by algorithms, so that high-precision measurement is achieved, with the highest precision reaching the micron level.
Disclosure of Invention
In order to overcome the defects in the prior art, the method for estimating the processing progress of a dental restoration product based on a depth image provided by the invention reflects the degree of similarity between the dental restoration product and the dental product template through the ratio of the depth values of the depth image of the dental restoration product to those of the depth image of the dental product machining template, and thereby indirectly reflects the degree of completion of the dental restoration product. Whether the dental restoration product needs further machining is determined according to the degree of completion; the judgement is made mechanically by means of a ToF depth camera, which avoids the waste of personnel resources caused by manual measurement.
In order to achieve the above object, the method for estimating the processing progress of a dental restoration product based on a depth image of the present invention comprises the following steps. Step S101: a TOF depth camera collects a depth image of the processed model of the dental restoration product and inputs the depth image into the PCL. Step S102: a depth image of the target area is segmented from the depth image, acquired in step S101, of the processed model of the dental restoration product. Step S103: the depth image of the target area from step S102 is filtered. Step S104: point cloud data is generated from the depth image of the target area filtered in step S103. Step S105: the depth information of the processed model of the dental restoration product is obtained and counted. Step S201: the PCL reads the machining template file of the dental product. Step S202: the machining template file from step S201 is coordinate-converted into point cloud data in the PCL. Step S203: the point cloud data from step S202 is downsampled. Step S204: the depth value at each point of the point cloud data of the machining template file downsampled in step S203 is counted. Step S301: the ratio of the depth values of the processed model of the dental restoration product to the depth values of the machining template of the dental product is calculated, and the processing progress of the dental restoration product is estimated according to the depth value ratio, which comprises: using the depth value ratio to obtain the similarity between the depth values of the processed model of the dental restoration product and the depth values of the machining template of the dental product, and estimating, according to the similarity, whether the processed model of the dental restoration product is sufficiently close to the machining template, so as to judge whether the dental restoration product is finished.
Further, in step S102, the point cloud data of the depth image of the target area is filtered by a thresholding method.
Further, in step S102, the formula of the threshold method is:
$$d(x,y)=\begin{cases}d(x,y), & \mathrm{dis}-h\le d(x,y)\le \mathrm{dis}\\0, & \text{otherwise}\end{cases}$$
wherein dis is the distance between the TOF depth camera and the processed model of the dental restoration product, h is the height of the processed model of the dental restoration product, and d(x, y) is the point cloud depth value at pixel (x, y).
Further, in step S201, the machining template file is a three-dimensional model file of the dental product with a blank added and the machining simulated.
Further, in step S203, the point cloud data is downsampled by using the voxel grid downsampling method.
Further, determining whether the dental restoration product is finished comprises: when the estimated processing progress within a single module of the dental restoration product is more than 95 percent and the overall processing progress of the dental restoration product is more than 90 percent, judging that the dental restoration product is finished and that no further machining is needed.
The beneficial effects are: 1. A TOF depth camera acquires the depth image of the dental restoration product and its depth values are obtained through the PCL; the PCL reads the template file of the dental product and obtains the depth values of the template file; similarity is judged by comparing the depth image of the dental restoration product with the depth image of the template file, so that whether the dental restoration product needs further machining is indirectly reflected and manual measurement is avoided.
2. The point cloud data is downsampled by a voxel grid downsampling method, which reduces the amount of machining template file data to be computed.
Drawings
The invention is further described and illustrated below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a preferred embodiment of the present invention.
Detailed Description
The technical solution of the present invention will be more clearly and completely explained by the description of the preferred embodiments of the present invention with reference to the accompanying drawings.
Examples
The method for estimating the processing progress of a dental restoration product based on a depth image comprises the following steps. Step S101: a TOF depth camera acquires a depth image of the processed model of the dental restoration product and inputs the depth image into the PCL.
Specifically, PCL is short for Point Cloud Library. The ToF depth camera is non-contact and highly precise, and is widely used for three-dimensional measurement; for measurement in different scenes and at different distances, its errors can be reduced by algorithms, so that high-precision measurement is achieved. The TOF depth camera provides a depth value for every pixel.
Step S102: a depth image of the target area is segmented from the depth image, acquired in step S101, of the processed model of the dental restoration product.
Specifically, the depth image of the processed model of the dental restoration product is segmented by a thresholding method. Thresholding is a region-based image segmentation technique whose principle is to classify image pixels into several classes; it is particularly suitable for images in which the target and the background occupy different gray-level ranges. The purpose of thresholding an image is to divide the set of pixels, according to gray level, into subsets of regions corresponding to the real scene, such that each region has a consistent attribute within it while adjacent regions do not share that attribute. Such a division can be achieved by choosing one or more thresholds of gray level.
The thresholding formula is shown below:
$$d(x,y)=\begin{cases}d(x,y), & \mathrm{dis}-h\le d(x,y)\le \mathrm{dis}\\0, & \text{otherwise}\end{cases}$$
wherein dis is the distance between the TOF depth camera and the processed model of the dental restoration product, h is the height of the processed model of the dental restoration product, and d(x, y) is the point cloud depth value at pixel (x, y).
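The following is a minimal sketch, assuming C++ with OpenCV and 32-bit float depth values, of how the threshold segmentation of the target area (step S102) could be implemented under the formula above; the function name and the decision to zero out non-target pixels are illustrative assumptions, not details from the patent.

```cpp
#include <opencv2/core.hpp>

// Threshold segmentation of the target area (step S102).
// depth: depth image from the TOF camera, one value d(x, y) per pixel.
// dis:   distance between the TOF depth camera and the processed model.
// h:     height of the processed model.
cv::Mat segmentTargetArea(const cv::Mat& depth, float dis, float h)
{
    cv::Mat target = cv::Mat::zeros(depth.size(), depth.type());
    for (int y = 0; y < depth.rows; ++y) {
        for (int x = 0; x < depth.cols; ++x) {
            const float d = depth.at<float>(y, x);
            if (d >= dis - h && d <= dis) {   // depth lies in the target range
                target.at<float>(y, x) = d;   // keep the depth value
            }                                 // otherwise the pixel stays 0
        }
    }
    return target;
}
```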
Step S103: the PCL reads the point cloud data of the depth image of the target area in step S102, and filters the point cloud data of the depth image of the target area. Noise points generated after the thresholding operation are removed.
Specifically, when the point cloud data is acquired, some noise inevitably appears because of equipment precision, operator experience, environmental factors, the diffraction characteristics of electromagnetic waves, changes in the surface properties of the measured object, and the data stitching and registration process. In the point cloud processing pipeline, filtering is the first preprocessing step, and it has a relatively large influence on the subsequent steps.
For example, a pass-through filter can be used: first a pass-through filter object is created, then its parameters are set, including the field and range to be filtered, and finally the filter is executed; the point cloud is filtered according to the set field range, and the PCL retains the filtered point cloud as the result.
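A minimal sketch of this filtering step (S103), assuming C++ and the Point Cloud Library's PassThrough filter, is given below; the filter field name "z", the limit values, and the function name are illustrative assumptions, not details given in the patent.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>

// Pass-through filtering of the target-area point cloud (step S103).
// zMin/zMax would correspond to the depth range of the target area.
pcl::PointCloud<pcl::PointXYZ>::Ptr
filterTargetArea(const pcl::PointCloud<pcl::PointXYZ>::Ptr& input, float zMin, float zMax)
{
    pcl::PassThrough<pcl::PointXYZ> pass;   // create the pass-through filter object
    pass.setInputCloud(input);
    pass.setFilterFieldName("z");           // field to filter on (depth axis, assumed "z")
    pass.setFilterLimits(zMin, zMax);       // range of values to keep
    pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
    pass.filter(*output);                   // points outside [zMin, zMax] are dropped
    return output;
}
```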
Step S104: point cloud data is generated from the depth image of the target area filtered in step S103.
First, the PCL traverses the depth image and, for the point at (m, n) in the depth image, acquires its value and calculates its spatial coordinates; the resulting points form the point cloud data of the entire depth image.
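A minimal sketch of such a depth-image-to-point-cloud conversion (step S104) is given below, assuming a pinhole camera model; the intrinsic parameters fx, fy, cx, cy, the use of OpenCV for the depth image, and metre-valued depths are assumptions, since the patent does not specify the conversion parameters.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <opencv2/core.hpp>

// Convert a depth image into point cloud data (step S104).
// depth: one depth value per pixel; fx, fy, cx, cy: assumed camera intrinsics.
pcl::PointCloud<pcl::PointXYZ>::Ptr
depthToCloud(const cv::Mat& depth, float fx, float fy, float cx, float cy)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    for (int m = 0; m < depth.rows; ++m) {
        for (int n = 0; n < depth.cols; ++n) {
            const float d = depth.at<float>(m, n);  // value of the point at (m, n)
            if (d <= 0.0f) continue;                // skip invalid pixels
            pcl::PointXYZ p;
            p.z = d;                                // depth along the optical axis
            p.x = (n - cx) * d / fx;                // back-projection with the pinhole model
            p.y = (m - cy) * d / fy;
            cloud->push_back(p);
        }
    }
    return cloud;
}
```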
Step S105: and obtaining and counting the depth information of the model after the dental restoration product is processed.
Specifically, the post-processing model depth information of the dental restoration product includes depth values of the entire target area, and the like.
Step S201: the PCL reads in a machining template file of the dental article.
Specifically, each dental restoration product has a corresponding machining template file of the dental product; the machining template file is a three-dimensional model file with a blank added and the machining simulated.
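The sketch below assumes the template is stored as a PLY point cloud and uses the Point Cloud Library's PLY reader; the file format, the path handling, and the function name are assumptions, as the patent does not state the format of the machining template file.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/ply_io.h>
#include <stdexcept>
#include <string>

// Read the machining template file into the PCL (step S201).
pcl::PointCloud<pcl::PointXYZ>::Ptr loadTemplate(const std::string& path)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPLYFile<pcl::PointXYZ>(path, *cloud) < 0) {
        throw std::runtime_error("failed to read template file: " + path);
    }
    return cloud;
}
```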
Step S202: the processing template file in step S201 is coordinate-converted into point cloud data by PCL.
Specifically, the point cloud data of the processing template file is generated in the same manner as in step S104, first, the PCL traverses the depth image in the processing template file, acquires the value of the point at (m, n) in the depth image, calculates the spatial coordinates of the point, and then forms the point cloud data of the entire depth image.
Step S203: the point cloud data in step S202 is downsampled.
Specifically, a voxel grid downsampling method is used. The voxel grid class implemented in the PCL creates a three-dimensional voxel grid over the input point cloud data (the grid can be thought of as a collection of small three-dimensional cubes in space); then, within each voxel (i.e. three-dimensional cube), all points in the voxel are approximated by their center of gravity, so that each voxel is finally represented by a single center-of-gravity point, and the filtered point cloud is obtained after all voxels have been processed. A sketch using the PCL voxel grid filter follows the numbered list below.
The voxel grid downsampling method comprises the following steps:
1. A three-dimensional voxel grid is created over the input point cloud data.
2. Within each voxel grid, the points in the voxel are approximated by, and replaced with, their center of gravity; the other points are removed.
3. All voxel grids are processed to obtain the filtered point cloud data.
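A minimal sketch of this downsampling step (S203) with the Point Cloud Library's VoxelGrid filter is given below; the leaf size of 1 mm and the function name are assumptions for illustration, not values from the patent.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

// Voxel grid downsampling of the template point cloud (step S203).
pcl::PointCloud<pcl::PointXYZ>::Ptr
downsample(const pcl::PointCloud<pcl::PointXYZ>::Ptr& input, float leaf = 0.001f)
{
    pcl::VoxelGrid<pcl::PointXYZ> grid;     // voxel grid filter object
    grid.setInputCloud(input);
    grid.setLeafSize(leaf, leaf, leaf);     // edge length of each voxel (cube)
    pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
    grid.filter(*output);                   // each voxel is represented by its centroid
    return output;
}
```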
Step S204: the depth value of the machining template file at each point in step S103 is counted.
Specifically, in the point cloud data in step S203, the PCL acquires a depth value of the depth image in the processing template file.
Step S301: the ratio of the depth value of the model after the dental restoration product is processed to the depth value of the processing template of the dental product is calculated.
Specifically, comparing the ratio of the depth values of the processed model of the dental restoration product to the depth values of the machining template of the dental product yields the similarity between the two, so whether the processed model of the dental restoration product is sufficiently close to the machining template can be estimated from the similarity. For example, when the similarity of each single small area (module) reaches more than 95% and the overall similarity reaches more than 90%, it can be judged that machining is complete and no further machining is needed. Automatic detection of the dental restoration product is thus achieved through the TOF camera and point cloud data operations, freeing human resources.
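The sketch below illustrates one way the depth value ratio and the decision rule could be computed; the point-by-index correspondence, the averaging of per-point ratios, and the clamping to [0, 1] are assumptions, since the patent does not give the exact aggregation. Only the 95% per-module and 90% overall thresholds come from the text.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <algorithm>
#include <cstddef>

// Estimated progress in [0, 1] for one region (module) or for the whole model (step S301).
double estimateProgress(const pcl::PointCloud<pcl::PointXYZ>& product,
                        const pcl::PointCloud<pcl::PointXYZ>& templ)
{
    const std::size_t n = std::min(product.size(), templ.size());
    std::size_t valid = 0;
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double t = templ[i].z;                 // template depth value
        if (t <= 0.0) continue;                      // skip degenerate points
        const double r = product[i].z / t;           // depth value ratio at this point
        sum += std::min(1.0, std::max(0.0, r));      // clamp to [0, 1]
        ++valid;
    }
    return valid > 0 ? sum / static_cast<double>(valid) : 0.0;
}

// Decision rule stated in the patent: finished when every module exceeds 95%
// and the overall progress exceeds 90%.
bool isFinished(double overallProgress, double worstModuleProgress)
{
    return worstModuleProgress > 0.95 && overallProgress > 0.90;
}
```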
The above detailed description is merely illustrative of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Various modifications, substitutions and improvements of the technical scheme of the present invention will be apparent to those skilled in the art from the description and drawings provided herein without departing from the spirit and scope of the invention. The scope of the invention is defined by the claims.

Claims (6)

1. A method for estimating the processing progress of a dental restoration product based on depth images is characterized by comprising the following steps,
step S101: the TOF depth camera collects a depth image of the model after the dental restoration product is processed and inputs the depth image into the PCL;
step S102: segmenting a depth image of a target area from the depth image, acquired in step S101, of the model of the dental restoration product after processing;
step S103: filtering the depth image of the target area in step S102;
step S104: generating point cloud data for the depth image of the target area filtered in step S103;
step S105: obtaining and counting the depth information of the model after the dental restoration product is processed;
step S201: reading a machining template file of the dental product by PCL;
step S202: converting the processing template file in the step S201 into point cloud data through coordinates in PCL;
step S203: downsampling the point cloud data in step S202;
step S204: counting the depth value of each point in the point cloud data of the processing template file after downsampling in the step S203;
step S301: calculating the ratio of the depth values of the model after the dental restoration product is processed to the depth values of the machining template of the dental product, and estimating the processing progress of the dental restoration product according to the depth value ratio, which comprises: using the depth value ratio to obtain the similarity between the depth values of the model after the dental restoration product is processed and the depth values of the machining template of the dental product, and estimating, according to the similarity, whether the model after the dental restoration product is processed is sufficiently close to the machining template, so as to judge whether the dental restoration product is finished.
2. The method according to claim 1, wherein in step S102, the point cloud data of the depth image of the target area is filtered by a thresholding method.
3. The method for estimating a process progress of a dental restoration product based on a depth image according to claim 2, wherein in step S102, the formula of the thresholding method is:
$$d(x,y)=\begin{cases}d(x,y), & \mathrm{dis}-h\le d(x,y)\le \mathrm{dis}\\0, & \text{otherwise}\end{cases}$$
wherein dis is the distance between the TOF depth camera and the processed model of the dental restoration product, h is the height of the processed model of the dental restoration product, and d(x, y) is the point cloud depth value at pixel (x, y).
4. The method according to claim 1, wherein in step S201, the machining template file is a three-dimensional model file of the dental product with a blank added and the machining simulated.
5. The method according to claim 1, wherein in step S203, the point cloud data is downsampled by using a voxel grid downsampling method.
6. The method for estimating the processing progress of a dental restoration product based on a depth image according to claim 1, wherein determining whether the dental restoration product is finished comprises: when the estimated processing progress in a single module of the dental restoration product is more than 95 percent and the overall processing progress of the dental restoration product is more than 90 percent, judging that the dental restoration product is finished and that no further processing is needed.
CN202010425917.1A (priority date 2020-05-19, filing date 2020-05-19) Method for estimating processing progress of dental restoration product based on depth image. Granted as CN111598901B; status: Active.

Priority Applications (1)

Application number: CN202010425917.1A; priority date: 2020-05-19; filing date: 2020-05-19; title: Method for estimating processing progress of dental restoration product based on depth image

Applications Claiming Priority (1)

Application number: CN202010425917.1A; priority date: 2020-05-19; filing date: 2020-05-19; title: Method for estimating processing progress of dental restoration product based on depth image

Publications (2)

Publication Number Publication Date
CN111598901A CN111598901A (en) 2020-08-28
CN111598901B true CN111598901B (en) 2023-04-28

Family

ID=72187438

Family Applications (1)

Application number: CN202010425917.1A; priority date: 2020-05-19; filing date: 2020-05-19; title: Method for estimating processing progress of dental restoration product based on depth image; status: Active (granted as CN111598901B)

Country Status (1)

Country Link
CN (1) CN111598901B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326511B (en) * 2021-06-25 2024-04-09 深信服科技股份有限公司 File repair method, system, equipment and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106251353A (en) * 2016-08-01 2016-12-21 上海交通大学 Weak texture workpiece and the recognition detection method and system of three-dimensional pose thereof
CN108182689B (en) * 2016-12-08 2021-06-22 中国科学院沈阳自动化研究所 Three-dimensional identification and positioning method for plate-shaped workpiece applied to robot carrying and polishing field
CN107702662B (en) * 2017-09-27 2020-01-21 深圳拎得清软件有限公司 Reverse monitoring method and system based on laser scanner and BIM
KR101899549B1 (en) * 2017-12-27 2018-09-17 재단법인 경북아이티융합 산업기술원 Obstacle recognition apparatus of obstacle recognition using camara and lidar sensor and method thereof
CN108629849A (en) * 2018-05-16 2018-10-09 浙江大学 A kind of component quality inspection system based on BIM with point cloud

Also Published As

Publication number Publication date
CN111598901A (en) 2020-08-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221215

Address after: Room 801, 8th Floor, Building 1, No. 1188, Qinzhou North Road, Xuhui District, Shanghai, 200000

Applicant after: Shanghai Weiyun Industrial Group Co.,Ltd.

Address before: 210000 Room 201, building 2, No.2, Shuanglong street, Qinhuai District, Nanjing City, Jiangsu Province

Applicant before: Nanjing Jiahe Dental Technology Co.,Ltd.

GR01 Patent grant