CN111489312A - Tooth restoration product surface feature extraction method based on computer graphic image - Google Patents
Tooth restoration product surface feature extraction method based on computer graphic image
- Publication number
- CN111489312A CN111489312A CN202010279368.1A CN202010279368A CN111489312A CN 111489312 A CN111489312 A CN 111489312A CN 202010279368 A CN202010279368 A CN 202010279368A CN 111489312 A CN111489312 A CN 111489312A
- Authority
- CN
- China
- Prior art keywords
- point
- image
- dental model
- pixel
- dimensional image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
- G06T3/067—Reshaping or unfolding 3D tree structures onto 2D planes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/34—Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The tooth restoration product surface feature extraction method based on the computer graphic image comprises the steps of: scanning the tooth body with non-contact blue light using a 3D scanner to generate complete dental model three-dimensional data and output an STL three-dimensional image; acquiring a two-dimensional image from the STL three-dimensional image by orthogonal projection and preprocessing the two-dimensional image; and extracting feature points from the preprocessed image based on the FAST algorithm.
Description
Technical Field
The invention relates to the field of computer vision image processing, in particular to a tooth restoration product surface feature extraction method based on computer graphic images.
Background
With the progress of science and technology and rising living standards, more and more people pay attention to oral health, and the technical requirements for oral treatment are increasingly high. Orthodontic treatment is a popular oral restoration method, and computer-aided diagnosis and treatment technology is being widely applied to it. In orthodontic treatment, tooth surface features are very important reference points for measurement and treatment; an orthodontist needs to quickly extract the feature points of interest on the tooth surface to provide a basis for subsequent diagnosis and treatment.
Tooth data differ from a common model: the tooth surface has no truly sharp edges or inflection points, only points and regions of relatively severe surface fluctuation. Existing tooth feature extraction techniques can therefore misjudge severely worn tooth cusps and relatively flat areas as feature points, and the tooth features extracted in this way are not accurate enough, which directly affects the orthodontic treatment effect. In addition, the speed and real-time performance of existing tooth feature extraction methods cannot meet current requirements. To address the urgent need for fast, real-time extraction of tooth features in orthodontic treatment, the invention provides a tooth restoration product surface feature extraction method based on computer graphic images.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a tooth restoration product surface feature extraction method based on computer graphic images, with which tooth surface features can be accurately and quickly extracted.
In order to achieve the purpose, the invention adopts the following technical scheme:
the tooth restoration product surface feature extraction method based on the computer graphic image comprises the following steps:
S1, scanning the tooth body with non-contact blue light using a 3D scanner to generate complete dental model three-dimensional data and output an STL three-dimensional image;
S2, acquiring a two-dimensional image from the STL three-dimensional image by orthogonal projection, and preprocessing the two-dimensional image;
S3, extracting feature points from the preprocessed image based on the FAST algorithm.
Further, in the orthogonal projection of step S2, the near clipping plane is selected as the projection plane; x and y of a point in the STL three-dimensional image are unchanged by the projection and z becomes -n, so that a point p in the STL three-dimensional image yields p' after orthogonal projection, wherein
the canonical view volume (CVV) is constructed in the z direction such that az + b = -1 when z is at the near clipping plane and az + b = 1 when z is at the far clipping plane, from which a and b are derived;
the orthogonal projection matrix is then deduced from the obtained a and b as follows:
where n denotes the distance from the near clipping plane to the camera plane, f denotes the distance from the far clipping plane to the camera plane, p denotes a point in the STL three-dimensional image, and p' denotes the point after projection of p;
based on the orthogonal projection matrix, a dental model image under STL three-dimensional image projection is obtained, preserving the complete outline and structure of the dental model.
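The two clipping-plane conditions above determine a and b, and with them the projection matrix. The following is a reconstruction sketch: the patent's own matrix appears only as a figure, so the sign conventions (camera looking down the negative z axis, near plane at z = -n, far plane at z = -f) are assumptions consistent with the surrounding text.

```latex
\[
a(-n) + b = -1,\qquad a(-f) + b = 1
\;\Longrightarrow\;
a = \frac{2}{n-f},\qquad b = \frac{n+f}{n-f},
\]
\[
M_{\text{ortho}} =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & \frac{2}{n-f} & \frac{n+f}{n-f}\\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad
p' = M_{\text{ortho}}\, p .
\]
```

With this matrix, x and y pass through unchanged while z is mapped to -1 at the near plane and +1 at the far plane, as stated.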
Further, in step S2, the preprocessing of the two-dimensional image includes:
S201, graying the obtained dental model two-dimensional image by the component method:
f_1(i,j) = R(i,j), f_2(i,j) = G(i,j), f_3(i,j) = B(i,j)
where f_k(i,j) (k = 1,2,3) denotes the gray value of the converted dental model grayscale image at (i,j);
S202, threshold segmentation of the two-dimensional image using the Otsu method: the original image is divided by a threshold into a background region and a target dental model region, and the optimal threshold maximizing the between-class variance is calculated so that the two regions are maximally distinguished,
T = max σ² = max[ω_A(μ_A - μ)² + ω_B(μ_B - μ)²]
where T denotes the segmentation threshold, μ_A denotes the gray mean of the target dental model region A, μ_B the gray mean of the background region B, μ the overall gray mean of the dental model image, ω_A the probability of region A, ω_B the probability of region B, and σ² the between-class variance between the target dental model region A and the background region B;
S203, performing edge detection on the segmented target dental model grayscale image to obtain the contour of the image;
S204, applying a local search method to the image contour obtained in step S203 to obtain a locally optimal solution.
Further, the edge detection of the target dental model gray scale map of step S203 includes:
S2031, removing noise from the image data by Gaussian smoothing;
S2032, generating from the image processed in step S2031 a luminance gradient map of each point in the image and the direction of the luminance gradient;
S2033, tracking the edges of the image processed in step S2032 using hysteresis thresholds according to the luminance gradient map and gradient direction of each point, and extracting the edge based on the local target features as the contour of the target dental model image.
Further, step S204 includes:
S2041, given an initial solution s, define t neighborhood structures, denoted N_k (k = 1,2,...,t), and set i = 1;
S2042, search using the neighborhood structure N_i (i.e., N_i(s)); if a solution s' better than s is found in N_i(s), let s = s' and i = 1;
S2043, if no solution better than s is found in the neighborhood structure N_i, let i = i + 1;
S2044, if i ≤ t, return to step S2042;
S2045, output the optimal solution s.
Further, in step S3, a circle of radius R is defined centered on the candidate feature point p; if there are enough consecutive pixel points on the circle whose gray values all exceed, or all fall below, the gray value of the center point by a given threshold, the point is selected as a corner point.
Further, the pixel point p to be detected is located at the center; the detection neighborhood consists of the pixels on the circle of radius 3 centered on p, the circumference pixels are labeled 1-16 clockwise, and the 16 pixel points on the circumference are divided into the following 3 classes:
where I_p denotes the pixel value of point p, I_{p→i} denotes the i-th pixel point on the circumference, n denotes a threshold parameter, and the result S_{p→i} takes 3 values: d means darker than the detection point, s means similar to the detection point, and b means brighter than the detection point;
the 1st and 9th pixel points are detected first, and if both are similar to p the point is not selected as a candidate point; next the 5th and 13th pixel points are detected, and if at least 3 of these 4 values are dark or bright the point is taken as a candidate point and the remaining pixel values are computed; if at least 9 consecutive points among the 16 detected points are dark or bright, the point is determined to be a feature point, finally achieving fast extraction of the incisal ridges and tooth cusps.
According to the method, the surface data of the object are first comprehensively acquired by the photographic measurement mode of a 3D scanner and a standard STL image file is output; a two-dimensional image is then acquired by orthogonal projection and preprocessed; finally, the feature points of the dental model image are extracted based on the FAST algorithm and the feature points of interest to the doctor are selected.
Drawings
The present invention will be further described and illustrated with reference to the following drawings.
FIG. 1 is a flow chart of the method for extracting surface features of a dental restoration product based on computer graphic images.
Detailed Description
The technical solution of the present invention will be more clearly and completely explained by the description of the preferred embodiments of the present invention with reference to the accompanying drawings.
As shown in fig. 1, the method for extracting surface features of a dental restoration product based on computer graphics images of the present invention comprises the following steps:
S1, scanning the tooth body with non-contact blue light using a 3D scanner to generate complete dental model three-dimensional data and output an STL three-dimensional image;
S2, acquiring a two-dimensional image from the STL three-dimensional image by orthogonal projection, and preprocessing the two-dimensional image;
S3, extracting feature points from the preprocessed image based on the FAST algorithm.
Specifically, in the orthogonal projection of step S2, the near clipping plane is selected as the projection plane. Since there is no single projection-ray target point, the x and y of a point in the STL three-dimensional image are unchanged by the projection while z always becomes -n, i.e. the point moves onto the projection plane (z = -n), so useless depth information can be omitted. A point p in the STL three-dimensional image thus yields p' after orthogonal projection, wherein
the canonical view volume (CVV) is constructed in the z direction such that az + b = -1 when z is at the near clipping plane and az + b = 1 when z is at the far clipping plane, from which a and b are derived;
the orthogonal projection matrix is then deduced from the obtained a and b as follows:
where n denotes the distance from the near clipping plane to the camera plane, f denotes the distance from the far clipping plane to the camera plane, p denotes a point in the STL three-dimensional image, and p' denotes the point after projection of p;
based on the orthogonal projection matrix, a dental model image under STL three-dimensional image projection is obtained, preserving the complete outline and structure of the dental model.
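The projection step above can be sketched as follows. This is an illustrative reconstruction, not the patent's exact matrix (which appears only as a figure): it assumes the usual convention of a camera looking down the negative z axis with near and far planes at z = -n and z = -f, and solves the two stated conditions az + b = -1 and az + b = 1 for a and b.

```python
# Orthogonal projection sketch: x and y pass through unchanged; z is mapped
# into the canonical view volume so that z = -n gives -1 and z = -f gives +1.

def ortho_matrix(n, f):
    """4x4 orthogonal projection matrix for clipping planes at z = -n and z = -f."""
    # Solve a*(-n) + b = -1 and a*(-f) + b = 1 for a and b.
    a = 2.0 / (n - f)
    b = (n + f) / (n - f)
    return [
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, a, b],
        [0, 0, 0, 1],
    ]

def project(m, p):
    """Apply a 4x4 matrix to a 3D point in homogeneous coordinates."""
    x, y, z = p
    v = [x, y, z, 1.0]
    out = [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]
    return out[:3]
```

Applying `project(ortho_matrix(n, f), p)` to every vertex of the dental model flattens it onto the projection plane while preserving its outline, as the text describes.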
Specifically, in step S2, the preprocessing of the two-dimensional image includes:
S201, graying the obtained dental model two-dimensional image by the component method:
f_1(i,j) = R(i,j), f_2(i,j) = G(i,j), f_3(i,j) = B(i,j)
where f_k(i,j) (k = 1,2,3) denotes the gray value of the converted dental model grayscale image at (i,j);
S202, threshold segmentation of the two-dimensional image using the Otsu method: the original image is divided by a threshold into a background region and a target dental model region, and the optimal threshold maximizing the between-class variance is calculated so that the two regions are maximally distinguished,
T = max σ² = max[ω_A(μ_A - μ)² + ω_B(μ_B - μ)²]
where T denotes the segmentation threshold, μ_A denotes the gray mean of the target dental model region A, μ_B the gray mean of the background region B, μ the overall gray mean of the dental model image, ω_A the probability of region A, ω_B the probability of region B, and σ² the between-class variance between the target dental model region A and the background region B;
S203, performing edge detection on the segmented target dental model grayscale image to obtain the contour of the image;
S204, applying a local search method to the image contour obtained in step S203 to obtain a locally optimal solution.
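Steps S201 and S202 can be sketched in pure Python on a toy image. This is an illustrative sketch only: the component method keeps each RGB channel as its own grayscale image, and Otsu's threshold is found by exhaustively maximizing the between-class variance (here in the equivalent form ω_A·ω_B·(μ_A - μ_B)²).

```python
# Component-method graying (S201) and Otsu thresholding (S202), illustrative only.

def component_gray(rgb):
    """Split an RGB image (list of rows of (r, g, b) tuples) into three grayscale images."""
    return [[[px[k] for px in row] for row in rgb] for k in range(3)]

def otsu_threshold(gray):
    """Return the threshold T maximizing the between-class variance on 0-255 gray values."""
    pixels = [v for row in gray for v in row]
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(256):
        a = [v for v in pixels if v <= t]   # background candidates (region B in the text)
        b = [v for v in pixels if v > t]    # dental model candidates (region A)
        if not a or not b:
            continue
        wa, wb = len(a) / total, len(b) / total
        mu_a, mu_b = sum(a) / len(a), sum(b) / len(b)
        var = wa * wb * (mu_a - mu_b) ** 2  # equivalent form of the between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal image (dark background, bright model) the returned threshold falls between the two modes, separating the dental model region from the background as step S202 requires.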
Specifically, the edge detection of the target dental model gray scale image in step S203 includes:
S2031, removing noise from the image data by Gaussian smoothing;
S2032, generating from the image processed in step S2031 a luminance gradient map of each point in the image and the direction of the luminance gradient;
S2033, tracking the edges of the image processed in step S2032 using hysteresis thresholds according to the luminance gradient map and gradient direction of each point, and extracting the edge based on the local target features as the contour of the target dental model image.
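The gradient step S2032 can be sketched with a Sobel operator. This is an assumption: the patent does not name the gradient kernel, and the full S2031-S2033 pipeline would additionally apply Gaussian smoothing before and hysteresis edge tracking after this step.

```python
# Sobel-style luminance gradient magnitude and direction at interior pixels (S2032 sketch).
import math

def sobel_gradients(img):
    """Return (magnitude, direction) maps for a grayscale image given as a list of rows."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    ang = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            mag[y][x] = math.hypot(gx, gy)   # gradient magnitude
            ang[y][x] = math.atan2(gy, gx)   # gradient direction
    return mag, ang
```

A vertical step edge produces a strong horizontal gradient (direction 0) along the boundary, which the hysteresis stage of S2033 would then link into the dental model contour.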
Specifically, step S204 includes:
S2041, given an initial solution s, define t neighborhood structures, denoted N_k (k = 1,2,...,t), and set i = 1;
S2042, search using the neighborhood structure N_i (i.e., N_i(s)); if a solution s' better than s is found in N_i(s), let s = s' and i = 1;
S2043, if no solution better than s is found in the neighborhood structure N_i, let i = i + 1;
S2044, if i ≤ t, return to step S2042;
S2045, output the optimal solution s.
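The variable neighborhood search of S2041-S2045 can be sketched on a toy one-dimensional objective. The patent applies it to the image contour; the objective function and neighborhood structures below are illustrative assumptions, not the patent's own.

```python
# Variable neighborhood search sketch (S2041-S2045): each neighborhood is a
# function mapping a solution to a list of candidate solutions.

def vns_minimize(f, s, neighborhoods):
    """Minimize f starting from s using the given list of neighborhood structures."""
    i = 0                                      # S2041: start with the first neighborhood
    while i < len(neighborhoods):              # S2044: stop once i exceeds t
        candidates = neighborhoods[i](s)
        better = [c for c in candidates if f(c) < f(s)]
        if better:
            s = min(better, key=f)             # S2042: found s' better than s, reset to N_1
            i = 0
        else:
            i += 1                             # S2043: no improvement, try next neighborhood
    return s                                   # S2045: output the (locally) optimal solution
```

Each improvement strictly decreases f(s), and the search terminates once no neighborhood structure improves the current solution.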
Specifically, in step S3, a circle of radius R is defined centered on the candidate feature point p; if there are enough consecutive pixel points on the circle whose gray values all exceed, or all fall below, the gray value of the center point by a given threshold, the point is selected as a corner point.
Preferably, the pixel point p to be detected is located at the center; the detection neighborhood consists of the pixels on the circle of radius 3 centered on p, the circumference pixels are labeled 1-16 clockwise, and the 16 pixel points on the circumference are divided into the following 3 classes:
where I_p denotes the pixel value of point p, I_{p→i} denotes the i-th pixel point on the circumference, n denotes a threshold parameter, and the result S_{p→i} takes 3 values: d means darker than the detection point, s means similar to the detection point, and b means brighter than the detection point;
the 1st and 9th pixel points are detected first, and if both are similar to p the point is not selected as a candidate point; next the 5th and 13th pixel points are detected, and if at least 3 of these 4 values are dark or bright the point is taken as a candidate point and the remaining pixel values are computed; if at least 9 consecutive points among the 16 detected points are dark or bright, the point is determined to be a feature point, finally achieving fast extraction of the incisal ridges and tooth cusps.
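The segment test described above can be sketched as follows. The classification formula is reconstructed from the text (the patent shows it only as a figure), and the 16 circle offsets are the standard FAST radius-3 layout, assumed here since the patent gives them only as a drawing.

```python
# FAST-style segment test sketch: classify the 16 circle pixels around p as
# darker (d), similar (s), or brighter (b), with the 1/9 and 5/13 shortcut
# tests before the full 9-contiguous check.

# Offsets of the 16 radius-3 circle pixels, clockwise from the top (assumed layout).
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def classify(ip, i_pi, n):
    if i_pi <= ip - n:
        return "d"               # darker than the detection point
    if i_pi >= ip + n:
        return "b"               # brighter than the detection point
    return "s"                   # similar to the detection point

def is_feature_point(img, x, y, n=20):
    ip = img[y][x]
    labels = [classify(ip, img[y + dy][x + dx], n) for dx, dy in CIRCLE]
    # Shortcut 1: pixels 1 and 9 (indices 0 and 8) both similar -> not a candidate.
    if labels[0] == "s" and labels[8] == "s":
        return False
    # Shortcut 2: among pixels 1, 5, 9, 13 at least 3 must agree on dark or bright.
    quad = [labels[i] for i in (0, 4, 8, 12)]
    if quad.count("d") < 3 and quad.count("b") < 3:
        return False
    # Full test: at least 9 contiguous dark or bright labels on the (wrapped) circle.
    wrapped = labels + labels
    for c in ("d", "b"):
        run = 0
        for lab in wrapped:
            run = run + 1 if lab == c else 0
            if run >= 9:
                return True
    return False
```

The shortcut tests reject most flat-surface pixels after at most four comparisons, which is the source of the speed-up the text claims over testing every circumference pixel in turn.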
According to the method, the surface data of the object are first comprehensively acquired by the photographic measurement mode of a 3D scanner and a standard STL image file is output; a two-dimensional image is then acquired by orthogonal projection and preprocessed; finally, the feature points of the dental model image are extracted based on the FAST algorithm and the feature points of interest to the doctor are selected. In the feature extraction, the pixel points on the circumference of the neighborhood of the pixel point p to be detected are divided into 3 classes and a dedicated detection scheme is adopted instead of testing every pixel point in turn, which increases the detection speed. The method therefore extracts a large number of features in a short time, effectively identifies different types of features on the tooth surface with high accuracy, and speeds up image matching.
The above detailed description merely describes preferred embodiments of the present invention and does not limit the scope of the invention. Various changes, substitutions and alterations can be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims and their equivalents. The scope of the invention is defined by the claims.
Claims (7)
1. The tooth restoration product surface feature extraction method based on the computer graphic image is characterized by comprising the following steps of:
S1, scanning the tooth body with non-contact blue light using a 3D scanner to generate complete dental model three-dimensional data and output an STL three-dimensional image;
S2, acquiring a two-dimensional image from the STL three-dimensional image by orthogonal projection, and preprocessing the two-dimensional image;
S3, extracting feature points from the preprocessed image based on the FAST algorithm.
2. The tooth restoration product surface feature extraction method based on the computer graphic image according to claim 1, wherein in the orthogonal projection of step S2, the near clipping plane is selected as the projection plane; x and y of a point in the STL three-dimensional image are unchanged by the projection and z becomes -n, so that a point p in the STL three-dimensional image yields p' after orthogonal projection, wherein
the canonical view volume (CVV) is constructed in the z direction such that az + b = -1 when z is at the near clipping plane and az + b = 1 when z is at the far clipping plane, from which a and b are derived;
the orthogonal projection matrix is then deduced from the obtained a and b as follows:
where n denotes the distance from the near clipping plane to the camera plane, f denotes the distance from the far clipping plane to the camera plane, p denotes a point in the STL three-dimensional image, and p' denotes the point after projection of p;
based on the orthogonal projection matrix, a dental model image under STL three-dimensional image projection is obtained, preserving the complete outline and structure of the dental model.
3. The computer graphic image-based method for extracting surface features of dental restoration articles according to claim 2, wherein the preprocessing of the two-dimensional image in step S2 comprises:
S201, graying the obtained dental model two-dimensional image by the component method:
f_1(i,j) = R(i,j), f_2(i,j) = G(i,j), f_3(i,j) = B(i,j)
where f_k(i,j) (k = 1,2,3) denotes the gray value of the converted dental model grayscale image at (i,j);
S202, threshold segmentation of the two-dimensional image using the Otsu method: the original image is divided by a threshold into a background region and a target dental model region, and the optimal threshold maximizing the between-class variance is calculated so that the two regions are maximally distinguished,
T = max σ² = max[ω_A(μ_A - μ)² + ω_B(μ_B - μ)²]
where T denotes the segmentation threshold, μ_A denotes the gray mean of the target dental model region A, μ_B the gray mean of the background region B, μ the overall gray mean of the dental model image, ω_A the probability of region A, ω_B the probability of region B, and σ² the between-class variance between the target dental model region A and the background region B;
S203, performing edge detection on the segmented target dental model grayscale image to obtain the contour of the image;
S204, applying a local search method to the image contour obtained in step S203 to obtain a locally optimal solution.
4. The method for extracting surface features of a dental restoration product based on computer graphic images as claimed in claim 3, wherein the edge detection of the target dental model gray-scale image of step S203 comprises:
S2031, removing noise from the image data by Gaussian smoothing;
S2032, generating from the image processed in step S2031 a luminance gradient map of each point in the image and the direction of the luminance gradient;
S2033, tracking the edges of the image processed in step S2032 using hysteresis thresholds according to the luminance gradient map and gradient direction of each point, and extracting the edge based on the local target features as the contour of the target dental model image.
5. The computer graphics image-based method for extracting surface features of dental restoration products according to claim 4, wherein the step S204 comprises:
S2041, given an initial solution s, define t neighborhood structures, denoted N_k (k = 1,2,...,t), and set i = 1;
S2042, search using the neighborhood structure N_i (i.e., N_i(s)); if a solution s' better than s is found in N_i(s), let s = s' and i = 1;
S2043, if no solution better than s is found in the neighborhood structure N_i, let i = i + 1;
S2044, if i ≤ t, return to step S2042;
S2045, output the optimal solution s.
6. The tooth restoration product surface feature extraction method based on the computer graphic image according to claim 5, wherein in step S3 a circle of radius R is defined centered on the candidate feature point p; if there are enough consecutive pixel points on the circle whose gray values all exceed, or all fall below, the gray value of the center point by a given threshold, the point is selected as a corner point.
7. The tooth restoration product surface feature extraction method based on the computer graphic image according to claim 6, wherein the pixel point p to be detected is located at the center, the detection neighborhood consists of the pixels on the circle of radius 3 centered on p, the circumference pixels are labeled 1-16 clockwise, and the 16 pixel points on the circumference are divided into the following 3 classes:
where I_p denotes the pixel value of point p, I_{p→i} denotes the i-th pixel point on the circumference, n denotes a threshold parameter, and the result S_{p→i} takes 3 values: d means darker than the detection point, s means similar to the detection point, and b means brighter than the detection point;
the 1st and 9th pixel points are detected first, and if both are similar to p the point is not selected as a candidate point; next the 5th and 13th pixel points are detected, and if at least 3 of these 4 values are dark or bright the point is taken as a candidate point and the remaining pixel values are computed; if at least 9 consecutive points among the 16 detected points are dark or bright, the point is determined to be a feature point, finally achieving fast extraction of the incisal ridges and tooth cusps.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010279368.1A CN111489312B (en) | 2020-04-10 | 2020-04-10 | Dental restoration product surface feature extraction method based on computer graphic image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111489312A true CN111489312A (en) | 2020-08-04 |
CN111489312B CN111489312B (en) | 2023-04-28 |
Family
ID=71798337
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010279368.1A Active CN111489312B (en) | 2020-04-10 | 2020-04-10 | Dental restoration product surface feature extraction method based on computer graphic image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111489312B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103745501A (en) * | 2014-01-28 | 2014-04-23 | 广东药学院 | STL (Standard Template Library) file format based three-dimensional model coloring and color information access method |
CN103886306A (en) * | 2014-04-08 | 2014-06-25 | 山东大学 | Tooth X-ray image matching method based on SURF point matching and RANSAC model estimation |
CN103927732A (en) * | 2013-01-11 | 2014-07-16 | 上海联影医疗科技有限公司 | Method for detecting chest wall lines |
US20160239631A1 (en) * | 2015-02-13 | 2016-08-18 | Align Technology, Inc. | Three-dimensional tooth modeling using a two-dimensional x-ray image |
Also Published As
Publication number | Publication date |
---|---|
CN111489312B (en) | 2023-04-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20221221 Address after: Room 801, 8th Floor, Building 1, No. 1188, Qinzhou North Road, Xuhui District, Shanghai, 200000 Applicant after: Shanghai Weiyun Industrial Group Co.,Ltd. Address before: 210000 Room 201, building 2, No.2, Shuanglong street, Qinhuai District, Nanjing City, Jiangsu Province Applicant before: Nanjing Jiahe Dental Technology Co.,Ltd. |
|
GR01 | Patent grant | ||
GR01 | Patent grant |