WO2022068883A1 - Scanning result processing method, apparatus, processor and scanning system - Google Patents

Scanning result processing method, apparatus, processor and scanning system

Info

Publication number
WO2022068883A1
WO2022068883A1 PCT/CN2021/121752 CN2021121752W WO2022068883A1 WO 2022068883 A1 WO2022068883 A1 WO 2022068883A1 CN 2021121752 W CN2021121752 W CN 2021121752W WO 2022068883 A1 WO2022068883 A1 WO 2022068883A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
scanning
result
dimensional
data
Prior art date
Application number
PCT/CN2021/121752
Other languages
English (en)
French (fr)
Inventor
麻腾超
Original Assignee
先临三维科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 先临三维科技股份有限公司 filed Critical 先临三维科技股份有限公司
Priority to JP2023519520A priority Critical patent/JP2023543298A/ja
Priority to US18/027,933 priority patent/US20230368460A1/en
Priority to EP21874538.8A priority patent/EP4224419A4/en
Publication of WO2022068883A1 publication Critical patent/WO2022068883A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • G06V20/647 - Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C - DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C9/00 - Impression cups, i.e. impression trays; Impression methods
    • A61C9/004 - Means or methods for taking digitized impressions
    • A61C9/0046 - Data acquisition means or methods
    • A61C9/0053 - Optical means or methods, e.g. scanning the teeth by a laser or light beam
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/243 - Classification techniques relating to the number of classes
    • G06F18/2433 - Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 - Recognition of patterns in medical or anatomical images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/12 - Acquisition of 3D measurements of objects

Definitions

  • the present application relates to the technical field of scanners, and in particular, to a method, an apparatus, a computer-readable storage medium, a processor, and a scanning system for processing scan results.
  • the dental data of the patient's mouth is scanned and collected by an intraoral scanner, which mainly collects the data of teeth and gums.
  • the oral environment is relatively small, and images of the tongue, lips, buccal side, and auxiliary tools are easily captured during the scanning process, so the generated model data contains invalid data, which mainly has the following effects:
  • invalid data separated from the teeth by a certain gap can be deleted by the relevant algorithm, but invalid data connected to the teeth must be deleted manually, otherwise it is retained; the added inspection, selection and deletion steps affect the overall scanning efficiency.
  • the digital model data generation process is: original depth map → point cloud data (reconstruction) → mesh model (fusion), so when a frame of data contains images of the lips and cheeks, the fused mesh data will contain the corresponding triangular mesh patches, which are likewise unnecessary miscellaneous data.
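The first stage of the depth-map → point-cloud → mesh pipeline described above can be illustrated as follows. This is a minimal sketch (hypothetical function name, pinhole camera model assumed) of back-projecting a depth map into point cloud data:

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W) into an N x 3 point cloud using a
    pinhole camera model. Pixels with depth 0 are treated as invalid."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (us[valid] - cx) * z / fx
    y = (vs[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

The fused mesh is then built on top of these reconstructed points, which is why miscellaneous pixels in a frame propagate into miscellaneous mesh patches.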
  • the existing methods for removing miscellaneous data mainly analyse the generated data, for example through the correlation of data blocks (points), such as whether there are isolated data blocks (points), or through limiting conditions or strategies that find data points deemed ineligible, and then remove these points from the overall data.
  • intraoral data scanning has high real-time requirements, and as the overall mesh data grows, executing a deletion-optimization algorithm requires pausing the scan until optimization completes, which affects scanning efficiency.
  • such algorithms are rule-based and relatively rigid: they can only handle data in general situations, while invalid data in some relatively special situations cannot be removed.
  • the main purpose of this application is to provide a scanning result processing method, apparatus, computer-readable storage medium, processor and scanning system, so as to solve the problem in the prior art that determining invalid data through data analysis leads to low scanning efficiency.
  • a method for processing a scan result, including: acquiring a scan result of an object to be measured, wherein the scan result includes a two-dimensional image and/or a three-dimensional model; calling an intelligent recognition function to identify the scanning result and obtain a classification result, wherein the intelligent recognition function is a classification model obtained by training picture samples; and determining, based on the classification result, invalid data in the scanning result, wherein the invalid data is the scan result of a non-target area of the object to be measured.
  • the scanning result is the two-dimensional image
  • the two-dimensional image includes a texture image
  • the classification result includes: first image data in the texture image corresponding to the target area in the object to be measured, and second image data corresponding to the non-target area in the object to be measured.
  • the scan result further includes a reconstructed image corresponding to the texture image, and determining invalid data in the scan result based on the classification result includes: constructing a three-dimensional point cloud based on the reconstructed image, and determining an invalid point cloud based on the correspondence between the reconstructed image and the texture image, wherein the invalid point cloud is the point cloud corresponding to the second image data in the three-dimensional point cloud; deleting the invalid point cloud from the three-dimensional point cloud; and splicing the remaining point cloud in the three-dimensional point cloud to obtain a valid three-dimensional model of the object to be measured.
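The deletion of the invalid point cloud via the pixel-to-point correspondence can be sketched as follows (hypothetical names; the per-pixel label mask stands in for the classification into first and second image data):

```python
import numpy as np

def remove_invalid_points(points, pixel_indices, label_mask, invalid_label=0):
    """Drop 3D points whose source pixel was classified as non-target.

    points        : N x 3 reconstructed point cloud
    pixel_indices : N x 2 (row, col) pixel in the texture image each point came from
    label_mask    : H x W per-pixel class labels from the recognition model
    """
    labels = label_mask[pixel_indices[:, 0], pixel_indices[:, 1]]
    return points[labels != invalid_label]
```

The remaining points returned here are what the method then splices into the valid three-dimensional model.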
  • the scan result is the three-dimensional model
  • calling an intelligent recognition function to identify the scan result and obtain a classification result further comprises: acquiring the three-dimensional point cloud data used for reconstructing the three-dimensional model; calling the intelligent recognition function to analyse the three-dimensional point cloud data and identify its classification result, wherein the classification result includes: first point cloud data in the three-dimensional point cloud data corresponding to the target area in the object to be measured, and second point cloud data corresponding to the non-target area in the object to be measured.
  • the point cloud data of the valid area in the three-dimensional model is determined by deleting the invalid data in the three-dimensional point cloud data.
  • before acquiring the three-dimensional point cloud data for reconstructing the three-dimensional model, the method further includes: collecting a two-dimensional image of the object to be measured; reconstructing three-dimensional point cloud data based on the two-dimensional image; and splicing the reconstructed three-dimensional point cloud data to obtain the three-dimensional model.
  • before acquiring the scanning result of the object to be measured, the method further includes: starting and initializing a scanning process and an AI recognition process, wherein the scanning process is used to scan the object to be measured, and the AI recognition process is used to identify and classify the scan results.
  • the scanning process sends a processing instruction to the AI recognition process, wherein the AI recognition process invokes the intelligent recognition function based on the processing instruction to recognize the scanning result.
  • the AI recognition process runs in parallel, and when the operating environment satisfies a predetermined condition, the AI recognition process initializes the recognition algorithm; the intelligent recognition function is executed only after the processing instruction is received and the recognition algorithm is successfully initialized.
  • a scanning result processing device comprising: an acquisition unit for acquiring a scanning result of an object to be measured, wherein the scanning result includes a two-dimensional image and/or a three-dimensional model;
  • a first identification unit for invoking an intelligent identification function to identify the scan result and obtain a classification result, wherein the intelligent identification function is a classification model obtained by training picture samples;
  • a first determination unit for determining, based on the classification result, invalid data in the scanning result, wherein the invalid data is the scanning result of the non-target area in the object to be measured.
  • a storage medium includes a stored program, wherein the program executes any one of the processing methods.
  • a processor is also provided, and the processor is used for running a program, wherein any one of the processing methods is executed when the program is running.
  • a scanning system including a scanner and a scanning result processing apparatus, where the scanning result processing apparatus is configured to execute any one of the processing methods.
  • the scanning result of the object to be measured is obtained, that is, the two-dimensional image and/or the three-dimensional model obtained by the scanner is acquired; then the intelligent recognition function is called to identify the scanning result and obtain a classification result, that is, the trained classification model classifies the 2D image and/or the 3D model; finally, based on the classification result, the invalid data in the scan result is determined, that is, invalid data in the 2D image and/or 3D model is removed according to the classification result, so that there is no need to pause the scan to identify invalid data through data analysis, thereby improving scanning efficiency.
  • FIG. 1 shows a flowchart of a method for processing a scan result according to an embodiment of the present application
  • Figure 2 shows a schematic diagram of a three-dimensional model of teeth and gums according to an embodiment of the present application
  • FIG. 3 shows a flowchart of an AI recognition process according to an embodiment of the present application
  • FIG. 4 shows a flowchart of the startup of the scanning process and the AI recognition process according to an embodiment of the present application
  • FIG. 5 shows a flowchart of establishing a three-dimensional model of an object to be measured according to an embodiment of the present application.
  • FIG. 6 shows a schematic diagram of an apparatus for processing scan results according to an embodiment of the present application.
  • the scanning result processing method in the prior art determines invalid data through data analysis, which leads to low scanning efficiency.
  • a typical implementation of the present application provides a scanning result processing method, apparatus, computer-readable storage medium, processor and scanning system.
  • a method for processing a scan result is provided.
  • FIG. 1 is a flowchart of a method for processing a scan result according to an embodiment of the present application. As shown in Figure 1, the method includes the following steps:
  • Step S101: acquiring a scan result of the object to be measured, wherein the scan result includes a two-dimensional image and/or a three-dimensional model;
  • Step S102: calling an intelligent identification function to identify the above-mentioned scanning result and obtain a classification result, wherein the above-mentioned intelligent identification function is a classification model obtained by training picture samples;
  • Step S103: determining, based on the classification result, invalid data in the scanning result, wherein the invalid data is the scanning result of the non-target area in the object to be measured.
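Steps S101 to S103 can be sketched as a single pass (hypothetical function names; `classifier` stands in for the trained classification model of S102):

```python
def process_scan_result(scan_result, classifier, target_labels):
    """S101-S103 in one pass: classify each element of the acquired scan
    result, then flag as invalid every element whose label is non-target."""
    # S102: the trained classification model labels each element
    labels = [classifier(elem) for elem in scan_result]
    # S103: elements belonging to non-target areas are the invalid data
    valid = [elem for elem, lab in zip(scan_result, labels)
             if lab in target_labels]
    invalid = [elem for elem, lab in zip(scan_result, labels)
               if lab not in target_labels]
    return valid, invalid
```

In the intraoral case the elements would be image pixels or point cloud points, and the target labels would correspond to teeth and gums.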
  • the scanning result of the object to be measured is obtained, that is, the two-dimensional image and/or the three-dimensional model obtained by the scanner is acquired; then the intelligent recognition function is called to identify the scanning result and obtain a classification result, that is, the trained classification model classifies the 2D image and/or the 3D model; finally, based on the classification result, the invalid data in the scan result is determined, that is, invalid data in the 2D image and/or 3D model is removed according to the classification result, so that there is no need to pause the scan to identify invalid data through data analysis, thereby improving scanning efficiency.
  • the scanning result is the two-dimensional image
  • the two-dimensional image includes a texture image
  • the classification result includes: the first image data in the texture image corresponding to the target area in the object to be measured, and second image data corresponding to the non-target area in the object to be measured.
  • the texture image is recognized by the intelligent recognition function, so that the texture image is divided into first image data and second image data, where the first image data is the image data in the texture image corresponding to the target area in the object to be measured, and the second image data is the image data in the texture image corresponding to the non-target area in the object to be measured. The target area and non-target area are determined in advance; for example, when the object to be measured is the oral cavity, the target area includes teeth and gums, and non-target areas include the tongue, lips, buccal side, etc.
  • the first image data is image data corresponding to teeth and gums
  • the second image data is image data corresponding to areas such as tongue, lips, buccal, etc.
  • the gums can also be preset as non-target areas.
  • the classification results include teeth, gums, tongue, lips and buccal; alternatively, the classification results can include teeth, gums and other, with the tongue, lips and buccal classified as other; the classification results can also include target classes and non-target classes, classifying teeth and gums as target classes and the tongue, lips and buccal as non-target classes (i.e. other).
  • the two-dimensional image further includes a reconstructed image corresponding to the texture image, and determining invalid data in the scan result based on the classification result includes: constructing a three-dimensional point cloud based on the reconstructed image, and determining an invalid point cloud based on the correspondence between the reconstructed image and the texture image, wherein the invalid point cloud is the point cloud corresponding to the second image data in the three-dimensional point cloud; and deleting the invalid point cloud from the three-dimensional point cloud.
  • a three-dimensional point cloud is constructed based on the above-mentioned reconstructed image, and a point cloud corresponding to the above-mentioned second image data in the above-mentioned three-dimensional point cloud data is determined according to the corresponding relationship between the above-mentioned reconstructed image and the above-mentioned texture image, so as to determine an invalid point cloud and delete it.
  • the remaining point cloud can be spliced to obtain the valid 3D model of the object to be measured.
  • for example, after obtaining the remaining point cloud of the first frame, the remaining point cloud of the second frame is obtained based on the two-dimensional image of the second frame, and the remaining point clouds of the first frame and the second frame are spliced.
  • the object is the oral cavity
  • the target area includes teeth and gums
  • the non-target area includes the tongue, lips, buccal side, etc.; the point clouds corresponding to the tongue, lips, buccal areas, etc. are deleted, and the remaining point clouds are spliced to obtain the 3D model of the teeth and gums, as shown in FIG. 2.
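The splicing of the remaining per-frame point clouds might look like the following sketch (hypothetical names; a real scanner would first align the frames by registration, e.g. ICP, to obtain the per-frame poses assumed here):

```python
import numpy as np

def splice_frames(frames, poses):
    """Transform each frame's remaining point cloud into a common
    coordinate system using its 4x4 pose, then concatenate them."""
    merged = []
    for cloud, pose in zip(frames, poses):
        homo = np.hstack([cloud, np.ones((len(cloud), 1))])  # N x 4 homogeneous
        merged.append((homo @ pose.T)[:, :3])
    return np.vstack(merged)
```

Because invalid points were deleted per frame before this step, the merged model contains only the target areas (teeth and gums in the intraoral example).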
  • the scan result is the three-dimensional model
  • the intelligent recognition function is invoked to identify the scan result to obtain a classification result
  • the three-dimensional point cloud data used for reconstructing the three-dimensional model is identified by the intelligent recognition function, so that the three-dimensional point cloud data is divided into first point cloud data and second point cloud data, where the first point cloud data is the point cloud data in the three-dimensional point cloud data corresponding to the target area in the object to be measured, and the second point cloud data is the point cloud data corresponding to the non-target area in the object to be measured; for example, the object to be measured is the oral cavity,
  • the target area includes teeth and gums
  • the non-target area includes tongue, lips, buccal, etc.
  • the first point cloud data is the point cloud data corresponding to the teeth and gums
  • the second point cloud data is the point cloud data corresponding to the tongue, lips, buccal side, etc.
  • the classification results include teeth, gums, tongue, lips and buccal; alternatively, the classification results can include teeth, gums and other, with the tongue, lips and buccal classified as other; the classification results can also include target classes and non-target classes, classifying teeth and gums as target classes and the tongue, lips and buccal as non-target classes (i.e. other).
  • the point cloud data of the valid area in the above-mentioned three-dimensional model is determined by deleting the above-mentioned invalid data from the above-mentioned three-dimensional point cloud data.
  • the invalid data in the three-dimensional model is deleted, so as to determine the valid area in the three-dimensional model, that is, the area corresponding to the object to be measured.
  • before acquiring the three-dimensional point cloud data for reconstructing the three-dimensional model, the method further includes: collecting a two-dimensional image of the object to be measured by scanning it; reconstructing three-dimensional point cloud data from the two-dimensional image; and splicing the reconstructed three-dimensional point cloud data to obtain the three-dimensional model, wherein the pixels of the two-dimensional image have a corresponding relationship with the three-dimensional point cloud data.
  • before acquiring the three-dimensional point cloud data for reconstructing the above-mentioned three-dimensional model, the three-dimensional point cloud data corresponding to the object to be measured is reconstructed from the collected two-dimensional image, and this point cloud data can be spliced to obtain a three-dimensional model of the object to be measured.
  • before acquiring the scanning result of the object to be measured, the method further includes: starting and initializing a scanning process and an AI recognition process, wherein the scanning process is used to scan the object to be measured, and the AI recognition process is used to identify and classify the scan results.
  • the scanning process executes the scanning of the object to be measured, and obtains the scanning result
  • the AI recognition process recognizes and classifies the above-mentioned scanning results to obtain the classification result, and initialization clears the previous scan results and classification results to avoid interfering with the current processing.
  • the scanning process sends a processing instruction to the AI recognition process, wherein the AI recognition process invokes the intelligent recognition function based on the processing instruction to recognize the scanning result.
  • the scanning process sends a processing instruction to the AI recognition process, so that the AI recognition process invokes the intelligent recognition function to recognize the scanning result.
  • the above-mentioned AI recognition process runs in parallel, and when the operating environment satisfies a predetermined condition, the AI recognition process initializes the recognition algorithm; the intelligent recognition function is executed after the processing instruction is received and the recognition algorithm is successfully initialized.
  • the AI recognition process runs in parallel.
  • the AI recognition process initializes the recognition algorithm to avoid the subsequent failure of calling the intelligent recognition function.
  • the AI recognition process performs the intelligent recognition function to recognize the scan result only when a processing instruction has been received and the recognition algorithm has been successfully initialized; before initialization succeeds, even if a processing instruction is received, the AI recognition process will not perform the intelligent recognition function.
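The gating described above, where recognition runs only once the algorithm is initialized and a processing instruction has arrived, can be sketched as follows (hypothetical class name; a single-process stand-in for the two-process design):

```python
class AIRecognitionProcess:
    """Recognition is gated on both successful algorithm initialization
    and a received processing instruction, mirroring the two conditions."""

    def __init__(self):
        self.initialized = False
        self.pending_instruction = None

    def initialize_algorithm(self, environment_ok):
        # Initialize only when the operating environment meets the condition
        self.initialized = bool(environment_ok)
        return self.initialized

    def on_instruction(self, instruction):
        self.pending_instruction = instruction

    def try_recognize(self, recognize_fn):
        # Before initialization succeeds, an instruction alone does nothing
        if not (self.initialized and self.pending_instruction):
            return None
        result = recognize_fn(self.pending_instruction)
        self.pending_instruction = None
        return result
```

In the patent's design these conditions live in a separate OS process; here they are collapsed into one object purely to show the control flow.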
  • the AI recognition process runs as an independent process. Compared with setting AI recognition and scanning in the same process to be executed serially, making the AI recognition process and the scanning process independent of each other makes the logic clearer, reduces the software coupling of the AI recognition function, and makes it easy to maintain and modify; when requirements are added or modified, only the corresponding communication protocol needs to be added or modified, so scalability is high.
  • the AI recognition function depends on the hardware configuration of the host, and requires some checks and algorithm initialization. It takes some time to start, and it is more reasonable in terms of software structure as a separate process.
  • the above-mentioned scanning process establishes communication with the above-mentioned AI recognition process, and exchanges data in a shared memory manner.
  • the above-mentioned AI recognition process includes the following steps: reading the texture image obtained by the scanning process, storing the read image data in the shared memory, inputting the image data into the AI recognition algorithm, and outputting the result labels and writing them into the shared memory, where the result labels correspond one-to-one with the points corresponding to the image data.
  • the object to be measured is the oral cavity
  • the target area includes teeth and gums
  • the result label is 0 for other
  • the result label is 1 for teeth
  • the result label is 2 for gums.
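Under the labelling scheme above (0 = other, 1 = teeth, 2 = gums, as stated in the document; the helper function is hypothetical), splitting labelled points might look like:

```python
LABEL_OTHER, LABEL_TEETH, LABEL_GUMS = 0, 1, 2
TARGET_LABELS = {LABEL_TEETH, LABEL_GUMS}

def split_by_label(points_with_labels):
    """Split (point, label) pairs into target-area points (teeth/gums)
    and invalid points (label 0, i.e. tongue/lips/buccal etc.)."""
    valid = [p for p, lab in points_with_labels if lab in TARGET_LABELS]
    invalid = [p for p, lab in points_with_labels if lab == LABEL_OTHER]
    return valid, invalid
```

If the gums were preset as a non-target area instead, only `TARGET_LABELS` would need to change.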
  • the steps for starting the above-mentioned scanning process and AI recognition process are as follows: when the scanning process is started, the AI recognition process is started; the scanning process and the AI recognition process are initialized and establish communication; after the connection is confirmed, the scanning process obtains the scanning result and sends a processing instruction to the AI recognition process; the AI recognition process invokes the intelligent recognition function based on the processing instruction to identify the scanning result and writes the identified result labels into the shared memory; and the scanning process applies the result labels to process the scanning result.
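A minimal single-machine sketch of the shared-memory exchange between the two processes (Python's `multiprocessing.shared_memory` stands in for the document's unspecified IPC mechanism; all names are hypothetical):

```python
import numpy as np
from multiprocessing import shared_memory

def write_labels(name, labels):
    """AI recognition side: publish the per-point result labels into a
    named shared-memory segment for the scanning process to consume."""
    shm = shared_memory.SharedMemory(create=True, size=labels.nbytes, name=name)
    buf = np.ndarray(labels.shape, dtype=labels.dtype, buffer=shm.buf)
    buf[:] = labels
    return shm  # keep a handle so the segment stays alive

def read_labels(name, shape, dtype):
    """Scanning side: attach to the same segment and copy the labels out."""
    shm = shared_memory.SharedMemory(name=name)
    labels = np.ndarray(shape, dtype=dtype, buffer=shm.buf).copy()
    shm.close()
    return labels
```

In the real system the two functions would run in the two separate processes, with the processing instruction telling the reader when fresh labels are available.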
  • the steps of establishing a three-dimensional model of the object to be measured are as follows: acquire a frame of image data and reconstruct three-dimensional point cloud data based on the image data; if reconstruction succeeds, enable the AI intelligent recognition function, otherwise return to acquire the next frame of image data; if enabling the AI intelligent recognition function fails, the 3D point cloud data is directly spliced to obtain the 3D model of the object to be measured; if it is enabled successfully, obtain the AI recognition result; if obtaining the result times out, return to acquire the next frame of image data; if there is no timeout, use the AI recognition result to process the 3D point cloud data, delete the invalid point cloud, and splice the remaining point clouds to obtain the 3D model of the object to be measured.
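The per-frame loop above can be sketched as follows (all callables are hypothetical; `get_ai_result` returning `None` models the timeout case described in the document):

```python
def build_model(frames, reconstruct, ai_enabled, get_ai_result, remove_invalid):
    """Per-frame loop: reconstruct, optionally filter via the AI result,
    then splice the surviving clouds into the model."""
    model = []
    for frame in frames:
        cloud = reconstruct(frame)
        if cloud is None:          # reconstruction failed: next frame
            continue
        if not ai_enabled():       # AI unavailable: splice unfiltered cloud
            model.append(cloud)
            continue
        result = get_ai_result(frame)
        if result is None:         # timed out: return to the next frame
            continue
        model.append(remove_invalid(cloud, result))
    return model
```

Note that when the AI function cannot be enabled, scanning degrades gracefully: the model is still built, just without automatic removal of invalid data.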
  • the embodiment of the present application further provides a scanning result processing apparatus. It should be noted that the scanning result processing apparatus of the embodiment of the present application may be used to execute the scanning result processing method provided by the embodiment of the present application. The apparatus for processing the scan result provided by the embodiment of the present application is introduced below.
  • FIG. 6 is a schematic diagram of an apparatus for processing a scan result according to an embodiment of the present application. As shown in Figure 6, the device includes:
  • an acquisition unit 10 configured to acquire a scan result of the object to be measured, wherein the scan result includes a two-dimensional image and/or a three-dimensional model;
  • the first identification unit 20 is used for calling an intelligent identification function to identify the above-mentioned scanning result and obtain a classification result, wherein the above-mentioned intelligent identification function is a classification model obtained by training a picture sample;
  • the first determining unit 30 is configured to determine invalid data in the above-mentioned scanning result based on the above-mentioned classification result, wherein the above-mentioned invalid data is the scanning result of the non-target area in the above-mentioned object to be measured.
  • the acquisition unit acquires the scanning result of the object to be measured, that is, acquires the two-dimensional image and/or the three-dimensional model scanned by the scanner; the recognition unit invokes the intelligent recognition function to recognize the scanning result and obtain a classification result, that is, the trained classification model classifies the two-dimensional image and/or the three-dimensional model; and the processing unit determines the invalid data in the scan result based on the classification result, that is, the invalid data in the two-dimensional image and/or the three-dimensional model is determined according to the classification result, so that there is no need to pause the scan to identify invalid data through data analysis, thereby improving scanning efficiency.
  • the scanning result is the two-dimensional image
  • the two-dimensional image includes a texture image
  • the classification result includes: the first image data in the texture image corresponding to the target area in the object to be measured, and second image data corresponding to the non-target area in the object to be measured.
  • the texture image is recognized by the intelligent recognition function, so that the texture image is divided into first image data and second image data, where the first image data is the image data in the texture image corresponding to the target area in the object to be measured, and the second image data is the image data in the texture image corresponding to the non-target area in the object to be measured. The target area and non-target area are determined in advance; for example, when the object to be measured is the oral cavity, the target area includes teeth and gums, and non-target areas include the tongue, lips, buccal side, etc.
  • the first image data is image data corresponding to teeth and gums
  • the second image data is image data corresponding to areas such as tongue, lips, buccal, etc.
  • the target area and non-target area can be adjusted.
  • the gum can also be preset as the non-target area.
  • the classification results include teeth, gums, tongue, lips and buccal.
  • alternatively, the classification results can include teeth, gums and other, with the tongue, lips and buccal classified as other; the classification results can also include target classes and non-target classes, classifying teeth and gums as target classes and the tongue, lips and buccal as non-target classes (i.e. other).
  • the two-dimensional image further includes a reconstructed image corresponding to the texture image
  • the processing unit includes a determination module, a first processing module, and a second processing module, wherein the determination module is configured to construct a three-dimensional point cloud based on the above-mentioned reconstructed image and determine an invalid point cloud based on the correspondence between the reconstructed image and the texture image, wherein the invalid point cloud is the point cloud corresponding to the second image data in the three-dimensional point cloud;
  • the above-mentioned first processing module is used for deleting invalid point clouds in the above-mentioned three-dimensional point cloud;
  • the above-mentioned second processing module is used for splicing the above-mentioned valid three-dimensional model of the object to be measured based on the remaining point clouds in the above-mentioned three-dimensional point cloud.
  • a three-dimensional point cloud is constructed based on the above-mentioned reconstructed image, and a point cloud corresponding to the above-mentioned second image data in the above-mentioned three-dimensional point cloud data is determined according to the corresponding relationship between the above-mentioned reconstructed image and the above-mentioned texture image, so as to determine an invalid point cloud and delete it.
  • the remaining point cloud can be spliced to obtain the valid 3D model of the object to be measured.
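The deletion step above amounts to masking reconstructed points by their per-pixel classification. A minimal sketch, assuming an illustrative label convention (True = pixel classified as target such as tooth/gum, False = non-target such as tongue/lip/cheek); all names here are hypothetical, not taken from the patent text:

```python
def remove_invalid_points(points, labels):
    """Keep only points whose corresponding pixels were labeled target."""
    return [p for p, is_target in zip(points, labels) if is_target]

frame_points = [(0.0, 0.0, 1.0),   # tooth
                (0.1, 0.2, 1.1),   # tongue -> invalid
                (0.5, 0.5, 0.9)]   # gum
frame_labels = [True, False, True]
remaining = remove_invalid_points(frame_points, frame_labels)
# remaining holds the first and third points only
```

In the described flow, each frame's remaining cloud would then be spliced (registered) with the remaining clouds of earlier frames to grow the model.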
  • For example, the remaining point cloud of the first frame is obtained from the first frame of the two-dimensional image, the remaining point cloud of the second frame is obtained from the second frame, and the two remaining point clouds are spliced together.
  • More specifically, when the object is the oral cavity, the target areas include teeth and gums, and the non-target areas include the tongue, lips, buccal sides, etc., the points corresponding to the tongue, lips, buccal sides, and similar areas are deleted, and the remaining points are spliced into a 3D model of the teeth and gums, as shown in Fig. 2.
  • When the three-dimensional model of the object to be measured is reconstructed successfully, the scan result is the three-dimensional model.
  • The device further includes a second recognition unit comprising a first acquisition module and a recognition module: the first acquisition module is used to acquire the three-dimensional point cloud data used to reconstruct the three-dimensional model, and the recognition module is used to invoke the intelligent recognition function to analyze the three-dimensional point cloud data and recognize its classification result.
  • The classification result includes: first point cloud data in the three-dimensional point cloud data corresponding to the target areas of the object to be measured, and second point cloud data corresponding to the non-target areas.
  • The three-dimensional point cloud data used to reconstruct the model is thus recognized by the intelligent recognition function and divided into first point cloud data and second point cloud data, where the first point cloud data is the point cloud data corresponding to the target areas of the object to be measured and the second point cloud data is the point cloud data corresponding to the non-target areas.
  • For example, when the object to be measured is the oral cavity, the target areas include teeth and gums and the non-target areas include the tongue, lips, buccal sides, etc.; the first point cloud data corresponds to the teeth and gums, and the second point cloud data corresponds to the tongue, lips, and similar areas.
  • The classification result may include teeth, gums, tongue, lips, and buccal sides; it may also include teeth, gums, and "other", classifying the tongue, lips, and buccal sides as other; or it may include a target class and a non-target class, classifying teeth and gums as the target class and the tongue, lips, and buccal sides as the non-target class (i.e., other).
  • The apparatus further includes a second determination unit, configured to determine the point cloud data of the valid areas of the three-dimensional model by deleting the invalid data from the three-dimensional point cloud data when the second point cloud data is determined to be the invalid data.
  • Specifically, after the second point cloud data is determined to be invalid data, the invalid data is deleted from the three-dimensional model, thereby determining the valid areas of the model, i.e., the areas corresponding to the object to be measured.
  • The device further includes a reconstruction unit comprising a second acquisition module and a reconstruction module: the second acquisition module is used to collect two-dimensional images of the object to be measured by scanning it, before the three-dimensional point cloud data used to reconstruct the three-dimensional model is acquired; the reconstruction module is used to reconstruct the three-dimensional point cloud data from the two-dimensional images and to splice the reconstructed point cloud data into the three-dimensional model, where the pixels of the two-dimensional images correspond to the three-dimensional point cloud data.
  • Before the three-dimensional point cloud data used to reconstruct the model is acquired, the point cloud data corresponding to the object to be measured is thus reconstructed from the collected two-dimensional images, and splicing that point cloud data yields the three-dimensional model of the object to be measured.
  • The device further includes a control unit, used to start and initialize a scanning process and an AI recognition process before the scan result of the object to be measured is acquired, where the scanning process is used to scan the object to be measured and the AI recognition process is used to recognize and classify the scan results.
  • The scanning process scans the object to be measured and obtains the scan result; the AI recognition process recognizes and classifies the scan result and obtains the classification result; and the initialization step clears previous scan and classification results to avoid interfering with the current processing.
  • The control unit includes a first control module, configured to monitor, during initialization of the scanning process and the AI recognition process, whether the two processes have successfully established communication. After the connection is confirmed, if a scan result is detected, the scanning process sends a processing instruction to the AI recognition process, and the AI recognition process, based on the processing instruction, invokes the intelligent recognition function to recognize the scan result.
  • In other words, the scanning process sends a processing instruction to the AI recognition process, causing the AI recognition process to invoke the intelligent recognition function to recognize the scan result.
  • The control unit includes a second control module, used to monitor whether the scanning process and the AI recognition process have successfully established communication while the AI recognition process runs in parallel.
  • The AI recognition process initializes the recognition algorithm and executes the intelligent recognition function only after the processing instruction is received and the algorithm has been successfully initialized. Specifically, after the AI recognition process is started, it runs in parallel.
  • When the runtime environment meets predetermined conditions, the AI recognition process initializes the recognition algorithm, to avoid later failures when the intelligent recognition function is invoked. Only after the algorithm is successfully initialized and a processing instruction is received does the AI recognition process execute the intelligent recognition function to recognize the scan result; that is, before the algorithm is successfully initialized, the AI recognition process does not execute the intelligent recognition function even if a processing instruction is received.
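The guard described above — act on a processing instruction only once the recognition algorithm is initialized — can be sketched as a message loop. This is a minimal illustration only: the patent describes two independent processes exchanging data over shared memory, whereas threads and queues stand in for them here, and all names are hypothetical.

```python
import queue
import threading

def ai_worker(inbox, outbox):
    algorithm_ready = True  # stands in for successful recognition-algorithm init
    while True:
        msg = inbox.get()
        if msg == "stop":
            break
        # A "process" instruction is only acted on once the algorithm is ready.
        if msg == "process" and algorithm_ready:
            outbox.put("labels-written")

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=ai_worker, args=(inbox, outbox))
worker.start()
inbox.put("process")    # scanning side detected a scan result
result = outbox.get()   # AI side acknowledges after writing result labels
inbox.put("stop")
worker.join()
# result == "labels-written"
```

If `algorithm_ready` were still False, the instruction would simply be ignored, matching the behavior described above.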
  • The AI recognition process runs as an independent process; alternatively, AI recognition and scanning can be placed in the same process and executed serially.
  • Compared with serializing AI recognition and scanning together, keeping the AI recognition process and the scanning process independent makes the logic clearer.
  • With AI recognition as a separate functional module, software coupling is reduced and the module is easy to maintain and modify.
  • To add or modify a requirement, only the corresponding communication protocol needs to be added or modified, giving high extensibility.
  • Since the AI recognition function depends on the host hardware configuration, requires some checks and algorithm initialization, and takes some time to start, running it as a separate process is also more reasonable in terms of software structure.
  • A scanning system includes a scanner and a scan-result processing apparatus, where the processing apparatus is configured to execute any one of the above processing methods.
  • The acquisition unit acquires the scan result of the object to be measured, i.e., the two-dimensional image and/or three-dimensional model obtained by the scanner.
  • The recognition unit invokes the intelligent recognition function to recognize the scan result and obtain the classification result, i.e., the two-dimensional image and/or three-dimensional model is classified by the trained classification model.
  • The processing unit determines the invalid data in the scan result based on the classification result, i.e., determines the invalid data in the two-dimensional image and/or three-dimensional model according to the classification result, so that it is unnecessary to pause scanning to determine invalid data through data analysis, which improves scanning efficiency.
  • The processing apparatus includes a processor and a memory.
  • The acquisition unit, the first recognition unit, the first determination unit, and so on are all stored in the memory as program units, and the processor executes the program units stored in the memory to implement the corresponding functions.
  • The processor contains a kernel, and the kernel fetches the corresponding program units from the memory.
  • One or more kernels can be provided; by adjusting kernel parameters, the prior-art problem of low scanning efficiency caused by determining invalid data through data analysis in scan-result processing is solved.
  • The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
  • An embodiment of the present invention provides a storage medium on which a program is stored; when the program is executed by a processor, the above processing method is implemented.
  • An embodiment of the present invention provides a processor configured to run a program, where the program, when running, executes the above processing method.
  • An embodiment of the present invention provides a device including a processor, a memory, and a program stored on the memory and runnable on the processor; the processor, when executing the program, implements at least the following steps:
  • Step S101: acquire a scan result of the object to be measured, where the scan result includes a two-dimensional image and/or a three-dimensional model;
  • Step S102: invoke an intelligent recognition function to recognize the scan result and obtain a classification result, where the intelligent recognition function is a classification model obtained by training on picture samples;
  • Step S103: determine invalid data in the scan result based on the classification result, where the invalid data is the scan result of non-target areas of the object to be measured.
  • The device herein may be a server, a PC, a PAD, a mobile phone, and so on.
  • This application also provides a computer program product which, when executed on a data processing device, is adapted to execute a program initialized with at least the following method steps:
  • Step S101: acquire a scan result of the object to be measured, where the scan result includes a two-dimensional image and/or a three-dimensional model;
  • Step S102: invoke an intelligent recognition function to recognize the scan result and obtain a classification result, where the intelligent recognition function is a classification model obtained by training on picture samples;
  • Step S103: determine invalid data in the scan result based on the classification result, where the invalid data is the scan result of non-target areas of the object to be measured.
  • The disclosed technical content can be implemented in other ways.
  • The device embodiments described above are only illustrative.
  • The division of the units may be a division by logical function; other divisions are possible in actual implementation.
  • For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual coupling, direct coupling, or communication connections shown or discussed may be indirect coupling or communication connections through interfaces, units, or modules, and may be electrical or in other forms.
  • The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • The functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • The technical solution of the present invention, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present invention.
  • The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disc.
  • The scan result of the object to be measured is acquired, i.e., the two-dimensional image and/or three-dimensional model obtained by the scanner; the intelligent recognition function is then invoked to recognize the scan result and obtain the classification result.
  • The 2D image and/or 3D model is classified by the trained classification model; finally, invalid data in the scan result is determined based on the classification result, i.e., the invalid data in the 2D image and/or 3D model is removed according to the classification result, so that it is unnecessary to pause scanning to determine invalid data through data analysis, which improves scanning efficiency.
  • In the processing apparatus, the acquisition unit acquires the scan result of the object to be measured, i.e., the two-dimensional image and/or three-dimensional model obtained by the scanner; the recognition unit invokes the intelligent recognition function to recognize the scan result and obtain the classification result, i.e., the 2D image and/or 3D model is classified by the trained classification model; and the processing unit determines the invalid data in the scan result based on the classification result, i.e., determines the invalid data in the 2D image and/or 3D model according to the classification result, so that it is unnecessary to pause scanning to determine invalid data through data analysis, which improves scanning efficiency.
  • The solution provided by the embodiments of this application thus acquires the two-dimensional image and/or three-dimensional model obtained by the scanner, classifies it with the trained classification model, and removes the invalid data according to the classification result, so that it is unnecessary to pause scanning to determine invalid data through data analysis. This improves scanning efficiency and solves the prior-art problem that processing methods for scan results pause scanning to analyze the scan result and determine invalid data, resulting in low scanning efficiency.


Abstract

A processing method and apparatus for scan results, a processor, and a scanning system. The processing method includes: acquiring a scan result of an object to be measured, where the scan result includes a two-dimensional image and/or a three-dimensional model (S101); invoking an intelligent recognition function to recognize the scan result and obtain a classification result, where the intelligent recognition function is a classification model obtained by training on picture samples (S102); and determining invalid data in the scan result based on the classification result, where the invalid data is the scan result of non-target areas of the object to be measured (S103).

Description

Processing method and apparatus for scan results, processor, and scanning system

This application claims priority to Chinese patent application No. 202011057266.1, entitled "Processing method and apparatus for scan results, processor, and scanning system", filed with the China National Intellectual Property Administration on September 29, 2020, the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the technical field of scanners, and in particular to a processing method and apparatus for scan results, a computer-readable storage medium, a processor, and a scanning system.
Background

In digital dental restoration design, a patient's intraoral tooth data is acquired with an intraoral scanner, which mainly collects tooth and gum data. The oral environment is relatively cramped, so images of the tongue, lips, buccal sides, and auxiliary tools are easily captured during scanning, leaving invalid data in the generated model. This has the following main effects:

1. During scanning, when non-tooth, non-gum data appears, it interferes with the splicing of newly scanned data; for example, new data may be difficult to splice in some areas.

2. In mesh optimization, invalid data separated from the teeth by some distance can be deleted by the relevant algorithms, whereas connected parts must be deleted manually or they will be retained, adding inspection, selection, and deletion steps and reducing overall scanning efficiency.

The digital model is generated as: raw depth map → point cloud data (reconstruction) → mesh model (fusion). Therefore, when a frame contains images of the lips or buccal sides, the fused mesh will contain the corresponding triangle patches, i.e., unwanted stray data.

During intraoral data acquisition, the confined intraoral space makes non-tooth, non-gum data very likely to appear; this data shows up in the scan result and affects both the scanning of new data and overall data optimization. Scanning tooth data is a real-time, continuous process with a relatively high frame rate, so stray data also disrupts the operator's scanning workflow. Most of the stray data can be removed with algorithms such as island removal or strong connectivity, but such processing is relatively time-consuming and can only run intermittently. Stray data that the algorithms cannot remove must be deleted manually.

Existing methods for removing stray data mainly analyze the already-generated data: they exploit the connectivity of data blocks (points), e.g., whether isolated blocks (points) exist, or use constraints or policies to find data points deemed invalid, and then delete those points from the overall data.

However, intraoral scanning has relatively high real-time requirements. As the overall mesh data grows, running the deletion/optimization algorithm requires pausing the scan and waiting for optimization to finish, which hurts scanning efficiency. Moreover, the algorithms embody fixed policies and are relatively rigid: they only handle data in common situations, and invalid data in some special situations cannot be removed.

The information disclosed in this Background section is only intended to enhance understanding of the background of the technology described herein; the Background may therefore contain information that does not constitute prior art already known in this country to a person skilled in the art.
Summary

The main purpose of this application is to provide a processing method and apparatus for scan results, a computer-readable storage medium, a processor, and a scanning system, so as to solve the problem in the prior art that determining invalid data through data analysis in scan-result processing leads to low scanning efficiency.

According to one aspect of the embodiments of the present invention, a processing method for scan results is provided, including: acquiring a scan result of an object to be measured, where the scan result includes a two-dimensional image and/or a three-dimensional model; invoking an intelligent recognition function to recognize the scan result and obtain a classification result, where the intelligent recognition function is a classification model obtained by training on picture samples; and determining invalid data in the scan result based on the classification result, where the invalid data is the scan result of non-target areas of the object to be measured.

Optionally, the scan result is the two-dimensional image, the two-dimensional image includes a texture image, and the classification result includes: first image data in the texture image corresponding to target areas of the object to be measured, and second image data corresponding to non-target areas of the object to be measured.

Optionally, the scan result further includes a reconstructed image corresponding to the texture image, and determining invalid data in the scan result based on the classification result includes: constructing a three-dimensional point cloud based on the reconstructed image, and determining invalid points based on the correspondence between the reconstructed image and the texture image, where the invalid points are those in the three-dimensional point cloud corresponding to the second image data; deleting the invalid points from the three-dimensional point cloud; and splicing the remaining points of the three-dimensional point cloud to obtain a valid three-dimensional model of the object to be measured.

Optionally, when the three-dimensional model of the object to be measured is successfully reconstructed, the scan result is the three-dimensional model, and invoking the intelligent recognition function to recognize the scan result and obtain a classification result further includes: acquiring the three-dimensional point cloud data used to reconstruct the three-dimensional model; and invoking the intelligent recognition function to analyze the three-dimensional point cloud data and recognize its classification result, where the classification result includes: first point cloud data in the three-dimensional point cloud data corresponding to target areas of the object to be measured, and second point cloud data corresponding to non-target areas of the object to be measured.

Optionally, when the second point cloud data is determined to be the invalid data, the point cloud data of the valid areas of the three-dimensional model is determined by deleting the invalid data from the three-dimensional point cloud data.

Optionally, before acquiring the three-dimensional point cloud data used to reconstruct the three-dimensional model, the method further includes: collecting a two-dimensional image of the object to be measured; reconstructing three-dimensional point cloud data from the two-dimensional image, and splicing the reconstructed three-dimensional point cloud data to obtain the three-dimensional model.

Optionally, before acquiring the scan result of the object to be measured, the method further includes: starting and initializing a scanning process and an AI recognition process, where the scanning process is used to scan the object to be measured and the AI recognition process is used to recognize and classify the scan result.

Optionally, during initialization of the scanning process and the AI recognition process, whether the two processes have successfully established communication is monitored; after the connection is confirmed, if a scan result is detected, the scanning process sends a processing instruction to the AI recognition process, and the AI recognition process, based on the processing instruction, invokes the intelligent recognition function to recognize the scan result.

Optionally, while monitoring whether the scanning process and the AI recognition process have successfully established communication, the AI recognition process runs in parallel; when the runtime environment meets predetermined conditions, the AI recognition process initializes the recognition algorithm, and executes the intelligent recognition function only after the processing instruction is received and the recognition algorithm has been successfully initialized.

According to another aspect of the embodiments of the present invention, a processing apparatus for scan results is provided, including: an acquisition unit that acquires a scan result of an object to be measured, where the scan result includes a two-dimensional image and/or a three-dimensional model; a first recognition unit that invokes an intelligent recognition function to recognize the scan result and obtain a classification result, where the intelligent recognition function is a classification model obtained by training on picture samples; and a first determination unit that determines invalid data in the scan result based on the classification result, where the invalid data is the scan result of non-target areas of the object to be measured.

According to a further aspect of the embodiments of the present invention, a storage medium is provided, including a stored program, where the program executes any one of the processing methods described above.

According to yet another aspect of the embodiments of the present invention, a processor is provided, where the processor is configured to run a program, and the program, when running, executes any one of the processing methods described above.

According to a further aspect of the embodiments of the present invention, a scanning system is provided, including a scanner and a processing apparatus for scan results, where the processing apparatus is configured to execute any one of the processing methods described above.

In the above processing method, first, a scan result of the object to be measured is acquired, i.e., the two-dimensional image and/or three-dimensional model obtained by the scanner; then, the intelligent recognition function is invoked to recognize the scan result and obtain a classification result, i.e., the two-dimensional image and/or three-dimensional model is classified by the trained classification model; finally, invalid data in the scan result is determined based on the classification result, i.e., invalid data is removed from the two-dimensional image and/or three-dimensional model according to the classification result. It is therefore unnecessary to pause scanning to determine invalid data through data analysis, which improves scanning efficiency.
Brief Description of the Drawings

The accompanying drawings, which form a part of this application, are provided for further understanding of the application; the illustrative embodiments of the application and their descriptions explain the application and do not unduly limit it. In the drawings:

Fig. 1 shows a flowchart of a processing method for scan results according to an embodiment of this application;

Fig. 2 shows a schematic diagram of a three-dimensional model of teeth and gums according to an embodiment of this application;

Fig. 3 shows a flowchart of an AI recognition process according to an embodiment of this application;

Fig. 4 shows a flowchart of the startup of a scanning process and an AI recognition process according to an embodiment of this application;

Fig. 5 shows a flowchart of building a three-dimensional model of an object to be measured according to an embodiment of this application; and

Fig. 6 shows a schematic diagram of a processing apparatus for scan results according to an embodiment of this application.
Detailed Description

It should be noted that, where no conflict arises, the embodiments of this application and the features therein may be combined with one another. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.

To enable those skilled in the art to better understand the solution of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the scope of protection of this application.

It should be noted that the terms "first", "second", etc. in the description, claims, and drawings of this application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of this application described here can be implemented. Furthermore, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to such process, method, product, or device.

It should be understood that when an element (such as a layer, film, region, or substrate) is described as being "on" another element, it may be directly on that other element, or intermediate elements may also be present. Moreover, in the description and the claims, when an element is described as being "connected" to another element, it may be "directly connected" to the other element, or "connected" to it through a third element.
As stated in the Background, processing methods for scan results in the prior art determine invalid data through data analysis, resulting in low scanning efficiency. To solve this problem, a typical embodiment of this application provides a processing method and apparatus for scan results, a computer-readable storage medium, a processor, and a scanning system.

According to an embodiment of this application, a processing method for scan results is provided.

Fig. 1 is a flowchart of a processing method for scan results according to an embodiment of this application. As shown in Fig. 1, the method includes the following steps:

Step S101: acquire a scan result of an object to be measured, where the scan result includes a two-dimensional image and/or a three-dimensional model;

Step S102: invoke an intelligent recognition function to recognize the scan result and obtain a classification result, where the intelligent recognition function is a classification model obtained by training on picture samples;

Step S103: determine invalid data in the scan result based on the classification result, where the invalid data is the scan result of non-target areas of the object to be measured.
In the above processing method, first, a scan result of the object to be measured is acquired, i.e., the two-dimensional image and/or three-dimensional model obtained by the scanner; then, the intelligent recognition function is invoked to recognize the scan result and obtain a classification result, i.e., the two-dimensional image and/or three-dimensional model is classified by the trained classification model; finally, invalid data in the scan result is determined based on the classification result, i.e., invalid data is removed from the two-dimensional image and/or three-dimensional model according to the classification result. It is therefore unnecessary to pause scanning to determine invalid data through data analysis, which improves scanning efficiency.

It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that shown here.
In one embodiment of this application, the scan result is the two-dimensional image, the two-dimensional image includes a texture image, and the classification result includes: first image data in the texture image corresponding to target areas of the object to be measured, and second image data corresponding to non-target areas of the object to be measured. Specifically, the texture image is recognized by the intelligent recognition function and thereby divided into first image data and second image data, where the first image data is the image data in the texture image corresponding to the target areas and the second image data is the image data corresponding to the non-target areas. The target and non-target areas are determined by presets; for example, when the object to be measured is the oral cavity, the target areas include teeth and gums and the non-target areas include the tongue, lips, buccal sides, etc., so the first image data corresponds to the teeth and gums and the second image data corresponds to the tongue, lips, buccal sides, and similar areas. Of course, the target and non-target areas can be adjusted as needed; for example, when only tooth data is required, the gums can also be preset as a non-target area. The classification result may include teeth, gums, tongue, lips, and buccal sides; it may also include teeth, gums, and "other", classifying the tongue, lips, and buccal sides as other; or it may include a target class and a non-target class, classifying teeth and gums as the target class and the tongue, lips, and buccal sides as the non-target class (i.e., other).

In one embodiment of this application, the two-dimensional image further includes a reconstructed image corresponding to the texture image, and determining invalid data in the scan result based on the classification result includes: constructing a three-dimensional point cloud based on the reconstructed image, and determining invalid points based on the correspondence between the reconstructed image and the texture image, where the invalid points are those in the three-dimensional point cloud corresponding to the second image data; deleting the invalid points from the three-dimensional point cloud; and splicing the remaining points to obtain a valid three-dimensional model of the object to be measured. Specifically, a three-dimensional point cloud is constructed based on the reconstructed image, and the points in the point cloud corresponding to the second image data are determined from the correspondence between the reconstructed image and the texture image, thereby determining the invalid points. After the invalid points are deleted, the remaining points can be spliced to obtain the valid three-dimensional model of the object to be measured. For example, a first frame and a second frame of two-dimensional images are acquired, the remaining point cloud of the first frame is obtained from the first frame and the remaining point cloud of the second frame from the second frame, and the two remaining point clouds are spliced. More specifically, when the object to be measured is the oral cavity, the target areas include teeth and gums, and the non-target areas include the tongue, lips, buccal sides, etc., the points corresponding to the tongue, lips, buccal sides, and similar areas are deleted, and the remaining points are spliced into a three-dimensional model of the teeth and gums, as shown in Fig. 2.
In one embodiment of this application, when the three-dimensional model of the object to be measured is successfully reconstructed, the scan result is the three-dimensional model, and invoking the intelligent recognition function to recognize the scan result and obtain a classification result further includes: acquiring the three-dimensional point cloud data used to reconstruct the three-dimensional model; and invoking the intelligent recognition function to analyze the three-dimensional point cloud data and recognize its classification result, where the classification result includes first point cloud data corresponding to target areas of the object to be measured and second point cloud data corresponding to non-target areas. Specifically, the three-dimensional point cloud data used to reconstruct the model is recognized by the intelligent recognition function and thereby divided into first and second point cloud data, where the first point cloud data corresponds to the target areas of the object to be measured and the second to the non-target areas. For example, when the object to be measured is the oral cavity, the target areas include teeth and gums and the non-target areas include the tongue, lips, buccal sides, etc.; the first point cloud data corresponds to the teeth and gums, and the second to the tongue, lips, buccal sides, and similar areas. The classification result may include teeth, gums, tongue, lips, and buccal sides; it may also include teeth, gums, and "other", classifying the tongue, lips, and buccal sides as other; or it may include a target class and a non-target class, classifying teeth and gums as the target class and the tongue, lips, and buccal sides as the non-target class (i.e., other).

In one embodiment of this application, when the second point cloud data is determined to be the invalid data, the point cloud data of the valid areas of the three-dimensional model is determined by deleting the invalid data from the three-dimensional point cloud data. Specifically, after the second point cloud data is determined to be invalid data, the invalid data is deleted from the three-dimensional model, thereby determining the valid areas of the model, i.e., the areas corresponding to the object to be measured.

In one embodiment of this application, before acquiring the three-dimensional point cloud data used to reconstruct the three-dimensional model, the method further includes: collecting two-dimensional images of the object to be measured by scanning it; reconstructing three-dimensional point cloud data from the two-dimensional images, and splicing the reconstructed point cloud data to obtain the three-dimensional model, where the pixels of the two-dimensional images correspond to the three-dimensional point cloud data. Specifically, before the point cloud data used to reconstruct the model is acquired, the point cloud data corresponding to the object to be measured is reconstructed from the collected two-dimensional images, and the three-dimensional model of the object can then be obtained by splicing that point cloud data.
In one embodiment of this application, before acquiring the scan result of the object to be measured, the method further includes: starting and initializing a scanning process and an AI recognition process, where the scanning process is used to scan the object to be measured and the AI recognition process is used to recognize and classify the scan result. Specifically, before the scan result is acquired, the scanning process and the AI recognition process are started and initialized; the scanning process scans the object to obtain the scan result, and the AI recognition process recognizes and classifies the scan result to obtain the classification result. The initialization step clears previous scan and classification results so that they do not interfere with the current processing.

In one embodiment of this application, during initialization of the scanning process and the AI recognition process, whether the two processes have successfully established communication is monitored; after the connection is confirmed, if a scan result is detected, the scanning process sends a processing instruction to the AI recognition process, and the AI recognition process, based on the processing instruction, invokes the intelligent recognition function to recognize the scan result. Specifically, during initialization, monitoring confirms whether the two processes have established communication; if not, a communication connection is set up between them. Once the connection succeeds and a scan result is detected, the scanning process sends a processing instruction to the AI recognition process, causing it to invoke the intelligent recognition function to recognize the scan result.

In one embodiment of this application, while monitoring whether the scanning process and the AI recognition process have successfully established communication, the AI recognition process runs in parallel; when the runtime environment meets predetermined conditions, the AI recognition process initializes the recognition algorithm and executes the intelligent recognition function only after the processing instruction is received and the algorithm has been successfully initialized. Specifically, after the AI recognition process is started, it runs in parallel. When the runtime environment meets predetermined conditions, it initializes the recognition algorithm to avoid later failures when the intelligent recognition function is invoked. Only after the algorithm is successfully initialized and a processing instruction is received does the AI recognition process execute the intelligent recognition function to recognize the scan result; that is, before the algorithm is successfully initialized, the process does not execute the intelligent recognition function even if a processing instruction is received.

It should be noted that the AI recognition process runs as an independent process; of course, AI recognition and scanning could also be placed in the same process and executed serially. Compared with serializing AI recognition and scanning together, keeping the AI recognition process and the scanning process independent makes the logic clearer; with AI recognition as a separate functional module, software coupling is reduced and maintenance and modification are easier; and to add or modify a requirement, only the corresponding communication protocol needs to be added or modified, giving high extensibility. Since the AI recognition function depends on the host hardware configuration, requires some checks and algorithm initialization, and thus takes some time to start, running it as a separate process is also more reasonable in terms of software structure.
In actual operation, the scanning process and the AI recognition process establish communication and exchange data via shared memory. In a specific embodiment of this application, as shown in Fig. 3, the AI recognition process includes the following steps: read the texture image acquired by the scanning process and store the read image data in shared memory; feed the image data into the AI recognition algorithm; and output result labels and write them into shared memory, where the result labels correspond one-to-one with the points of the image data. For example, when the object to be measured is the oral cavity and the target areas include teeth and gums, a result label of 0 denotes other, a result label of 1 denotes teeth, and a result label of 2 denotes gums.
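The 0/1/2 label convention above can be captured in a few lines. This is an illustrative sketch only; the mapping values come from the example in the description, while the function and variable names are hypothetical:

```python
# Result labels as given in the example: 0 = other, 1 = teeth, 2 = gums.
LABEL_NAMES = {0: "other", 1: "teeth", 2: "gums"}

def is_valid(label):
    """Teeth and gums are the target classes; everything else is invalid."""
    return label in (1, 2)

# One label per image point, as written to shared memory by the AI process.
per_point_labels = [0, 1, 2, 2, 0, 1]
valid_flags = [is_valid(label) for label in per_point_labels]
# valid_flags marks which points the scanning process should keep
```

The scanning process would then apply these flags when deciding which reconstructed points to splice into the model.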
In a specific embodiment of this application, as shown in Fig. 4, the scanning process and the AI recognition process are started as follows: when the scanning process starts, it launches the AI recognition process, i.e., starts the AI recognition process; the two processes initialize and establish communication; after the connection is confirmed, the scanning process acquires a scan result and sends a processing instruction to the AI recognition process; the AI recognition process, based on the processing instruction, invokes the intelligent recognition function to recognize the scan result and writes the recognized result labels into shared memory; and the scanning process applies the result labels to process the scan result.
In a specific embodiment of this application, as shown in Fig. 5, a three-dimensional model of the object to be measured is built as follows: acquire a frame of image data and reconstruct three-dimensional point cloud data from it; if reconstruction succeeds, enable the AI intelligent recognition function, otherwise return to acquire the next frame. If enabling the AI intelligent recognition function fails, splice the point cloud data directly to obtain the three-dimensional model of the object to be measured; if it succeeds, obtain the AI recognition result. If obtaining the result times out, return to acquire the next frame; otherwise, apply the AI recognition result to the point cloud data, delete the invalid points, and splice the remaining points to obtain the three-dimensional model of the object to be measured.
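The per-frame flow of Fig. 5 can be sketched as a small function. This is a minimal illustration under stated assumptions: `reconstruct` and `classify` are hypothetical stand-ins for the scanner's reconstruction step and the AI recognition call (with `None` standing in for a failure or timeout), and `model` is simply a growing list rather than a real spliced mesh:

```python
def process_frame(frame, classify, reconstruct, model):
    points = reconstruct(frame)        # 2D frame -> 3D point cloud
    if points is None:                 # reconstruction failed:
        return model                   # wait for the next frame
    labels = classify(frame)           # AI result; None models a timeout
    if labels is None:                 # no result in time: splice unfiltered
        model.extend(points)
        return model
    # keep only points whose pixels were classified as target
    model.extend(p for p, ok in zip(points, labels) if ok)
    return model

model = []
frame = [(0, 0, 1.0), (1, 1, 1.2), (2, 2, 0.8)]
model = process_frame(frame,
                      classify=lambda f: [True, False, True],
                      reconstruct=lambda f: list(f),
                      model=model)
# model now holds the two target points of this frame
```

Note the graceful degradation: when recognition is unavailable or slow, scanning continues with unfiltered splicing instead of pausing, which is the efficiency point the description makes.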
An embodiment of this application further provides a processing apparatus for scan results. It should be noted that the processing apparatus of this embodiment can be used to execute the processing method for scan results provided by the embodiments of this application. The processing apparatus provided by the embodiments of this application is described below.

Fig. 6 is a schematic diagram of a processing apparatus for scan results according to an embodiment of this application. As shown in Fig. 6, the apparatus includes:

an acquisition unit 10 for acquiring a scan result of an object to be measured, where the scan result includes a two-dimensional image and/or a three-dimensional model;

a first recognition unit 20 for invoking an intelligent recognition function to recognize the scan result and obtain a classification result, where the intelligent recognition function is a classification model obtained by training on picture samples; and

a first determination unit 30 for determining invalid data in the scan result based on the classification result, where the invalid data is the scan result of non-target areas of the object to be measured.

In the above processing apparatus, the acquisition unit acquires the scan result of the object to be measured, i.e., the two-dimensional image and/or three-dimensional model obtained by the scanner; the recognition unit invokes the intelligent recognition function to recognize the scan result and obtain a classification result, i.e., the two-dimensional image and/or three-dimensional model is classified by the trained classification model; and the processing unit determines invalid data in the scan result based on the classification result, i.e., determines the invalid data in the two-dimensional image and/or three-dimensional model according to the classification result. It is therefore unnecessary to pause scanning to determine invalid data through data analysis, which improves scanning efficiency.
In one embodiment of this application, the scan result is the two-dimensional image, the two-dimensional image includes a texture image, and the classification result includes: first image data in the texture image corresponding to target areas of the object to be measured, and second image data corresponding to non-target areas. Specifically, the texture image is recognized by the intelligent recognition function and thereby divided into first image data and second image data, where the first image data corresponds to the target areas and the second image data to the non-target areas. The target and non-target areas are determined by presets; for example, when the object to be measured is the oral cavity, the target areas include teeth and gums and the non-target areas include the tongue, lips, buccal sides, etc., so the first image data corresponds to the teeth and gums and the second image data to the tongue, lips, buccal sides, and similar areas. Of course, the target and non-target areas can be adjusted as needed; for example, when only tooth data is required, the gums can also be preset as a non-target area. The classification result may include teeth, gums, tongue, lips, and buccal sides; it may also include teeth, gums, and "other", classifying the tongue, lips, and buccal sides as other; or it may include a target class and a non-target class, classifying teeth and gums as the target class and the tongue, lips, and buccal sides as the non-target class (i.e., other).

In one embodiment of this application, the two-dimensional image further includes a reconstructed image corresponding to the texture image, and the processing unit includes a determination module, a first processing module, and a second processing module, where the determination module is configured to construct a three-dimensional point cloud based on the reconstructed image and determine invalid points based on the correspondence between the reconstructed image and the texture image, the invalid points being those in the three-dimensional point cloud corresponding to the second image data; the first processing module is used to delete the invalid points from the three-dimensional point cloud; and the second processing module is used to splice the remaining points into a valid three-dimensional model of the object to be measured. Specifically, a three-dimensional point cloud is constructed based on the reconstructed image, and the points corresponding to the second image data are determined from the correspondence between the reconstructed image and the texture image, thereby determining the invalid points. After the invalid points are deleted, the remaining points can be spliced to obtain the valid three-dimensional model of the object to be measured. For example, a first frame and a second frame of two-dimensional images are acquired, the remaining point cloud of the first frame is obtained from the first frame and the remaining point cloud of the second frame from the second frame, and the two remaining point clouds are spliced. More specifically, when the object to be measured is the oral cavity, the target areas include teeth and gums, and the non-target areas include the tongue, lips, buccal sides, etc., the points corresponding to the tongue, lips, buccal sides, and similar areas are deleted, and the remaining points are spliced into a three-dimensional model of the teeth and gums, as shown in Fig. 2.

In one embodiment of this application, when the three-dimensional model of the object to be measured is successfully reconstructed, the scan result is the three-dimensional model, and the apparatus further includes a second recognition unit comprising a first acquisition module and a recognition module, where the first acquisition module is used to acquire the three-dimensional point cloud data used to reconstruct the three-dimensional model, and the recognition module is used to invoke the intelligent recognition function to analyze the three-dimensional point cloud data and recognize its classification result, the classification result including first point cloud data corresponding to target areas of the object to be measured and second point cloud data corresponding to non-target areas. Specifically, the three-dimensional point cloud data used to reconstruct the model is recognized by the intelligent recognition function and thereby divided into first and second point cloud data, where the first point cloud data corresponds to the target areas and the second to the non-target areas. For example, when the object to be measured is the oral cavity, the target areas include teeth and gums and the non-target areas include the tongue, lips, buccal sides, etc.; the first point cloud data corresponds to the teeth and gums, and the second to the tongue, lips, buccal sides, and similar areas. The classification result may include teeth, gums, tongue, lips, and buccal sides; it may also include teeth, gums, and "other", classifying the tongue, lips, and buccal sides as other; or it may include a target class and a non-target class, classifying teeth and gums as the target class and the tongue, lips, and buccal sides as the non-target class (i.e., other).

In one embodiment of this application, the apparatus further includes a second determination unit configured to determine, when the second point cloud data is determined to be the invalid data, the point cloud data of the valid areas of the three-dimensional model by deleting the invalid data from the three-dimensional point cloud data. Specifically, after the second point cloud data is determined to be invalid data, the invalid data is deleted from the three-dimensional model, thereby determining the valid areas of the model, i.e., the areas corresponding to the object to be measured.

In one embodiment of this application, the apparatus further includes a reconstruction unit comprising a second acquisition module and a reconstruction module, where the second acquisition module is used to collect two-dimensional images of the object to be measured by scanning it, before the three-dimensional point cloud data used to reconstruct the three-dimensional model is acquired, and the reconstruction module is used to reconstruct three-dimensional point cloud data from the two-dimensional images and splice the reconstructed point cloud data to obtain the three-dimensional model, the pixels of the two-dimensional images corresponding to the three-dimensional point cloud data. Specifically, before the point cloud data used to reconstruct the model is acquired, the point cloud data corresponding to the object to be measured is reconstructed from the collected two-dimensional images, and the three-dimensional model of the object can then be obtained by splicing that point cloud data.
In one embodiment of this application, the apparatus further includes a control unit, used to start and initialize a scanning process and an AI recognition process before the scan result of the object to be measured is acquired, where the scanning process is used to scan the object to be measured and the AI recognition process is used to recognize and classify the scan result. Specifically, before the scan result is acquired, the scanning process and the AI recognition process are started and initialized; the scanning process scans the object to obtain the scan result, and the AI recognition process recognizes and classifies the scan result to obtain the classification result. The initialization step clears previous scan and classification results so that they do not interfere with the current processing.

In one embodiment of this application, the control unit includes a first control module, configured to monitor, during initialization of the scanning process and the AI recognition process, whether the two processes have successfully established communication; after the connection is confirmed, if a scan result is detected, the scanning process sends a processing instruction to the AI recognition process, and the AI recognition process, based on the processing instruction, invokes the intelligent recognition function to recognize the scan result. Specifically, during initialization, monitoring confirms whether the two processes have established communication; if not, a communication connection is set up between them. Once the connection succeeds and a scan result is detected, the scanning process sends a processing instruction to the AI recognition process, causing it to invoke the intelligent recognition function to recognize the scan result.

In one embodiment of this application, the control unit includes a second control module, used to monitor whether the scanning process and the AI recognition process have successfully established communication while the AI recognition process runs in parallel; when the runtime environment meets predetermined conditions, the AI recognition process initializes the recognition algorithm and executes the intelligent recognition function only after the processing instruction is received and the algorithm has been successfully initialized. Specifically, after the AI recognition process is started, it runs in parallel. When the runtime environment meets predetermined conditions, it initializes the recognition algorithm to avoid later failures when the intelligent recognition function is invoked. Only after the algorithm is successfully initialized and a processing instruction is received does the AI recognition process execute the intelligent recognition function to recognize the scan result; that is, before the algorithm is successfully initialized, the process does not execute the intelligent recognition function even if a processing instruction is received.

It should be noted that the AI recognition process runs as an independent process; of course, AI recognition and scanning could also be placed in the same process and executed serially. Compared with serializing AI recognition and scanning together, keeping the AI recognition process and the scanning process independent makes the logic clearer; with AI recognition as a separate functional module, software coupling is reduced and maintenance and modification are easier; and to add or modify a requirement, only the corresponding communication protocol needs to be added or modified, giving high extensibility. Since the AI recognition function depends on the host hardware configuration, requires some checks and algorithm initialization, and thus takes some time to start, running it as a separate process is also more reasonable in terms of software structure.
According to an embodiment of the present invention, a scanning system is further provided, including a scanner and a scan result processing apparatus, where the scan result processing apparatus is configured to execute any one of the processing methods described above.
In the above scanning system, which includes the scanner and the scan result processing apparatus, the acquisition unit acquires the scan result of the object under test, i.e., the two-dimensional image and/or three-dimensional model obtained by the scanner; the recognition unit invokes the intelligent recognition function to recognize the scan result and obtain the classification result, i.e., the two-dimensional image and/or three-dimensional model is classified by the trained classification model; and the processing unit determines the invalid data in the scan result based on the classification result, i.e., the invalid data in the two-dimensional image and/or three-dimensional model is determined according to the classification result. Thus there is no need to pause scanning to determine invalid data through data analysis, which improves scanning efficiency.
The above processing apparatus includes a processor and a memory. The acquisition unit, the first recognition unit, the first determination unit, and so on are all stored in the memory as program units, and the processor executes these program units stored in the memory to implement the corresponding functions.
The processor includes a kernel, which retrieves the corresponding program units from the memory. One or more kernels may be provided; by adjusting the kernel parameters, the problem in the prior art that scan result processing methods determine invalid data through data analysis, resulting in low scanning efficiency, is solved.
The memory may include non-persistent memory in computer-readable media, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
An embodiment of the present invention provides a storage medium on which a program is stored, where the program, when executed by a processor, implements the above processing method.
An embodiment of the present invention provides a processor configured to run a program, where the program, when run, executes the above processing method.
An embodiment of the present invention provides a device, including a processor, a memory, and a program stored on the memory and runnable on the processor, where the processor implements at least the following steps when executing the program:
Step S101: acquiring a scan result of an object under test, where the scan result includes a two-dimensional image and/or a three-dimensional model;
Step S102: invoking an intelligent recognition function to recognize the scan result and obtain a classification result, where the intelligent recognition function is a classification model obtained by training on picture samples;
Step S103: determining invalid data in the scan result based on the classification result, where the invalid data is the scan result of a non-target region of the object under test.
The device herein may be a server, a PC, a PAD, a mobile phone, or the like.
The present application further provides a computer program product which, when executed on a data processing device, is adapted to execute a program initialized with at least the following method steps:
Step S101: acquiring a scan result of an object under test, where the scan result includes a two-dimensional image and/or a three-dimensional model;
Step S102: invoking an intelligent recognition function to recognize the scan result and obtain a classification result, where the intelligent recognition function is a classification model obtained by training on picture samples;
Step S103: determining invalid data in the scan result based on the classification result, where the invalid data is the scan result of a non-target region of the object under test.
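Steps S101 to S103 can be strung together as a minimal pipeline. The scanner stub and the threshold "classifier" below are illustrative placeholders, since the patent leaves the actual classification model abstract; a real system would use a model trained on picture samples:

```python
import numpy as np

def acquire_scan_result():
    """S101: stand-in for the scanner; returns a tiny 'texture image'."""
    return np.array([[10, 10, 200, 200],
                     [10, 10, 200, 200]], dtype=np.uint8)

def intelligent_recognition(image):
    """S102: stub classification; here bright pixels are called
    non-target. A trained classifier would replace this threshold."""
    return image > 128  # True = non-target region

def determine_invalid_data(image, classification):
    """S103: the non-target portion of the scan result is invalid data."""
    return image[classification]

scan = acquire_scan_result()                     # S101
classification = intelligent_recognition(scan)   # S102
invalid = determine_invalid_data(scan, classification)  # S103
print(invalid.size)  # 4 invalid pixels
```

Because the classification runs alongside acquisition, scanning never has to pause for a separate data-analysis pass, which is the efficiency gain the text claims.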
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units may be a division by logical function, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connections shown or discussed may be indirect coupling or communication connections through some interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
From the above description, it can be seen that the above embodiments of the present application achieve the following technical effects:
1) In the processing method of the present application, first, the scan result of the object under test is acquired, i.e., the two-dimensional image and/or three-dimensional model obtained by the scanner; then, the intelligent recognition function is invoked to recognize the scan result and obtain the classification result, i.e., the two-dimensional image and/or three-dimensional model is classified by the trained classification model; finally, the invalid data in the scan result is determined based on the classification result, i.e., the invalid data in the two-dimensional image and/or three-dimensional model is removed according to the classification result. Thus there is no need to pause scanning to determine invalid data through data analysis, which improves scanning efficiency.
2) In the processing apparatus of the present application, the acquisition unit acquires the scan result of the object under test, i.e., the two-dimensional image and/or three-dimensional model obtained by the scanner; the recognition unit invokes the intelligent recognition function to recognize the scan result and obtain the classification result, i.e., the two-dimensional image and/or three-dimensional model is classified by the trained classification model; and the processing unit determines the invalid data in the scan result based on the classification result, i.e., the invalid data in the two-dimensional image and/or three-dimensional model is determined according to the classification result. Thus there is no need to pause scanning to determine invalid data through data analysis, which improves scanning efficiency.
3) In the scanning system of the present application, which includes the scanner and the scan result processing apparatus, the acquisition unit acquires the scan result of the object under test, i.e., the two-dimensional image and/or three-dimensional model obtained by the scanner; the recognition unit invokes the intelligent recognition function to recognize the scan result and obtain the classification result, i.e., the two-dimensional image and/or three-dimensional model is classified by the trained classification model; and the processing unit determines the invalid data in the scan result based on the classification result, i.e., the invalid data in the two-dimensional image and/or three-dimensional model is determined according to the classification result. Thus there is no need to pause scanning to determine invalid data through data analysis, which improves scanning efficiency.
The above are only preferred embodiments of the present application and are not intended to limit the present application; for those skilled in the art, the present application may have various changes and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the scope of protection of the present application.
Industrial Applicability
The solution provided by the embodiments of the present application acquires the two-dimensional image and/or three-dimensional model obtained by the scanner, classifies the two-dimensional image and/or three-dimensional model by the trained classification model, and removes the invalid data from the two-dimensional image and/or three-dimensional model according to the classification result. Thus there is no need to pause scanning to determine invalid data through data analysis, which improves scanning efficiency and solves the problem in the prior art that scan result processing methods pause scanning to determine invalid data through data analysis of the scan result, resulting in low scanning efficiency.

Claims (13)

  1. A method for processing a scan result, comprising:
    acquiring a scan result of an object under test, wherein the scan result comprises a two-dimensional image and/or a three-dimensional model;
    invoking an intelligent recognition function to recognize the scan result to obtain a classification result, wherein the intelligent recognition function is a classification model obtained by training on picture samples;
    determining invalid data in the scan result based on the classification result, wherein the invalid data is the scan result of a non-target region of the object under test.
  2. The method according to claim 1, wherein the scan result is the two-dimensional image, the two-dimensional image comprises a texture image, and the classification result comprises: first image data in the texture image corresponding to a target region of the object under test, and second image data corresponding to a non-target region of the object under test.
  3. The method according to claim 2, wherein the two-dimensional image further comprises a reconstruction image corresponding to the texture image, and determining the invalid data in the scan result based on the classification result comprises:
    constructing a three-dimensional point cloud based on the reconstruction image, and determining an invalid point cloud based on the correspondence between the reconstruction image and the texture image, wherein the invalid point cloud is the point cloud in the three-dimensional point cloud corresponding to the second image data;
    deleting the invalid point cloud from the three-dimensional point cloud;
    stitching the remaining point cloud of the three-dimensional point cloud to obtain a valid three-dimensional model of the object under test.
  4. The method according to claim 1, wherein when the three-dimensional model of the object under test is successfully reconstructed, the scan result is the three-dimensional model, and invoking the intelligent recognition function to recognize the scan result to obtain the classification result further comprises:
    acquiring three-dimensional point cloud data used to reconstruct the three-dimensional model;
    invoking the intelligent recognition function to analyze the three-dimensional point cloud data and recognize a classification result of the three-dimensional point cloud data, wherein
    the classification result comprises: first point cloud data in the three-dimensional point cloud data corresponding to a target region of the object under test, and second point cloud data corresponding to a non-target region of the object under test.
  5. The method according to claim 4, wherein in a case where the second point cloud data is determined to be the invalid data, point cloud data of a valid region in the three-dimensional model is determined by deleting the invalid data from the three-dimensional point cloud data.
  6. The method according to claim 5, wherein before the three-dimensional point cloud data used to reconstruct the three-dimensional model is acquired, the method further comprises:
    collecting two-dimensional images of the object under test;
    three-dimensionally reconstructing three-dimensional point cloud data based on the two-dimensional images, and stitching the reconstructed three-dimensional point cloud data to obtain the three-dimensional model.
  7. The method according to any one of claims 1 to 6, wherein before the scan result of the object under test is acquired, the method further comprises: starting and initializing a scanning process and an AI recognition process, wherein the scanning process is used to scan the object under test, and the AI recognition process is used to recognize and classify the scan result.
  8. The method according to claim 7, wherein during the initialization of the scanning process and the AI recognition process, whether the scanning process and the AI recognition process have successfully established communication is monitored, and after the connection is confirmed, if the scan result is detected, the scanning process sends a processing instruction to the AI recognition process, wherein the AI recognition process invokes the intelligent recognition function based on the processing instruction to recognize the scan result.
  9. The method according to claim 8, wherein while monitoring whether the scanning process and the AI recognition process have successfully established communication, the AI recognition process runs in parallel; when a running environment satisfies a predetermined condition, the AI recognition process initializes a recognition algorithm, and after receiving the processing instruction and after the recognition algorithm has been successfully initialized, executes the intelligent recognition function.
  10. An apparatus for processing a scan result, comprising:
    an acquisition unit configured to acquire a scan result of an object under test, wherein the scan result comprises a two-dimensional image and/or a three-dimensional model;
    a first recognition unit configured to invoke an intelligent recognition function to recognize the scan result to obtain a classification result, wherein the intelligent recognition function is a classification model obtained by training on picture samples;
    a first determination unit configured to determine invalid data in the scan result based on the classification result, wherein the invalid data is the scan result of a non-target region of the object under test.
  11. A computer-readable storage medium comprising a stored program, wherein the program executes the processing method according to any one of claims 1 to 9.
  12. A processor configured to run a program, wherein the program, when run, executes the processing method according to any one of claims 1 to 9.
  13. A scanning system, comprising a scanner and a scan result processing apparatus, wherein the scan result processing apparatus is configured to execute the processing method according to any one of claims 1 to 9.
PCT/CN2021/121752 2020-09-29 2021-09-29 Method and device for processing scan result, processor, and scanning system WO2022068883A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2023519520A JP2023543298A (ja) Scan result processing method, apparatus, processor, and scanning system
US18/027,933 US20230368460A1 (en) 2020-09-29 2021-09-29 Method and Device for Processing Scan Result, Processor and Scanning System
EP21874538.8A EP4224419A4 (en) 2020-09-29 2021-09-29 METHOD AND DEVICE FOR PROCESSING SCANNING RESULTS AND PROCESSOR AND SCANNING SYSTEM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011057266.1 2020-09-29
CN202011057266.1A CN114332970A (zh) Scan result processing method and device, processor, and scanning system

Publications (1)

Publication Number Publication Date
WO2022068883A1 true WO2022068883A1 (zh) 2022-04-07

Family

ID=80951201

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/121752 WO2022068883A1 (zh) Method and device for processing scan result, processor, and scanning system

Country Status (5)

Country Link
US (1) US20230368460A1 (zh)
EP (1) EP4224419A4 (zh)
JP (1) JP2023543298A (zh)
CN (1) CN114332970A (zh)
WO (1) WO2022068883A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116052850A (zh) * 2023-02-01 2023-05-02 南方医科大学珠江医院 Artificial-intelligence-based CT/MR imaging anatomical annotation and 3D modeling mapping teaching system

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN115527663A (zh) * 2022-09-30 2022-12-27 先临三维科技股份有限公司 Oral scanning data processing method, apparatus, and device
CN117414110B (zh) * 2023-12-14 2024-03-22 先临三维科技股份有限公司 Control method and apparatus for three-dimensional scanning device, terminal device, and system

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103558129A (zh) * 2013-11-22 2014-02-05 王学重 Probe-type online stereoscopic imaging detection system and method
CN108664937A (zh) * 2018-05-14 2018-10-16 宁波江丰生物信息技术有限公司 Multi-region scanning method based on a digital pathology slide scanner
CN110010249A (zh) * 2019-03-29 2019-07-12 北京航空航天大学 Augmented-reality surgical navigation method and system based on video overlay, and electronic device
CN110007764A (zh) * 2019-04-11 2019-07-12 北京华捷艾米科技有限公司 Gesture skeleton recognition method, apparatus, system, and storage medium
US20200113542A1 (en) * 2018-10-16 2020-04-16 General Electric Company Methods and system for detecting medical imaging scan planes using probe position feedback
CN111568376A (zh) * 2020-05-11 2020-08-25 四川大学 Direct three-dimensional scanning method and system for the physiological motion boundary of oral soft tissue

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US11744681B2 (en) * 2019-03-08 2023-09-05 Align Technology, Inc. Foreign object identification and image augmentation for intraoral scanning


Non-Patent Citations (1)

Title
See also references of EP4224419A4


Also Published As

Publication number Publication date
CN114332970A (zh) 2022-04-12
EP4224419A4 (en) 2024-01-24
JP2023543298A (ja) 2023-10-13
US20230368460A1 (en) 2023-11-16
EP4224419A1 (en) 2023-08-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21874538; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2023519520; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2021874538; Country of ref document: EP; Effective date: 20230502)