WO2023213253A1 - Scan data processing method and apparatus, electronic device, and medium - Google Patents


Info

Publication number
WO2023213253A1
Authority
WO
WIPO (PCT)
Prior art keywords
auxiliary feature point
data
three-dimensional coordinate
Prior art date
Application number
PCT/CN2023/091808
Other languages
English (en)
French (fr)
Inventor
赵晓波
陈晓军
马超
张伟
Original Assignee
先临三维科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 先临三维科技股份有限公司
Publication of WO2023213253A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • the present disclosure relates to the technical field of intraoral scanning, and in particular to a scan data processing method, apparatus, electronic device, and medium.
  • the relevant target location is determined by scanning.
  • the technical problem to be solved by this disclosure is the low overall accuracy of the model in existing scanning scenarios.
  • a scan data processing method including:
  • Target scan data is generated based on the three-dimensional coordinate points and point cloud data of the auxiliary feature points.
  • a scan data processing device including:
  • the image acquisition module is used to acquire multiple scanned images containing auxiliary feature points
  • An image processing module used to process the plurality of scanned images to obtain three-dimensional coordinate points of auxiliary feature points
  • a generation module is used to generate target scanning data based on the three-dimensional coordinate points and point cloud data of the auxiliary feature points.
  • an electronic device including:
  • the computer program is stored in the memory and configured to be executed by the processor to implement the above scan data processing method.
  • a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the steps of the above-mentioned scan data processing method are implemented.
  • the scan data processing scheme provided by the embodiment of the present disclosure acquires multiple scan images containing auxiliary feature points, processes the multiple scan images to obtain three-dimensional coordinate points of the auxiliary feature points, and generates target scan data based on the three-dimensional coordinate points and the point cloud data.
  • the above technical solution is adopted to improve the accuracy of scanning rod positioning information, thereby improving the efficiency and accuracy of scanning data processing in oral scanning scenarios.
  • Figure 1 is an application scenario diagram of scanning data processing provided by an embodiment of the present disclosure
  • Figure 2 is a schematic flowchart of a scan data processing method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of another scan data processing method provided by an embodiment of the present disclosure.
  • Figure 4 is a schematic flowchart of another scan data processing method provided by an embodiment of the present disclosure.
  • Figure 5 is a schematic structural diagram of a scan data processing device provided by an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 1 is an application scenario diagram of scanning data processing provided by an embodiment of the present disclosure.
  • the application environment includes: installing multiple intraoral scanning rods in the target oral cavity.
  • the intraoral scanning rod includes: a scanning rod component 11 and an auxiliary component 12 connected to the scanning rod component 11
  • auxiliary feature points are provided on the scanning rod component 11 and/or the auxiliary component 12.
  • the scanning rod component 11 is adapted to the implant installed in the target oral cavity.
  • the scanning rod component 11 is adapted to and installed with the implant, so that the intraoral scanning rod is installed in the target oral cavity.
  • the auxiliary parts 12 of any two intraoral scanning rods among the plurality of intraoral scanning rods are adapted to each other, so that when any two intraoral scanning rods 10 are installed adjacently in the oral cavity, the auxiliary feature points on the two auxiliary parts are continuously distributed.
  • the true value coordinate points of the auxiliary feature points are obtained in advance by, for example, an SLR photogrammetry system, a coordinate measuring machine, or a high-precision industrial 3D scanner.
  • the three-dimensional coordinate points corresponding to the scanned image correspond to the true value coordinate points of the pre-acquired auxiliary feature points.
  • the intraoral scanning rod includes a scanning rod component for connecting to the implant and an auxiliary component connected to the scanning rod.
  • the intraoral scanning rod is provided with target features, which are continuously distributed on the scanning rod and/or auxiliary components and are not confined to a single side of the scanning rod and/or auxiliary components.
  • the intraoral scanner scans the target oral cavity, acquires multiple frames of images, and transmits them to the data processing module for data processing.
  • the data processing module performs the following methods:
  • the initial three-dimensional data includes the initial point set of the target oral cavity under the same coordinate system and the three-dimensional coordinate measurement values of the target features;
  • the preset model includes the true three-dimensional coordinate value of the target feature in the same coordinate system and the true point set of the intraoral scanning rod (the true three-dimensional coordinate value of each point)
  • the initial point set of the target oral cavity and the true point set of the intraoral scanning rod are spliced;
  • the positioning information of the intraoral scanning rod is determined based on the real point set of the spliced intraoral scanning rod.
  • the positioning information of the intraoral scanning rod is the positioning information of the implant. Based on this positioning information, the tooth design is carried out so that the designed tooth fits and can be installed on the implant.
  • by scanning the target oral cavity, multiple scan images containing auxiliary feature points are obtained; the multiple scan images are processed to obtain the three-dimensional coordinate points of the auxiliary feature points, and target scan data is generated based on the three-dimensional coordinate points and the point cloud data of the auxiliary feature points.
  • the above technical solution is adopted to improve the accuracy of scanning rod positioning information, thereby improving the efficiency and accuracy of scanning data processing in oral scanning scenarios.
  • FIG. 2 is a schematic flowchart of a scan data processing method provided by an embodiment of the present disclosure.
  • the method can be executed by a scan data processing device, where the device can be implemented using software and/or hardware, and can generally be integrated in an electronic device. As shown in Figure 2, the method includes:
  • Step 101 Obtain multiple scanned images containing auxiliary feature points.
  • the target oral cavity refers to the oral cavity that requires dental implantation.
  • Intraoral scanning is required to locate the coordinates of specific points in the oral cavity.
  • the scanning rod is connected through the auxiliary feature body to perform intraoral scanning.
  • the scanning rod includes an auxiliary feature body
  • the auxiliary feature body can also have many shapes (such as spherical, square, rectangular, conical and other features or a combination of these features).
  • the scan rod is a feature object provided with auxiliary feature points, where each auxiliary feature point can uniquely identify a corresponding position feature on the scan rod.
  • For example, target feature a and target feature b are set at position 1 and position 2 on the scanning rod, respectively; target feature a uniquely identifies the position feature of position 1, and target feature b uniquely identifies the position feature of position 2.
  • For example, convex or concave spherical or square shapes can be set as auxiliary feature points on the scanning rod and its auxiliary feature bodies, different colors can be printed on them, or different QR code patterns and circles or squares of different colors can be printed on the auxiliary features; the settings can be selected according to the application needs.
  • after acquiring the scan rod data and auxiliary feature point data, the designed scan rod data needs to be aligned with the real-time acquired scan rod data through a splicing algorithm. Due to the non-rigid characteristics of intraoral data during oral scanning, the accuracy of single or multiple scanning rod data may deteriorate. Through the methods of the embodiments of the present disclosure, the accuracy of aligning the designed scan rod data to the real-time scan rod data can be improved, thereby improving the overall accuracy.
  • the target oral cavity can be scanned with a handheld oral scanner (monocular or binocular camera), that is, multiple scan images can be obtained by taking photos. For example, dozens of scan images can be collected in one second, and can be collected in a loop.
  • the auxiliary feature points refer to points set on the main body of the scanning rod and/or the auxiliary feature body, which are specifically set according to the application scenario.
  • the camera is controlled to rotate in a certain direction while scanning the target oral cavity at a certain frequency to obtain multiple scan images.
  • the target oral cavity including the scanning rod is scanned to obtain multiple scan images.
  • Step 102 Process multiple scanned images to obtain three-dimensional coordinate points of auxiliary feature points.
  • the three-dimensional coordinate point of the auxiliary feature point refers to the three-dimensional coordinate point corresponding to the auxiliary feature point in the target oral cavity.
  • three-dimensional reconstruction is performed based on each scanned image to obtain a reconstructed image sequence, and the reconstructed image sequence is calculated to obtain the three-dimensional coordinate points and point cloud data of the auxiliary feature points.
  • the two-dimensional coordinate points of the auxiliary feature points in each scanned image are obtained, and the coordinate system conversion is performed on the two-dimensional coordinate points to obtain the three-dimensional coordinate points of the auxiliary feature points in each scanned image.
  • the three-dimensional coordinate points of the auxiliary feature points are spliced to obtain the three-dimensional coordinate points and point cloud data of all the auxiliary feature points.
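For a calibrated binocular scanner, the 2D-to-3D coordinate conversion described above could be realized by linear triangulation of each auxiliary feature point detected in a pair of views. The sketch below is illustrative only: the projection matrices, intrinsics, and the feature point are invented values, not parameters from the disclosure.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two calibrated views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coordinates.
    Returns the 3D point in the common (world) coordinate system.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Illustrative setup: two cameras looking along +Z, baseline 10 mm on X.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-10.0], [0], [0]])])

X_true = np.array([5.0, 2.0, 100.0])  # a hypothetical auxiliary feature point
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # recovers X_true (approximately [5, 2, 100])
```

Running triangulation per frame yields the per-image 3D coordinate points, which are then spliced across frames as described above.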
  • the multiple scanned images can be processed to obtain three-dimensional coordinate points of auxiliary feature points.
  • Step 103 Generate target scanning data based on the three-dimensional coordinate points of the auxiliary feature points and point cloud data.
  • the three-dimensional coordinate points of the auxiliary feature points may be optimized three-dimensional coordinate points, which can more accurately reflect the three-dimensional coordinate points of the auxiliary feature points.
  • target scan data is generated based on the three-dimensional coordinate points of the auxiliary feature points and point cloud data, which can be understood as splicing the three-dimensional coordinate points of the auxiliary feature points with point cloud data such as teeth and gums to obtain the target scan data.
  • generating target scan data based on the three-dimensional coordinate points and point cloud data of the auxiliary feature points includes directly splicing the three-dimensional coordinate points and point cloud data of the auxiliary feature points to obtain the target scan data.
  • generating target scan data based on the three-dimensional coordinate points and point cloud data of the auxiliary feature points includes: generating target scan data based on the three-dimensional coordinate points, point cloud data and standard data of the auxiliary feature points.
  • standard data refers to scan data that can be spliced with the three-dimensional coordinate points of the auxiliary feature points. It can be scan data designed in advance through computer-aided design, or standard scan data obtained through, for example, an SLR photogrammetry system, a coordinate measuring machine, or a high-precision industrial three-dimensional scanner. The standard scan data can be obtained in advance and stored in a database so that it can be retrieved directly during processing to improve efficiency, or it can be measured in real time as the scenario requires. In the oral scanning scenario, the standard scan data can be standard scanning rod data.
  • the target scan data refers to the result of converting the computer-aided design scanning rod data into the same coordinate system as the three-dimensional coordinate points of the auxiliary feature points, replacing the scanning rod data scanned in real time.
  • the auxiliary feature points have corresponding true value coordinate points
  • the standard data is standard scanning rod data. It is judged whether the three-dimensional coordinate points of the auxiliary feature points can be spliced with the true value coordinate points of the standard auxiliary feature points. If so, the position transformation matrix between the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the auxiliary feature points is obtained. Based on the position transformation matrix, the standard scanning rod data is transferred to the auxiliary feature point coordinate system and then spliced with the three-dimensional coordinate points of the auxiliary feature points and the point cloud data to obtain the target scan data.
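Once the measured three-dimensional coordinate points are matched to the true value coordinate points, the position transformation matrix described above can be estimated with a standard SVD-based rigid alignment (the Kabsch procedure). This is a minimal sketch; the point sets and the simulated pose are invented for illustration.

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch: least-squares rotation R and translation t with dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of corresponding points.
    """
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# True value coordinate points of the auxiliary features (illustrative).
truth = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
# Simulate the scanner's measurement: the same points in another pose.
angle = np.radians(30)
R_gt = np.array([[np.cos(angle), -np.sin(angle), 0],
                 [np.sin(angle),  np.cos(angle), 0],
                 [0, 0, 1]])
measured = truth @ R_gt.T + np.array([1.0, 2.0, 3.0])

R, t = rigid_transform(truth, measured)
# The same (R, t) moves the standard scanning rod data into the measured
# auxiliary-feature coordinate system before splicing with the point cloud.
standard_rod = np.array([[5.0, 5.0, 0.0]])  # hypothetical design vertex
aligned_rod = standard_rod @ R.T + t
```

Applying (R, t) to every vertex of the standard scanning rod data places it in the same coordinate system as the scanned point cloud, after which the two can be spliced.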
  • the standard data is standard scanning rod data
  • multiple planes are fitted based on the three-dimensional coordinate points of the auxiliary feature points
  • the target geometry is constructed based on the multiple planes
  • at least three planes are obtained based on the standard scanning rod data
  • the normal vector of each plane and the intersection point of the at least three planes are obtained.
  • the position transformation matrix is obtained.
  • the standard scan rod data is transferred to the auxiliary feature point coordinate system, replacing the target geometry, and is spliced with the point cloud data to obtain target scan data.
  • the auxiliary feature points have corresponding true coordinate points
  • the standard data is standard scan rod data
  • the three-dimensional coordinate points of the auxiliary feature points are spliced with the true value coordinate points of the auxiliary feature points to obtain the three-dimensional coordinate points of the target auxiliary feature points in the same coordinate system, and the position transformation matrix between the three-dimensional coordinate points of the target auxiliary feature points and the corresponding true value coordinate points is obtained. Based on the position transformation matrix, the standard scanning rod data is transferred to the auxiliary feature point coordinate system and then spliced with the three-dimensional coordinate points of the auxiliary feature points and the point cloud data to obtain the target scan data.
  • the above three methods are only examples of generating target scan data based on the three-dimensional coordinate points, point cloud data and standard data of auxiliary feature points.
  • the embodiments of the present disclosure do not limit the specific method of generating target scan data based on the three-dimensional coordinate points, point cloud data, and standard data of the auxiliary feature points.
  • the target scanning data can be generated based on the three-dimensional coordinate points of the auxiliary feature points, point cloud data and standard data.
  • the scanning data processing scheme provided by the embodiment of the present disclosure acquires multiple scanned images, each of which includes auxiliary feature points; the multiple scanned images are processed to obtain the three-dimensional coordinate points of the auxiliary feature points, and target scan data is generated based on the three-dimensional coordinate points and the point cloud data.
  • the above technical solution is adopted to improve the alignment accuracy of the designed scanning rod data and real-time scanning rod data, thereby improving the efficiency and accuracy of scanning data processing in oral scanning scenarios.
  • FIG. 3 is a schematic flowchart of another scan data processing method provided by an embodiment of the present disclosure. Based on the above embodiment, this embodiment further optimizes the above scan data processing method. As shown in Figure 3, the method includes:
  • Step 201 Scan the target oral cavity to obtain multiple scan images containing auxiliary feature points.
  • Step 201 is the same as step 101; for details, please refer to the description of step 101, which will not be repeated here.
  • Step 202 Perform three-dimensional reconstruction based on each scanned image to obtain a sequence of reconstructed images. Calculate the sequence of reconstructed images to obtain three-dimensional coordinate points and point cloud data of auxiliary feature points.
  • an oral scanner is used to scan the target oral cavity, and scanned images (reconstruction maps and texture maps) for three-dimensional reconstruction are collected.
  • the reconstructed image sequence is used to calculate the three-dimensional data of the teeth, gums, and scanning rod, and the texture map sequence is used to reconstruct the three-dimensional coordinate points of the auxiliary feature points. Since the reconstruction map and the texture map are obtained at the same time, the three-dimensional data and the three-dimensional coordinate points of the auxiliary feature points can be considered to have a one-to-one correspondence in the same coordinate system.
  • Step 203 Determine whether the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the standard auxiliary feature points are spliced.
  • Step 204 If the three-dimensional coordinate points of the auxiliary feature point and the true value coordinate point of the auxiliary feature point are spliced, obtain the position transformation matrix between the three-dimensional coordinate point of the auxiliary feature point and the true value coordinate point of the auxiliary feature point.
  • the auxiliary feature points are formed on the main body of the scanning rod and its auxiliary feature body, and the coordinates of the auxiliary feature points are defined as the true value coordinate points of the auxiliary feature points.
  • the obtained three-dimensional coordinate points of the auxiliary feature points are matched against the true value coordinate points of the auxiliary feature points; if the matching succeeds, the replacement with the designed scan rod data corresponding to the current scan rod can be completed.
  • the first auxiliary feature point distances between the three-dimensional coordinate points of the auxiliary feature points within a preset distance range are obtained, and the true value coordinate points matching those three-dimensional coordinate points are determined; the second auxiliary feature point distances between the corresponding true value coordinate points within the same distance range are then obtained, and whether the three-dimensional coordinate points and the true value coordinate points can be spliced is determined based on the first auxiliary feature point distances and the second auxiliary feature point distances.
  • the distance range can be set according to the application scenario.
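One plausible reading of the distance test above is a pairwise-distance consistency check: a candidate correspondence is accepted only if the distances among the measured points (the first distances) match the distances among the corresponding true value points (the second distances) within a tolerance. The tolerance and points below are assumptions for illustration.

```python
import numpy as np

def distances_match(measured, truth, tol=0.05):
    """Accept a candidate correspondence if pairwise distances agree.

    measured, truth: (N, 3) arrays, row i of one hypothesised to correspond
    to row i of the other. tol: allowed absolute difference (assumed, in mm).
    """
    def pdist(pts):
        diff = pts[:, None, :] - pts[None, :, :]
        return np.linalg.norm(diff, axis=-1)
    return bool(np.all(np.abs(pdist(measured) - pdist(truth)) <= tol))

truth = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0]], float)
# A rigid motion preserves pairwise distances, so the check passes...
moved = truth + np.array([3.0, 1.0, 2.0])
print(distances_match(moved, truth))        # True
# ...while a wrong correspondence (or deformed data) fails it.
print(distances_match(truth * 1.2, truth))  # False
```

Because the check uses only internal distances, it works before any pose between the two coordinate systems is known, which is what makes it useful as a splicing precondition.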
  • Step 205 Transfer the standard scanning rod data to the auxiliary feature point coordinate system based on the position transformation matrix and then splice it with the auxiliary feature point three-dimensional coordinate points and point cloud data to obtain target scanning data.
  • the collected three-dimensional coordinate points of the auxiliary feature points are spliced with the true value coordinate points of the standard auxiliary feature points. If the splicing is successful, the position transformation matrix can be used to transfer the designed scan rod data to the auxiliary feature point coordinate system and replace the scan rod data acquired in real time.
  • the scanning data processing scheme scans the target oral cavity and obtains multiple scan images, each of which includes auxiliary feature points. Three-dimensional reconstruction is performed based on each scan image to obtain a sequence of reconstructed images, which is calculated to obtain the three-dimensional coordinate points and point cloud data of the auxiliary feature points. It is then determined whether the three-dimensional coordinate points of the auxiliary feature points can be spliced with the true value coordinate points of the standard auxiliary feature points; if so, the position transformation matrix between the three-dimensional coordinate points and the true value coordinate points of the auxiliary feature points is obtained.
  • the standard scanning rod data is transferred to the auxiliary feature point coordinate system and then spliced with the auxiliary feature point three-dimensional coordinate points and point cloud data to obtain the target scanning data.
  • the alignment accuracy of the designed scan rod data and the real-time scan rod data is improved, thereby improving the efficiency and accuracy of scanning data processing in oral scanning scenarios.
  • FIG. 4 is a schematic flowchart of another scan data processing method provided by an embodiment of the present disclosure. Based on the above embodiment, this embodiment further optimizes the above scan data processing method. As shown in Figure 4, the method includes:
  • Step 301 Scan the target oral cavity to obtain multiple scan images containing auxiliary feature points.
  • Step 301 is the same as step 101; for details, please refer to the description of step 101, which will not be repeated here.
  • Step 302 Perform three-dimensional reconstruction based on each scanned image to obtain a sequence of reconstructed images. Calculate the sequence of reconstructed images to obtain three-dimensional coordinate points and point cloud data of auxiliary feature points.
  • an oral scanner is used to scan the target oral cavity, and scanned images (reconstruction maps and texture maps) are collected for three-dimensional reconstruction.
  • the reconstruction map sequence is used to calculate the point cloud data including the teeth, gums, and scanning rods, and the texture map sequence is used to reconstruct the three-dimensional coordinate points of the auxiliary feature points. Since the reconstruction map and the texture map are obtained at the same time, the three-dimensional data and the three-dimensional coordinate points of the auxiliary feature points can be considered to have a one-to-one correspondence in the same coordinate system.
  • Step 303 Fit multiple planes based on the three-dimensional coordinate points of the auxiliary feature points, construct the target geometry based on the multiple planes, obtain at least three planes based on the standard scanning rod data, and obtain the normal vector of each plane and the intersection of at least three planes.
  • Step 304 Obtain a position transformation matrix based on the normal vector of each plane and the intersection points of at least three planes.
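Steps 303 and 304 can be sketched as follows: when three fitted planes are mutually orthogonal, their unit normals can serve as the columns of a rotation and their common intersection point as the origin of a frame; comparing the frame of the measured geometry with the corresponding frame of the standard scanning rod data yields the position transformation matrix. All plane parameters below are invented and assumed orthogonal.

```python
import numpy as np

def frame_from_planes(normals, ds):
    """Build a pose from three planes n_i . x = d_i.

    normals: (3, 3) array, each row a unit normal; ds: (3,) offsets.
    Returns (R, p): a rotation whose columns are the normals, and the
    common intersection point of the three planes.
    """
    N = np.asarray(normals, float)
    p = np.linalg.solve(N, np.asarray(ds, float))  # intersection of the planes
    R = N.T                                        # columns = plane normals
    return R, p

# Standard scan-rod frame: axis-aligned planes meeting at the origin.
R_std, p_std = frame_from_planes(np.eye(3), [0, 0, 0])

# Measured frame: the same rod rotated 90 degrees about Z and shifted.
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
meas_normals = (Rz @ np.eye(3)).T                  # rotated normals, one per row
meas_ds = meas_normals @ np.array([2.0, 3.0, 4.0]) # planes meet at (2, 3, 4)
R_meas, p_meas = frame_from_planes(meas_normals, meas_ds)

# Position transformation taking standard coordinates into the measured frame.
R = R_meas @ R_std.T
t = p_meas - R @ p_std
```

Here (R, t) recovers exactly the simulated rotation and shift, which is the position transformation matrix step 305 then applies to the standard scanning rod data.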
  • Step 305 Transfer the standard scanning rod data to the auxiliary feature point coordinate system based on the position transformation matrix, replace the target geometry, and splice it with the point cloud data to obtain target scan data.
  • the position transformation matrix refers to the position transformation relationship that converts the standard scanning rod data to the auxiliary feature point coordinate system.
  • the basic geometric features of the scanning rod are fitted through multiple auxiliary feature points.
  • for example, the scanning rod is a cuboid with auxiliary feature points distributed on its four side faces and top face; the coordinates of the auxiliary feature points on each of the five planes can be fitted to a plane, and the fitted planes together yield the target geometry.
  • the auxiliary feature points obtained during the scanning process can be fitted to a standard target geometry according to the rules.
  • the target geometry can be used directly as the position of the designed scan rod data, or the target geometry can be aligned with the designed scan rod data to determine the position of the designed scan rod data.
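The plane fitting referred to above (fitting a plane to the auxiliary feature points on each face of the cuboid) can be done by total least squares on the centered points; the sample points below are synthetic.

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane fit: returns (unit normal n, offset d)
    such that n . x ≈ d for the input points."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    n = Vt[-1]                       # direction of least variance
    return n, float(n @ centroid)

# Auxiliary feature points lying (nearly) on the plane z = 5 (illustrative).
rng = np.random.default_rng(0)
xy = rng.uniform(-10, 10, size=(20, 2))
pts = np.column_stack([xy, np.full(20, 5.0) + rng.normal(0, 1e-3, 20)])

n, d = fit_plane(pts)
if n[2] < 0:                         # orient the normal upward for comparison
    n, d = -n, -d
print(np.round(n, 3), round(d, 3))   # approximately [0, 0, 1] and 5.0
```

Fitting one such plane per face, then intersecting the fitted planes, yields the target geometry used in steps 303 to 305.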
  • the scanning data processing scheme scans the target oral cavity, obtains multiple scan images containing auxiliary feature points, performs three-dimensional reconstruction based on each scan image to obtain a reconstructed image sequence, and calculates the reconstructed image sequence to obtain the three-dimensional coordinate points and point cloud data of the auxiliary feature points. Multiple planes are fitted based on the three-dimensional coordinate points of the auxiliary feature points, and the target geometry is constructed from the planes. At least three planes are obtained from the standard scanning rod data, along with the normal vector of each plane and the intersection point of the at least three planes, from which the position transformation matrix is obtained. Based on the position transformation matrix, the standard scan rod data is transferred to the auxiliary feature point coordinate system, the target geometry is replaced, and the result is spliced with the point cloud data to obtain the target scan data.
  • the accuracy of the scanning rod positioning information is improved, thereby improving the efficiency and accuracy of scanning data processing in oral scanning scenarios.
  • FIG. 5 is a schematic structural diagram of a scan data processing device provided by an embodiment of the present disclosure.
  • the device can be implemented by software and/or hardware, and can generally be integrated in electronic equipment. As shown in Figure 5, the device includes:
  • the image acquisition module 401 is used to acquire multiple scanned images containing auxiliary feature points;
  • the image processing module 402 is used to process the plurality of scanned images to obtain three-dimensional coordinate points of auxiliary feature points;
  • the generation module 403 is used to generate target scan data based on the three-dimensional coordinate points and point cloud data of the auxiliary feature points.
  • Target scan data is generated based on the three-dimensional coordinate points of the auxiliary feature points, the point cloud data and the standard data.
  • the auxiliary feature points have corresponding true value coordinate points; the image processing module 402 is specifically used for:
  • the reconstructed image sequence is calculated to obtain three-dimensional coordinate points and point cloud data of auxiliary feature points.
  • the generation module 403 includes:
  • a judgment unit used to judge whether the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the standard auxiliary feature points are spliced
  • An acquisition unit configured to obtain the position between the three-dimensional coordinate point of the auxiliary feature point and the true value coordinate point of the auxiliary feature point if the three-dimensional coordinate point of the auxiliary feature point and the true value coordinate point of the auxiliary feature point are spliced. transformation matrix;
  • a splicing unit configured to transfer the standard scanning rod data to an auxiliary feature point coordinate system based on the position transformation matrix and then splice it with the three-dimensional coordinate points of the auxiliary feature points and the point cloud data to obtain the target scan data.
  • the judgment unit is specifically used for:
  • Determining whether the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the standard auxiliary feature points are spliced includes:
  • the generation module 403 is specifically used for:
  • the standard scan rod data is transferred to the auxiliary feature point coordinate system based on the position transformation matrix, and then the target geometry is replaced and spliced with the point cloud data to obtain the target scan data.
  • auxiliary feature points have corresponding true value coordinate points.
  • the generation module 403 is specifically used for:
  • any three-dimensional coordinate point of an auxiliary feature point is spliced with the corresponding true value coordinate point to obtain target auxiliary feature point three-dimensional coordinate points in the same coordinate system; the position transformation matrix between the target auxiliary feature point three-dimensional coordinate points and the corresponding true value coordinate points is obtained; and the standard scanning rod data is transferred to the auxiliary feature point coordinate system based on the position transformation matrix and then spliced with the three-dimensional coordinate points of the auxiliary feature points and the point cloud data to obtain the target scan data.
  • the scan data processing device provided by the embodiments of the present disclosure can execute the scan data processing method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method.
  • An embodiment of the present disclosure also provides a computer program product, which includes a computer program/instructions; when the computer program/instructions are executed by a processor, the scan data processing method provided by any embodiment of the present disclosure is implemented.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • Referring to FIG. 6, a schematic structural diagram of an electronic device 500 suitable for implementing an embodiment of the present disclosure is shown.
  • the electronic device 500 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 6 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 500 may include a processing device (e.g., a central processing unit, a graphics processor) 501, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503.
  • the RAM 503 also stores various programs and data required for the operation of the electronic device 500.
  • the processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504; an input/output (I/O) interface 505 is also connected to the bus 504.
  • Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 507 including, for example, a liquid crystal display (LCD), speakers, and vibrators; storage devices 508 including, for example, a magnetic tape and a hard disk; and a communication device 509.
  • the communication device 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data.
  • Although FIG. 6 illustrates the electronic device 500 with various means, it should be understood that implementing or providing all of the illustrated means is not required; more or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 509, or installed from the storage device 508, or installed from the ROM 502.
  • when the computer program is executed by the processing device 501, the above-mentioned functions defined in the scan data processing method of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to: wire, optical cable, RF (radio frequency), etc., or any suitable combination of the above.
  • In some embodiments, the client and server can communicate using any currently known or future-developed network protocol, such as HTTP (Hyper Text Transfer Protocol), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, the electronic device: acquires multiple scanned images containing auxiliary feature points, processes the multiple scanned images to obtain the three-dimensional coordinate points of the auxiliary feature points, and generates target scan data based on the three-dimensional coordinate points of the auxiliary feature points and the point cloud data.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • In the case of a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure.
  • Each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure can be implemented in software or hardware, where the name of a unit does not, in some cases, constitute a limitation on the unit itself.
  • The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that can be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device, including:
  • a processor;
  • a memory for storing instructions executable by the processor;
  • the processor, configured to read the executable instructions from the memory and execute the instructions to implement any of the scan data processing methods provided by the present disclosure.
  • According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium storing a computer program, the computer program being used to perform any of the scan data processing methods provided by the present disclosure.
  • In addition, an embodiment of the present disclosure also provides an apparatus, which includes: a processor; and a memory used to store instructions executable by the processor;
  • wherein, the processor is configured to:
  • acquire multiple scanned images containing auxiliary feature points; process the multiple scanned images to obtain three-dimensional coordinate points of the auxiliary feature points and point cloud data; and generate target scan data based on the three-dimensional coordinate points of the auxiliary feature points and the point cloud data.
  • The scan data processing method provided by the present disclosure can effectively compute scan data, improves the alignment accuracy between the designed scanning rod data and the real-time scanning rod data, properly balances scanning efficiency and accuracy in intraoral scanning scenarios, and optimizes the scanning process; it therefore has strong industrial applicability.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present disclosure relate to a scan data processing method and apparatus, an electronic device, and a medium. The method includes: acquiring multiple scanned images containing auxiliary feature points; processing the multiple scanned images to obtain three-dimensional coordinate points of the auxiliary feature points; and generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points and point cloud data. The above technical solution improves the accuracy of scanning rod positioning information, thereby improving the efficiency and accuracy of scan data processing in intraoral scanning scenarios.

Description

Scan data processing method and apparatus, electronic device, and medium
The present disclosure claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on May 2, 2022, with application number 2022104770842 and entitled "Scan data processing method and apparatus, electronic device, and medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of intraoral scanning, and in particular to a scan data processing method and apparatus, an electronic device, and a medium.
Background
Generally, in scanning scenarios, the positions of relevant targets are determined by scanning.
In the related art, due to the limited scanning range, a multi-data splicing scheme is usually used when scanning data; because of accumulated errors, the overall accuracy of the resulting model is not high.
Summary
(1) Technical problem to be solved
The technical problem to be solved by the present disclosure is the low overall accuracy of models in existing scanning scenarios.
(2) Technical solution
To solve the above technical problem, an embodiment of the present disclosure provides a scan data processing method, including:
acquiring multiple scanned images containing auxiliary feature points;
processing the multiple scanned images to obtain three-dimensional coordinate points of the auxiliary feature points;
generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points and point cloud data.
In a second aspect, a scan data processing apparatus is further provided, including:
an image acquisition module, configured to acquire multiple scanned images containing auxiliary feature points;
an image processing module, configured to process the multiple scanned images to obtain three-dimensional coordinate points of the auxiliary feature points;
a generation module, configured to generate target scan data based on the three-dimensional coordinate points of the auxiliary feature points and point cloud data.
In a third aspect, an electronic device is further provided, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and is configured to be executed by the processor to implement the above scan data processing method.
In a fourth aspect, a computer-readable storage medium is further provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the above scan data processing method are implemented.
(3) Beneficial effects
Compared with the prior art, the above technical solutions provided by the embodiments of the present disclosure have the following advantages:
The scan data processing solution provided by the embodiments of the present disclosure acquires multiple scanned images containing auxiliary feature points, processes the multiple scanned images to obtain three-dimensional coordinate points of the auxiliary feature points, and generates target scan data based on the three-dimensional coordinate points of the auxiliary feature points and point cloud data. The above technical solution improves the accuracy of scanning rod positioning information, thereby improving the efficiency and accuracy of scan data processing in intraoral scanning scenarios.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
To describe the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is an application scenario diagram of scan data processing provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a scan data processing method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of another scan data processing method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of yet another scan data processing method provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a scan data processing apparatus provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below. Obviously, the described embodiments are only some rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
In practical applications, in scenarios of restoring missing teeth in the oral cavity, due to the limited scanning range of intraoral scanning, a multi-data splicing scheme is usually used when scanning intraoral data; because of accumulated errors, the overall accuracy of the resulting model is not high.
To address the above problem, the present disclosure proposes a scan data processing method that can be applied in the application environment shown in FIG. 1. FIG. 1 is an application scenario diagram of scan data processing provided by an embodiment of the present disclosure. The application environment includes: multiple intraoral scanning rods installed in a target oral cavity. An intraoral scanning rod includes a scanning rod component 11 and an auxiliary component 12 connected to the scanning rod component 11; auxiliary feature points are provided on the scanning rod component 11 and/or the auxiliary component 12. The scanning rod component 11 is adapted to an implant installed in the target oral cavity, and by fitting the scanning rod component 11 to the implant, the intraoral scanning rod is installed in the target oral cavity.
The auxiliary components 12 of any two of the multiple intraoral scanning rods are adapted to each other, so that when any two intraoral scanning rods 10 are installed adjacently in the oral cavity, the auxiliary feature points on the two auxiliary components are continuously distributed. The true value coordinate points of the auxiliary feature points can be obtained by, for example, a DSLR photogrammetry system, a coordinate measuring machine, or a high-precision industrial 3D scanner; theoretically, the three-dimensional coordinate points corresponding to the scanned images correspond one-to-one with the pre-acquired true value coordinate points of the auxiliary feature points.
As an example scenario, multiple intraoral scanning rods are installed in the target oral cavity. An intraoral scanning rod includes a scanning rod component for connecting to an implant and an auxiliary component connected to the scanning rod. The intraoral scanning rod is provided with target features; the target features are continuously distributed on the scanning rod and/or the auxiliary component, and are distributed on more than one face of the scanning rod and/or the auxiliary component.
Specifically, the intraoral scanner scans the target oral cavity, acquires multiple frames of images, and transmits them to a data processing module for data processing; the data processing module performs the following method:
acquiring multiple frames of images, and obtaining initial three-dimensional data of the target oral cavity based on the multiple frames of images, where the initial three-dimensional data includes an initial point set of the target oral cavity and three-dimensional coordinate measurements of the target features in the same coordinate system;
acquiring a preset model of the intraoral scanning rod, where the preset model includes, in the same coordinate system, the true three-dimensional coordinate values of the target features and the real point set of the intraoral scanning rod (the true three-dimensional coordinate values of each point);
splicing the initial point set of the target oral cavity with the real point set of the intraoral scanning rod based on the correspondence between the measured and true three-dimensional coordinate values of the target features;
determining the positioning information of the intraoral scanning rod based on the spliced real point set of the intraoral scanning rod; the positioning information of the intraoral scanning rod is the positioning information of the implant, and the dental prosthesis is designed based on this positioning information, so that the designed and manufactured prosthesis can be fitted and installed on the implant.
Specifically, the target oral cavity is scanned to acquire multiple scanned images containing auxiliary feature points; the multiple scanned images are processed to obtain the three-dimensional coordinate points of the auxiliary feature points; and target scan data is generated based on the three-dimensional coordinate points of the auxiliary feature points and point cloud data. The above technical solution improves the accuracy of scanning rod positioning information, thereby improving the efficiency and accuracy of scan data processing in intraoral scanning scenarios.
Specifically, FIG. 2 is a schematic flowchart of a scan data processing method provided by an embodiment of the present disclosure. The method can be executed by a scan data processing apparatus, which can be implemented in software and/or hardware and can generally be integrated in an electronic device. As shown in FIG. 2, the method includes:
Step 101: acquire multiple scanned images containing auxiliary feature points.
The target oral cavity refers to an oral cavity requiring dental implantation; intraoral scanning is needed to locate the coordinates of specific points in the oral cavity, and the scanning rods are connected via auxiliary feature bodies before intraoral scanning is performed.
In the embodiments of the present disclosure, the scanning rod includes an auxiliary feature body, and the auxiliary feature body may take many shapes (e.g., spherical, square, cuboid, or conical features, or combinations of these features).
In the embodiments of the present disclosure, the scanning rod is a feature object containing auxiliary feature points, where an auxiliary feature point can uniquely identify a feature; that is, auxiliary feature points are provided on the scanning rod, and each auxiliary feature point can uniquely identify the corresponding positional feature on the scanning rod. For example, target feature a and target feature b are respectively provided at position 1 and position 2 on the scanning rod: target feature a can uniquely identify the positional feature of position 1 on the scanning rod, and target feature b can uniquely identify the positional feature of position 2 on the scanning rod.
It can be understood that anything on the scanning rod that uniquely identifies a corresponding positional feature on the scanning rod, such as different shapes, colors, or QR codes, can serve as an auxiliary feature point.
For example, raised or recessed spherical or square shapes may be provided on the scanning rod and its auxiliary feature body as auxiliary feature points; different colors may be printed on the scanning rod and its auxiliary feature body; or different QR code patterns, circles and squares of different colors, and the like may be printed on the scanning rod and its auxiliary feature body, selected and configured according to application needs.
In the embodiments of the present disclosure, after the scanning rod data and the auxiliary feature point data are acquired, the designed scanning rod data needs to be aligned with the scanning rod data acquired in real time through a splicing algorithm. During intraoral scanning, the accuracy of single or multiple scanning rod data may deteriorate due to the non-rigid characteristics of intraoral data. The approach of the embodiments of the present disclosure can improve the accuracy of aligning the designed scanning rod data to the real-time scanning rod data, thereby improving the overall accuracy.
The target oral cavity can be scanned with a handheld intraoral scanner (a monocular or binocular camera); that is, multiple scanned images are acquired by taking pictures, for example dozens of scanned images per second, and acquisition can be performed cyclically.
Auxiliary feature points refer to points provided on the scanning rod body and/or the auxiliary feature body, configured according to the application scenario.
In the embodiments of the present disclosure, there are many ways to scan the target oral cavity including the scanning rods and acquire multiple scanned images. In some implementations, the camera is controlled to rotate in a certain direction while scanning the target oral cavity at a certain frequency, obtaining multiple scanned images.
Specifically, after the scanning rods are connected in the target oral cavity, the target oral cavity including the scanning rods is scanned to acquire multiple scanned images.
Step 102: process the multiple scanned images to obtain three-dimensional coordinate points of the auxiliary feature points.
The three-dimensional coordinate points of the auxiliary feature points refer to the three-dimensional coordinate points corresponding to the auxiliary feature points in the target oral cavity.
In the embodiments of the present disclosure, there are many ways to process the multiple scanned images to obtain the three-dimensional coordinate points of the auxiliary feature points. In some implementations, three-dimensional reconstruction is performed based on each scanned image to obtain a reconstructed image sequence, and the reconstructed image sequence is processed to obtain the three-dimensional coordinate points of the auxiliary feature points and the point cloud data.
In other implementations, the two-dimensional coordinate points of the auxiliary feature points in each scanned image are acquired; coordinate system conversion is performed on the two-dimensional coordinate points to obtain the three-dimensional coordinate points of the auxiliary feature points for each scanned image; and the three-dimensional coordinate points of the auxiliary feature points of the individual scanned images are spliced to obtain all the three-dimensional coordinate points of the auxiliary feature points and the point cloud data. The above two approaches are merely examples; the embodiments of the present disclosure do not limit the specific way of processing the multiple scanned images to obtain the three-dimensional coordinate points of the auxiliary feature points.
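The per-image 2D-to-3D conversion described above can be illustrated with a standard pinhole back-projection. This is only an assumed sketch (the patent does not specify a camera model); the intrinsics `K`, the camera pose `cam_pose`, and the known per-pixel depth are hypothetical inputs introduced here for illustration:

```python
import numpy as np

def backproject(uv, depth, K, cam_pose):
    """Lift a 2D feature-point detection to 3D: a pixel (u, v) with a
    known depth is back-projected through the intrinsics K into the
    camera frame, then moved into a common (world) frame with the 4x4
    camera pose, so per-image points can be spliced into one
    coordinate system."""
    u, v = uv
    # invert the pinhole projection for the given depth
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    p_cam = np.array([x, y, depth, 1.0])   # homogeneous camera-frame point
    return (cam_pose @ p_cam)[:3]
```

With the identity pose, a pixel 100 columns right of the principal point at depth 2 and focal length 100 lands at x = 2 in the camera frame.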
In the embodiments of the present disclosure, after the multiple scanned images are acquired, the multiple scanned images can be processed to obtain the three-dimensional coordinate points of the auxiliary feature points.
Step 103: generate target scan data based on the three-dimensional coordinate points of the auxiliary feature points and the point cloud data.
The three-dimensional coordinate points of the auxiliary feature points may be three-dimensional coordinate points that have undergone optimization processing and can represent the auxiliary feature points more accurately.
In the embodiments of the present disclosure, generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points and the point cloud data can be understood as splicing the three-dimensional coordinate points of the auxiliary feature points with point cloud data of the teeth, gums, and so on, to obtain the target scan data.
In the embodiments of the present disclosure, generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points and the point cloud data includes: directly splicing the three-dimensional coordinate points of the auxiliary feature points with the point cloud data to obtain the target scan data.
In the embodiments of the present disclosure, generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points and the point cloud data includes: generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points, the point cloud data, and standard data.
Standard data refers to scan data that can be spliced with the three-dimensional coordinate points of the auxiliary feature points. It may be scan data designed in advance by computer-aided design, or standard scan data acquired by, for example, a DSLR photogrammetry system, a coordinate measuring machine, or a high-precision industrial 3D scanner. It can be understood that the standard scan data may be acquired in advance and stored in a database so that it can be retrieved directly during processing to improve processing efficiency, or it may be measured in real time as the scenario requires; in intraoral scanning scenarios, the standard data may be standard scanning rod data.
Target scan data refers to the result of transforming the computer-aided-designed scanning rod data into the same coordinate system as the three-dimensional coordinate points of the auxiliary feature points and replacing the scanning rod data scanned in real time with it.
In the embodiments of the present disclosure, there are many ways to generate target scan data based on the three-dimensional coordinate points of the auxiliary feature points, the point cloud data, and standard data. In some implementations, the auxiliary feature points have corresponding true value coordinate points and the standard data is standard scanning rod data: it is judged whether the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the standard auxiliary feature points are to be spliced; if they are spliced, the position transformation matrix between the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the auxiliary feature points is obtained; and the standard scanning rod data is transferred to the auxiliary feature point coordinate system based on the position transformation matrix and then spliced with the three-dimensional coordinate points of the auxiliary feature points and the point cloud data to obtain the target scan data.
In other implementations, the standard data is standard scanning rod data: multiple planes are fitted based on the three-dimensional coordinate points of the auxiliary feature points, and a target geometry is constructed based on the multiple planes; at least three planes are obtained based on the standard scanning rod data, and the normal vector of each plane and the intersection point of the at least three planes are obtained; the position transformation matrix is obtained based on the normal vector of each plane and the intersection point of the at least three planes; and the standard scanning rod data is transferred to the auxiliary feature point coordinate system based on the position transformation matrix, replaces the target geometry, and is spliced with the point cloud data to obtain the target scan data.
In still other implementations, the auxiliary feature points have corresponding true value coordinate points and the standard data is standard scanning rod data: any three-dimensional coordinate point of an auxiliary feature point is spliced with the corresponding true value coordinate point to obtain target auxiliary feature point three-dimensional coordinate points in the same coordinate system; the position transformation matrix between the target auxiliary feature point three-dimensional coordinate points and the corresponding true value coordinate points is obtained; and the standard scanning rod data is transferred to the auxiliary feature point coordinate system based on the position transformation matrix and then spliced with the three-dimensional coordinate points of the auxiliary feature points and the point cloud data to obtain the target scan data.
The above three approaches are merely examples of generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points, the point cloud data, and standard data; the embodiments of the present disclosure do not limit the specific way of doing so.
Specifically, after the three-dimensional coordinate points of the auxiliary feature points are obtained, target scan data can be generated based on the three-dimensional coordinate points of the auxiliary feature points, the point cloud data, and the standard data.
The scan data processing solution provided by the embodiments of the present disclosure acquires multiple scanned images, each containing auxiliary feature points; processes the multiple scanned images to obtain the three-dimensional coordinate points of the auxiliary feature points; and generates target scan data based on the three-dimensional coordinate points of the auxiliary feature points and point cloud data. The above technical solution improves the accuracy of aligning the designed scanning rod data with the real-time scanning rod data, thereby improving the efficiency and accuracy of scan data processing in intraoral scanning scenarios.
Based on the description of the above embodiments, scan data processing in different scenarios is described in detail below with reference to FIG. 3 and FIG. 4.
Specifically, FIG. 3 is a schematic flowchart of another scan data processing method provided by an embodiment of the present disclosure; this embodiment further optimizes the above scan data processing method on the basis of the above embodiments. As shown in FIG. 3, the method includes:
Step 201: scan the target oral cavity to acquire multiple scanned images containing auxiliary feature points.
Step 201 is the same as step 101; see the description of step 101 for details, which are not repeated here.
Step 202: perform three-dimensional reconstruction based on each scanned image to obtain a reconstructed image sequence, and process the reconstructed image sequence to obtain the three-dimensional coordinate points of the auxiliary feature points and the point cloud data.
Specifically, the target oral cavity is scanned with an intraoral scanner to collect scanned images for three-dimensional reconstruction (reconstruction images and texture images). The reconstruction image sequence is used to compute the three-dimensional data containing the teeth, gums, and scanning rods, and the texture image sequence is used to reconstruct the three-dimensional coordinate points of the auxiliary feature points. Since the reconstruction images and texture images are acquired at the same moment, the three-dimensional data and the three-dimensional coordinate points of the auxiliary feature points can be considered to correspond one-to-one in the same coordinate system.
Step 203: judge whether the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the standard auxiliary feature points are to be spliced.
Step 204: if the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the auxiliary feature points are spliced, obtain the position transformation matrix between the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the auxiliary feature points.
Specifically, the auxiliary feature points are fabricated on the scanning rod body and its auxiliary feature body, and the coordinates associated with the auxiliary feature points are defined as the true value coordinate points of the auxiliary feature points. During scanning, the acquired three-dimensional coordinate points of the auxiliary feature points are matched with the true value coordinate points of the auxiliary feature points; if a successful match is found, the replacement with the designed scanning rod data corresponding to the current scanning rod can be completed.
In the embodiments of the present disclosure, the first auxiliary feature point distances between a three-dimensional coordinate point of an auxiliary feature point and the three-dimensional coordinate points of auxiliary feature points within a standard distance range are obtained; the true value coordinate point matching that three-dimensional coordinate point is obtained, and the second auxiliary feature point distances between that true value coordinate point and the true value coordinate points within the distance range are obtained; and whether the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points are to be spliced is judged based on the first auxiliary feature point distances and the second auxiliary feature point distances. The distance range can be set according to the application scenario.
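One plausible reading of the first/second auxiliary-feature-point distance test is to compare, for each measured point, the sorted distances to its neighbours within a radius against the corresponding distance set of each candidate true value point. The sketch below is an assumption, not the patented algorithm; the function name, the `radius`, and the `tol` threshold are invented for illustration:

```python
import numpy as np

def match_by_distance_signature(measured, truth, radius=10.0, tol=0.2):
    """For each measured auxiliary feature point, compare its sorted
    neighbour-distance set (within `radius`) against that of each
    candidate true value point; accept the candidate whose distance
    signature agrees within `tol`. Returns {measured index: truth index}."""
    def signature(points, i):
        d = np.linalg.norm(points - points[i], axis=1)
        return np.sort(d[(d > 0) & (d <= radius)])

    matches = {}
    for i in range(len(measured)):
        sig_m = signature(measured, i)
        best, best_err = None, np.inf
        for j in range(len(truth)):
            sig_t = signature(truth, j)
            if len(sig_t) != len(sig_m) or len(sig_m) == 0:
                continue  # different neighbour counts cannot match
            err = np.max(np.abs(sig_m - sig_t))
            if err < tol and err < best_err:
                best, best_err = j, err
        if best is not None:
            matches[i] = best
    return matches
```

Because inter-point distances are invariant under rigid motion, a measured constellation matches its true value counterpart even before the two point sets are expressed in a common coordinate system.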
Step 205: transfer the standard scanning rod data to the auxiliary feature point coordinate system based on the position transformation matrix, and then splice it with the three-dimensional coordinate points of the auxiliary feature points and the point cloud data to obtain the target scan data.
Specifically, the collected three-dimensional coordinate points of the auxiliary feature points are spliced with the true value coordinate points of the standard auxiliary feature points; if the splicing succeeds, the position transformation matrix can be used to transfer the designed scanning rod data into the auxiliary feature point coordinate system, replacing the scanning rod data acquired in real time.
The scan data processing solution provided by this embodiment of the present disclosure scans the target oral cavity to acquire multiple scanned images, each containing auxiliary feature points; performs three-dimensional reconstruction based on each scanned image to obtain a reconstructed image sequence; processes the reconstructed image sequence to obtain the three-dimensional coordinate points of the auxiliary feature points and the point cloud data; judges whether the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the standard auxiliary feature points are to be spliced; if so, obtains the position transformation matrix between the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the auxiliary feature points; and transfers the standard scanning rod data to the auxiliary feature point coordinate system based on the position transformation matrix and then splices it with the three-dimensional coordinate points of the auxiliary feature points and the point cloud data to obtain the target scan data. This improves the accuracy of aligning the designed scanning rod data with the real-time scanning rod data, thereby improving the efficiency and accuracy of scan data processing in intraoral scanning scenarios.
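The patent does not name an algorithm for obtaining the position transformation matrix between matched measured points and true value points. A common choice for this step is a least-squares rigid fit via SVD (the Kabsch algorithm); the following is a hedged sketch under that assumption, with the function names my own:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    the matched src points onto the dst points via SVD (Kabsch).
    Returns a 4x4 homogeneous position transformation matrix."""
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply_transform(T, points):
    """Transfer a point set (e.g. standard scanning rod data) into the
    target coordinate system with the homogeneous matrix T."""
    return points @ T[:3, :3].T + T[:3, 3]
```

Once `T` is known, `apply_transform` moves the entire designed scanning rod point set into the auxiliary feature point coordinate system, where it can replace the rod data scanned in real time.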
Specifically, FIG. 4 is a schematic flowchart of yet another scan data processing method provided by an embodiment of the present disclosure; this embodiment further optimizes the above scan data processing method on the basis of the above embodiments. As shown in FIG. 4, the method includes:
Step 301: scan the target oral cavity to acquire multiple scanned images containing auxiliary feature points.
Step 301 is the same as step 101; see the description of step 101 for details, which are not repeated here.
Step 302: perform three-dimensional reconstruction based on each scanned image to obtain a reconstructed image sequence, and process the reconstructed image sequence to obtain the three-dimensional coordinate points of the auxiliary feature points and the point cloud data.
Specifically, the target oral cavity is scanned with an intraoral scanner to collect scanned images for three-dimensional reconstruction (reconstruction images and texture images). The reconstruction image sequence is used to compute the point cloud data containing the teeth, gums, and scanning rods, and the texture image sequence is used to reconstruct the three-dimensional coordinate points of the auxiliary feature points. Since the reconstruction images and texture images are acquired at the same moment, the three-dimensional data and the three-dimensional coordinate points of the auxiliary feature points can be considered to correspond one-to-one in the same coordinate system.
Step 303: fit multiple planes based on the three-dimensional coordinate points of the auxiliary feature points, construct a target geometry based on the multiple planes, obtain at least three planes based on the standard scanning rod data, and obtain the normal vector of each plane and the intersection point of the at least three planes.
Step 304: obtain the position transformation matrix based on the normal vector of each plane and the intersection point of the at least three planes.
Step 305: transfer the standard scanning rod data to the auxiliary feature point coordinate system based on the position transformation matrix, replace the target geometry, and splice with the point cloud data to obtain the target scan data.
The position transformation matrix refers to the positional transformation relationship that transforms the standard scanning rod data into the auxiliary feature point coordinate system.
Specifically, the basic geometric features of the scanning rod are fitted from multiple auxiliary feature points. For example, if the scanning rod is a cuboid with auxiliary feature points distributed on its four side faces and one top face, i.e., on five planes, the auxiliary feature point coordinates on each plane can be used to fit the respective planes of the cuboid, from which the target geometry can be obtained.
Therefore, the auxiliary feature points acquired during scanning can be fitted, according to the rules, into a standard target geometry; the target geometry can either directly serve as the position of the designed scanning rod data, or it can be aligned once with the designed scanning rod data to determine the position of the designed scanning rod data.
The scan data processing solution provided by this embodiment of the present disclosure scans the target oral cavity to acquire multiple scanned images containing auxiliary feature points; performs three-dimensional reconstruction based on each scanned image to obtain a reconstructed image sequence; processes the reconstructed image sequence to obtain the three-dimensional coordinate points of the auxiliary feature points and the point cloud data; fits multiple planes based on the three-dimensional coordinate points of the auxiliary feature points and constructs a target geometry based on the multiple planes; obtains at least three planes based on the standard scanning rod data, and obtains the normal vector of each plane and the intersection point of the at least three planes; obtains the position transformation matrix based on the normal vectors and the intersection point; and transfers the standard scanning rod data to the auxiliary feature point coordinate system based on the position transformation matrix, replaces the target geometry, and splices with the point cloud data to obtain the target scan data. This improves the accuracy of scanning rod positioning information, thereby improving the efficiency and accuracy of scan data processing in intraoral scanning scenarios.
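The step of deriving a position transformation matrix from three plane normals and their intersection point can be sketched as follows. This is one plausible construction, not the patented formula: the planes are represented as `(normal, offset)` pairs with equation n·x = d, and the (roughly orthogonal) normals are re-orthogonalised to form the rotation axes while the common intersection supplies the translation; the function names and representation are my own:

```python
import numpy as np

def plane_intersection(planes):
    """Intersection point of three planes, each given as (n, d) with
    plane equation n . x = d."""
    N = np.array([np.asarray(n, float) for n, _ in planes])
    d = np.array([di for _, di in planes], float)
    return np.linalg.solve(N, d)

def pose_from_planes(planes):
    """Build a 4x4 position transformation matrix from three fitted
    planes: the re-orthogonalised normals give the rotation columns and
    the common intersection point gives the translation."""
    n0 = np.asarray(planes[0][0], float)
    n1 = np.asarray(planes[1][0], float)
    x = n0 / np.linalg.norm(n0)
    y = n1 - np.dot(n1, x) * x          # Gram-Schmidt against x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)                   # right-handed third axis
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x, y, z
    T[:3, 3] = plane_intersection(planes)
    return T
```

Computing such a pose for the planes fitted from the measured auxiliary feature points and for the planes of the standard scanning rod data, and composing one with the inverse of the other, yields the transform that carries the standard rod into the auxiliary feature point coordinate system.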
FIG. 5 is a schematic structural diagram of a scan data processing apparatus provided by an embodiment of the present disclosure; the apparatus can be implemented in software and/or hardware and can generally be integrated in an electronic device. As shown in FIG. 5, the apparatus includes:
an image acquisition module 401, configured to acquire multiple scanned images containing auxiliary feature points;
an image processing module 402, configured to process the multiple scanned images to obtain three-dimensional coordinate points of the auxiliary feature points;
a generation module 403, configured to generate target scan data based on the three-dimensional coordinate points of the auxiliary feature points and point cloud data.
Optionally, the generation module 403 is specifically configured to:
generate target scan data based on the three-dimensional coordinate points of the auxiliary feature points, the point cloud data, and standard data.
Optionally, the auxiliary feature points have corresponding true value coordinate points; the image processing module 402 is specifically configured to:
perform three-dimensional reconstruction based on each scanned image to obtain a reconstructed image sequence;
process the reconstructed image sequence to obtain the three-dimensional coordinate points of the auxiliary feature points and the point cloud data.
Optionally, the generation module 403 includes:
a judgment unit, configured to judge whether the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the standard auxiliary feature points are to be spliced;
an acquisition unit, configured to obtain, if the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the auxiliary feature points are spliced, the position transformation matrix between the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the auxiliary feature points;
a splicing unit, configured to transfer the standard scanning rod data to the auxiliary feature point coordinate system based on the position transformation matrix and then splice it with the three-dimensional coordinate points of the auxiliary feature points and the point cloud data to obtain the target scan data.
Optionally, the judgment unit is specifically configured to:
judge whether the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the standard auxiliary feature points are to be spliced, including:
obtaining the first auxiliary feature point distances between a three-dimensional coordinate point of an auxiliary feature point and the three-dimensional coordinate points of auxiliary feature points within a standard distance range;
obtaining the true value coordinate point matching that three-dimensional coordinate point, and obtaining the second auxiliary feature point distances between that true value coordinate point and the true value coordinate points within the distance range;
judging, based on the first auxiliary feature point distances and the second auxiliary feature point distances, whether the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the auxiliary feature points are to be spliced.
Optionally, the generation module 403 is specifically configured to:
fit multiple planes based on the three-dimensional coordinate points of the auxiliary feature points, and construct a target geometry based on the multiple planes;
obtain at least three planes based on the standard scanning rod data, and obtain the normal vector of each plane and the intersection point of the at least three planes;
obtain the position transformation matrix based on the normal vector of each plane and the intersection point of the at least three planes;
transfer the standard scanning rod data to the auxiliary feature point coordinate system based on the position transformation matrix, replace the target geometry, and splice with the point cloud data to obtain the target scan data.
Optionally, the auxiliary feature points have corresponding true value coordinate points, and the generation module 403 is specifically configured to:
splice any three-dimensional coordinate point of an auxiliary feature point with the corresponding true value coordinate point to obtain target auxiliary feature point three-dimensional coordinate points in the same coordinate system; obtain the position transformation matrix between the target auxiliary feature point three-dimensional coordinate points and the corresponding true value coordinate points; and transfer the standard scanning rod data to the auxiliary feature point coordinate system based on the position transformation matrix and then splice it with the three-dimensional coordinate points of the auxiliary feature points and the point cloud data to obtain the target scan data.
The scan data processing apparatus provided by the embodiments of the present disclosure can execute the scan data processing method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the executed method.
An embodiment of the present disclosure also provides a computer program product, which includes a computer program/instructions; when the computer program/instructions are executed by a processor, the scan data processing method provided by any embodiment of the present disclosure is implemented.
FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. Referring now to FIG. 6, a schematic structural diagram of an electronic device 500 suitable for implementing an embodiment of the present disclosure is shown. The electronic device 500 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 500 may include a processing device (e.g., a central processing unit, a graphics processor) 501, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 507 including, for example, a liquid crystal display (LCD), speakers, and vibrators; storage devices 508 including, for example, a magnetic tape and a hard disk; and a communication device 509. The communication device 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 shows the electronic device 500 with various devices, it should be understood that implementing or providing all of the illustrated devices is not required; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowcharts may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 509, or installed from the storage device 508, or installed from the ROM 502. When the computer program is executed by the processing device 501, the above-mentioned functions defined in the scan data processing method of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to an electrical wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
In some embodiments, the client and server can communicate using any currently known or future-developed network protocol, such as HTTP (Hyper Text Transfer Protocol), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or it may exist independently without being assembled into the electronic device.
The above computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, the electronic device: acquires multiple scanned images containing auxiliary feature points, processes the multiple scanned images to obtain three-dimensional coordinate points of the auxiliary feature points, and generates target scan data based on the three-dimensional coordinate points of the auxiliary feature points and point cloud data.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure can be implemented in software or hardware, where the name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that can be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device, including:
a processor;
a memory for storing instructions executable by the processor;
the processor, configured to read the executable instructions from the memory and execute the instructions to implement any of the scan data processing methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium storing a computer program, the computer program being used to perform any of the scan data processing methods provided by the present disclosure.
In addition, an embodiment of the present disclosure also provides an apparatus, which includes:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
acquire multiple scanned images containing auxiliary feature points;
process the multiple scanned images to obtain three-dimensional coordinate points of the auxiliary feature points and point cloud data;
generate target scan data based on the three-dimensional coordinate points of the auxiliary feature points and the point cloud data.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes the element.
The above are only specific embodiments of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure will not be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Industrial Applicability
The scan data processing method provided by the present disclosure can effectively compute scan data, improves the alignment accuracy between the designed scanning rod data and the real-time scanning rod data, properly balances scanning efficiency and accuracy in intraoral scanning scenarios, and optimizes the scanning process; it therefore has strong industrial applicability.

Claims (10)

  1. A scan data processing method, characterized by comprising:
    acquiring multiple scanned images containing auxiliary feature points;
    processing the multiple scanned images to obtain three-dimensional coordinate points of the auxiliary feature points and point cloud data;
    generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points and the point cloud data.
  2. The scan data processing method according to claim 1, characterized in that generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points and the point cloud data comprises:
    generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points, the point cloud data, and standard data.
  3. The scan data processing method according to claim 1, characterized in that processing the multiple scanned images to obtain three-dimensional coordinate points of the auxiliary feature points and point cloud data comprises:
    performing three-dimensional reconstruction based on each of the scanned images to obtain a reconstructed image sequence;
    processing the reconstructed image sequence to obtain the three-dimensional coordinate points of the auxiliary feature points and the point cloud data.
  4. The scan data processing method according to claim 2, characterized in that the auxiliary feature points have corresponding true value coordinate points, and the standard data is standard scanning rod data; generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points, the point cloud data, and the standard data comprises:
    judging whether the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the standard auxiliary feature points are to be spliced;
    if the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the auxiliary feature points are spliced, obtaining a position transformation matrix between the three-dimensional coordinate points of the auxiliary feature points and the true value coordinate points of the auxiliary feature points;
    transferring the standard scanning rod data to the auxiliary feature point coordinate system based on the position transformation matrix, and then splicing it with the three-dimensional coordinate points of the auxiliary feature points and the point cloud data to obtain the target scan data.
  5. The scan data processing method according to claim 2, characterized in that the standard data is standard scanning rod data; generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points, the point cloud data, and the standard data comprises:
    fitting multiple planes based on the three-dimensional coordinate points of the auxiliary feature points, and constructing a target geometry based on the multiple planes;
    obtaining at least three planes based on the standard scanning rod data, and obtaining the normal vector of each plane and the intersection point of the at least three planes;
    obtaining a position transformation matrix based on the normal vector of each plane and the intersection point of the at least three planes;
    transferring the standard scanning rod data to the auxiliary feature point coordinate system based on the position transformation matrix, replacing the target geometry, and splicing with the point cloud data to obtain the target scan data.
  6. The scan data processing method according to claim 2, characterized in that the auxiliary feature points have corresponding true value coordinate points, and the standard data is standard scanning rod data; generating target scan data based on the three-dimensional coordinate points of the auxiliary feature points, the point cloud data, and the standard data comprises:
    splicing any of the three-dimensional coordinate points of the auxiliary feature points with the corresponding true value coordinate points to obtain target auxiliary feature point three-dimensional coordinate points in the same coordinate system;
    obtaining a position transformation matrix between the target auxiliary feature point three-dimensional coordinate points and the corresponding true value coordinate points;
    transferring the standard scanning rod data to the auxiliary feature point coordinate system based on the position transformation matrix, and then splicing it with the three-dimensional coordinate points of the auxiliary feature points and the point cloud data to obtain the target scan data.
  7. A scan data processing apparatus, characterized by comprising:
    an image acquisition module, configured to acquire multiple scanned images containing auxiliary feature points;
    an image processing module, configured to process the multiple scanned images to obtain three-dimensional coordinate points of the auxiliary feature points;
    a generation module, configured to generate target scan data based on the three-dimensional coordinate points of the auxiliary feature points and point cloud data.
  8. The scan data processing apparatus according to claim 7, characterized in that the generation module is specifically configured to:
    perform three-dimensional reconstruction based on each of the scanned images to obtain a reconstructed image sequence;
    process the reconstructed image sequence to obtain the three-dimensional coordinate points of the auxiliary feature points.
  9. An electronic device, characterized in that the electronic device includes:
    a processor;
    a memory for storing instructions executable by the processor;
    the processor, configured to read the executable instructions from the memory and execute the instructions to implement the scan data processing method according to any one of claims 1-6.
  10. A computer-readable storage medium, characterized in that the storage medium stores a computer program, the computer program being used to perform the scan data processing method according to any one of claims 1-6.
PCT/CN2023/091808 2022-05-02 2023-04-28 一种扫描数据处理方法、装置、电子设备及介质 WO2023213253A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210477084.2 2022-05-02
CN202210477084.2A CN114708150A (zh) 2022-05-02 2022-05-02 一种扫描数据处理方法、装置、电子设备及介质

Publications (1)

Publication Number Publication Date
WO2023213253A1 true WO2023213253A1 (zh) 2023-11-09

Family

ID=82177070

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/091808 WO2023213253A1 (zh) 2022-05-02 2023-04-28 一种扫描数据处理方法、装置、电子设备及介质

Country Status (2)

Country Link
CN (1) CN114708150A (zh)
WO (1) WO2023213253A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708150A (zh) * 2022-05-02 2022-07-05 先临三维科技股份有限公司 一种扫描数据处理方法、装置、电子设备及介质
CN115512045B (zh) * 2022-09-23 2023-09-26 先临三维科技股份有限公司 三维模型构建方法及装置、口内扫描仪

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110398207A (zh) * 2019-01-17 2019-11-01 重庆交通大学 一种边坡变形监测方法、装置、终端及存储介质
CN111023970A (zh) * 2019-12-17 2020-04-17 杭州思看科技有限公司 多模式三维扫描方法及系统
US20200170760A1 (en) * 2017-05-27 2020-06-04 Medicim Nv Method for intraoral scanning directed to a method of processing and filtering scan data gathered from an intraoral scanner
CN112022387A (zh) * 2020-08-27 2020-12-04 北京大学口腔医学院 一种种植体的定位方法、装置、设备及存储介质
CN113483695A (zh) * 2021-07-01 2021-10-08 先临三维科技股份有限公司 三维扫描系统、辅助件、处理方法、装置、设备及介质
CN113532311A (zh) * 2020-04-21 2021-10-22 广东博智林机器人有限公司 点云拼接方法、装置、设备和存储设备
CN113592989A (zh) * 2020-04-14 2021-11-02 广东博智林机器人有限公司 一种三维场景的重建系统、方法、设备及存储介质
CN114052960A (zh) * 2020-07-31 2022-02-18 先临三维科技股份有限公司 数字化口腔数据的获取方法及装置、牙齿扫描仪控制系统
CN114708150A (zh) * 2022-05-02 2022-07-05 先临三维科技股份有限公司 一种扫描数据处理方法、装置、电子设备及介质


Also Published As

Publication number Publication date
CN114708150A (zh) 2022-07-05

Similar Documents

Publication Publication Date Title
WO2023213253A1 (zh) 一种扫描数据处理方法、装置、电子设备及介质
CN107194962B (zh) 点云与平面图像融合方法及装置
WO2023213254A1 (zh) 一种口内扫描处理方法、系统、电子设备及介质
CN116182878B (zh) 道路曲面信息生成方法、装置、设备和计算机可读介质
CN109754464B (zh) 用于生成信息的方法和装置
WO2024001959A1 (zh) 一种扫描处理方法、装置、电子设备及存储介质
WO2023241704A1 (zh) 牙齿模型的获取方法、装置、设备及介质
WO2023213252A1 (zh) 扫描数据处理方法、装置、设备及介质
CN114494388A (zh) 一种大视场环境下图像三维重建方法、装置、设备及介质
CN115205128A (zh) 基于结构光的深度相机温漂校正方法、系统、设备及介质
WO2024094087A1 (zh) 一种三维扫描方法、装置、系统、设备及介质
WO2023213255A1 (zh) 扫描装置及其连接方法、装置、电子设备及介质
CN112150491B (zh) 图像检测方法、装置、电子设备和计算机可读介质
WO2023237065A1 (zh) 回环检测方法、装置、电子设备及介质
WO2023138467A1 (zh) 虚拟物体的生成方法、装置、设备及存储介质
WO2024109796A1 (zh) 一种扫描头位姿检测方法、装置、设备及介质
CN111354070B (zh) 一种立体图形生成方法、装置、电子设备及存储介质
WO2024109795A1 (zh) 一种扫描处理方法、装置、设备及介质
CN112883757B (zh) 生成跟踪姿态结果的方法
CN117373024B (zh) 标注图像生成方法、装置、电子设备和计算机可读介质
WO2024109268A1 (zh) 一种数字模型比对方法、装置、设备及介质
CN111275813B (zh) 数据处理方法、装置和电子设备
CN116630436B (zh) 相机外参修正方法、装置、电子设备和计算机可读介质
CN116492082B (zh) 基于三维模型的数据处理方法、装置、设备及介质
CN112880675B (zh) 用于视觉定位的位姿平滑方法、装置、终端和移动机器人

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23799239

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023799239

Country of ref document: EP

Effective date: 20240430