CN118429550A - Three-dimensional reconstruction method, system, electronic equipment and storage medium - Google Patents


Publication number
CN118429550A
Authority
CN
China
Prior art keywords
scanned
dimensional
scanning
scanning module
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410901847.0A
Other languages
Chinese (zh)
Other versions
CN118429550B (en)
Inventor
李仁举
施飞
居冰峰
赵晓波
江腾飞
张健
孙安玉
王文斌
黄磊杰
李洲强
朱吴乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shining 3D Technology Co Ltd
Original Assignee
Shining 3D Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shining 3D Technology Co Ltd filed Critical Shining 3D Technology Co Ltd
Priority to CN202410901847.0A priority Critical patent/CN118429550B/en
Publication of CN118429550A publication Critical patent/CN118429550A/en
Application granted granted Critical
Publication of CN118429550B publication Critical patent/CN118429550B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application provides a three-dimensional reconstruction method, a system, an electronic device and a storage medium. The three-dimensional reconstruction method comprises: acquiring first point cloud data obtained by scanning an object to be scanned with the scanning module having the farthest working distance, the first point cloud data comprising a plurality of first three-dimensional points; determining characteristic parameters of the first point cloud data; determining a target area on the object to be scanned according to the characteristic parameters; determining a target scanning module from a plurality of scanning modules according to the characteristic parameters in the target area; acquiring second point cloud data obtained by the target scanning module scanning the target area, the second point cloud data comprising a plurality of second three-dimensional points; determining a weight for each first three-dimensional point and each second three-dimensional point; and fusing, according to the weights, the plurality of first three-dimensional points and the plurality of second three-dimensional points corresponding to at least one target area to obtain three-dimensional reconstruction data of the object to be scanned. The application relates to the technical field of three-dimensional reconstruction and can improve the accuracy of the three-dimensional reconstruction of the object to be scanned.

Description

Three-dimensional reconstruction method, system, electronic equipment and storage medium
Technical Field
The present application relates to the field of three-dimensional scanning technologies, and in particular, to a three-dimensional reconstruction method, system, electronic device, and storage medium.
Background
Currently, when performing cross-scale scanning of objects in a large-sized scene, multiple scanning devices are generally relied upon to scan the object to be scanned so as to ensure the integrity of the point cloud data. For example, a handheld structured-light scanning module is typically used to ensure the integrity of the global data, while high-precision fixed equipment acquires the fine detail data; the point cloud data from the different devices are then stitched during point cloud reconstruction, and point clouds covering the same area are merged according to point cloud weights. However, data reconstructed in this way is prone to visible transition traces and to the risk of broken mesh connections, so the accuracy of the three-dimensional reconstruction data is low; moreover, because the mesh data must be post-processed at the back end, the fixed equipment has difficulty scanning large-scale targets and the time efficiency is low.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a three-dimensional reconstruction method, system, electronic device and storage medium, so as to solve the technical problems of low accuracy of three-dimensional reconstruction data and low scanning efficiency.
The application provides a three-dimensional reconstruction method applied to an electronic device. The electronic device is communicatively connected to a plurality of scanning modules, the plurality of scanning modules are used for scanning an object to be scanned, and each scanning module has a different working distance range. The method comprises the following steps: acquiring first point cloud data obtained by scanning the object to be scanned with the scanning module having the farthest working distance, the first point cloud data comprising a plurality of first three-dimensional points; determining characteristic parameters of the first point cloud data; determining a target area on the object to be scanned according to the characteristic parameters; determining a target scanning module from the plurality of scanning modules according to the characteristic parameters in the target area; acquiring second point cloud data obtained by the target scanning module scanning the target area, the second point cloud data comprising a plurality of second three-dimensional points; determining a weight for each first three-dimensional point and each second three-dimensional point; and fusing, according to the weights, the plurality of first three-dimensional points and the plurality of second three-dimensional points corresponding to at least one target area to obtain three-dimensional reconstruction data of the object to be scanned.
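The step order of the claimed method can be sketched as follows. This is a hypothetical illustration only: the `ScanModule` fields, the curvature threshold, and the module-selection and fusion rules here are assumptions standing in for the patent's criteria, which are detailed in later embodiments.

```python
# Hypothetical sketch of the claimed method's step order; the threshold and
# the selection/fusion rules are illustrative assumptions, not the patent's.
from dataclasses import dataclass

@dataclass
class ScanModule:
    name: str
    working_distance_mm: float  # how far from the object it scans
    resolution_dpi: float       # point cloud physical resolution

def reconstruct(modules, scan, curvature_of, threshold=0.5):
    """modules: list[ScanModule]; scan(module, region) -> list of (x, y, z)."""
    far = max(modules, key=lambda m: m.working_distance_mm)
    first_points = scan(far, None)                       # global scan first
    target_region = [p for p in first_points if curvature_of(p) > threshold]
    if not target_region:                                # nothing complex to rescan
        return first_points
    detail = max((m for m in modules if m is not far),   # placeholder: finest module
                 key=lambda m: m.resolution_dpi)
    second_points = scan(detail, target_region)          # detail scan of the region
    return first_points + second_points                  # stand-in for weighted fusion
```

The `scan` and `curvature_of` callables abstract away the hardware and the feature analysis, so only the claimed ordering of steps is shown.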
In some embodiments, the characteristic parameters include: the category of the object to be scanned; and curvature data of the first point cloud data.
In some embodiments, determining the target scanning module from the plurality of scanning modules according to the characteristic parameters in the target area comprises: traversing the category of each scanning module; and when the category of a traversed scanning module is the same as the category of the object to be scanned, determining the scanning module corresponding to that category as the target scanning module.
In some embodiments, determining the target scanning module from the plurality of scanning modules according to the characteristic parameters in the target area includes: determining, according to the curvature data, the physical resolution required for scanning the target area; traversing the point cloud physical resolution corresponding to each scanning module and determining the difference between that resolution and the physical resolution required by the target area; and, from among the scanning modules whose point cloud physical resolution is greater than the physical resolution required by the target area, determining the scanning module corresponding to the smallest difference as the target scanning module.
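The selection rule in this embodiment can be sketched directly: among scanning modules whose point cloud physical resolution exceeds what the target area requires, pick the one with the smallest surplus. The module names and dpi values below are hypothetical.

```python
# Sketch of this embodiment's selection rule; module names and dpi values
# are hypothetical examples.
def pick_target_module(modules_dpi, required_dpi):
    """modules_dpi: {name: physical resolution in dpi}; returns a name or None."""
    eligible = {n: d for n, d in modules_dpi.items() if d > required_dpi}
    if not eligible:
        return None  # no module resolves finely enough
    return min(eligible, key=lambda n: eligible[n] - required_dpi)

# A target area needing 600 dpi is matched to the 800 dpi module rather than
# the coarser 400 dpi one or the needlessly fine 1200 dpi one.
print(pick_target_module({"far": 400, "mid": 800, "near": 1200}, 600))
```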
In some embodiments, a fusion resolution in the electronic device is set according to a point cloud physical resolution of the target scanning module, where the fusion resolution is used to characterize a resolution of three-dimensional reconstruction data of the target region.
In some embodiments, the number of light rays emitted by the target scanning module when scanning the object to be scanned is determined based on the distance between the target scanning module and the object to be scanned.
In some embodiments, the weight of each first three-dimensional point is determined according to the position, within the corresponding scanning module, of the light ray forming that point, and the weight of each second three-dimensional point is determined according to the position of the corresponding light ray within the target scanning module; and/or the weight of each first three-dimensional point is determined according to the distance between the corresponding scanning module and the object to be scanned, and the weight of each second three-dimensional point is determined according to the distance between the target scanning module and the object to be scanned; and/or the weight of each first three-dimensional point is determined according to the incident angle at which the light ray forming that point strikes the object to be scanned, and the weight of each second three-dimensional point is determined according to the incident angle at which the corresponding light ray strikes the object to be scanned.
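The three weighting cues above can be combined into a single per-point weight. The sketch below assumes, as the embodiments suggest, that points formed by the central ray, by closer scans, and by near-normal incidence are more reliable; the multiplicative combination and the linear falloffs are illustrative assumptions, not the patent's formula.

```python
import math

# Illustrative per-point weight; the multiplicative combination is an
# assumption, not the patent's exact formula.
def point_weight(ray_index, n_rays, distance_mm, max_distance_mm, incidence_rad):
    ray_w = 1.0 - abs(ray_index - n_rays // 2) / (n_rays / 2)  # central ray -> 1
    dist_w = 1.0 - distance_mm / max_distance_mm               # closer scan -> larger
    angle_w = math.cos(incidence_rad)                          # grazing light -> 0
    return max(ray_w, 0.0) * max(dist_w, 0.0) * max(angle_w, 0.0)
```

For example, a point from the central ray of a 7-ray module at 200 mm out of a 1000 mm range, lit head-on, gets weight 0.8, while an outermost ray or a grazing incidence drives the weight toward zero.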
The embodiment of the application also provides a three-dimensional reconstruction system, comprising a plurality of scanning modules and an electronic device. The plurality of scanning modules are used for scanning the object to be scanned, and each scanning module is at a different distance from the object when scanning. The electronic device is used for acquiring the first point cloud data acquired by the scanning module corresponding to the largest distance; determining characteristic parameters of the first point cloud data; determining a target scanning module from the plurality of scanning modules according to the characteristic parameters; acquiring second point cloud data obtained by the target scanning module scanning the object to be scanned; and fusing the first point cloud data and the second point cloud data to obtain three-dimensional reconstruction data of the object to be scanned.
The embodiment of the application also provides an electronic device, comprising: a memory storing at least one instruction; and a processor that executes the instruction stored in the memory to implement the three-dimensional reconstruction method described above.
The embodiment of the application also provides a computer-readable storage medium storing at least one instruction, the at least one instruction being executed by a processor in an electronic device to implement the three-dimensional reconstruction method described above.
According to the above technical scheme, scanning modules with different working distances are arranged in the scanning system. During real-time scanning, the scanning module with the farthest distance is used first to obtain the first point cloud data; the characteristic parameters of the object to be scanned are then determined from the first point cloud data, the physical resolution required by the object to be scanned is determined from those characteristic parameters, and finally the system switches to the matching scanning module to perform three-dimensional reconstruction of the object to be scanned. In this way, the accuracy of analyzing features of different structures and shapes can be improved while the integrity of the point cloud data is ensured, thereby improving the accuracy of the three-dimensional reconstruction data.
Drawings
Fig. 1 is a schematic diagram of a three-dimensional reconstruction system according to an embodiment of the present application.
Fig. 2 is a flowchart of a three-dimensional reconstruction method according to an embodiment of the present application.
Fig. 3 is a flowchart of a method for determining a target scan module according to an embodiment of the application.
Fig. 4 is a flowchart of a method for determining a target scan module according to another embodiment of the present application.
Fig. 5 is a flowchart of a method for determining weights of a first three-dimensional point and a second three-dimensional point according to an embodiment of the present application.
Fig. 6 is a flowchart of a method of determining weights of a first three-dimensional point and a second three-dimensional point according to another embodiment of the present application.
Fig. 7 is a flowchart of a method of determining weights of a first three-dimensional point and a second three-dimensional point according to still another embodiment of the present application.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The application will be described in detail below with reference to the drawings and specific embodiments so that its objects, features and advantages may be more clearly understood. It should be noted that, without conflict, the embodiments of the present application and the features in those embodiments may be combined with each other. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the present application; the described embodiments are merely some, rather than all, of the embodiments of the present application.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
The embodiment of the application provides a three-dimensional reconstruction method, which can be applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device may be any electronic product capable of human-computer interaction with a user, such as a personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a game console, an Internet Protocol Television (IPTV), a smart wearable device, and the like.
The electronic device may also include a network device and/or a client device. The network device includes, but is not limited to, a single network server, a server group composed of a plurality of network servers, or a cloud computing (Cloud Computing) cluster composed of a large number of hosts or network servers.
The network in which the electronic device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), and the like.
As shown in fig. 1, the three-dimensional reconstruction system provided by the present application includes an electronic device 100 and a plurality of scanning modules, where the electronic device 100 is communicatively connected to a database 200. The electronic device is also communicatively connected to a plurality of scan modules, such as the first scan module 300, the second scan module 400, and the third scan module 500 shown in fig. 1. The plurality of scanning modules are used for scanning the object 600 to be scanned so as to reconstruct the object 600 to be scanned in three dimensions. Wherein, the distance between each scanning module and the object 600 to be scanned is different when scanning. The database 200 may be a data storage device built in the electronic device 100, or an external data storage device communicatively connected to the electronic device 100, which is not limited in this regard. The database 200 is used for storing point cloud data obtained by scanning the object 600 to be scanned by a plurality of scanning modules.
In an embodiment of the present application, the electronic device 100 obtains first point cloud data obtained by scanning the object to be scanned by the scanning module corresponding to the maximum distance from the database 200, where the first point cloud data includes a plurality of first three-dimensional points. The scan module corresponding to the maximum distance may be the scan module 300 shown in fig. 1. The electronic device 100 is further configured to determine a characteristic parameter of the first point cloud data, and determine a target scan module from a plurality of scan modules according to the characteristic parameter, where the target scan module may be the second scan module 400 or the third scan module 500 shown in fig. 1. The electronic device 100 is further configured to obtain second point cloud data obtained by the target scanning module (for example, the second scanning module 400 or the third scanning module 500 shown in fig. 1) scanning the object 600 to be scanned, where the second point cloud data includes a plurality of second three-dimensional points. The electronic device 100 is further configured to determine weights of each first three-dimensional point and each second three-dimensional point, and fuse the plurality of first three-dimensional points and the plurality of second three-dimensional points according to the weights to obtain three-dimensional reconstruction data of the object 600 to be scanned, where the three-dimensional reconstruction data is used to characterize a structure and a shape of the object 600 to be scanned.
Fig. 2 is a flowchart of a three-dimensional reconstruction method according to an embodiment of the present application. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs. The three-dimensional reconstruction method provided by the embodiment of the application comprises the following steps of.
S20, acquiring first point cloud data obtained by scanning the object to be scanned by a scanning module with the farthest working distance; wherein the first point cloud data includes a plurality of first three-dimensional points.
In an embodiment of the present application, the object to be scanned may be any object that needs to measure geometry, for example, an automobile, a medical device, a product part, etc., and the attribute and the shape of the object to be scanned are not limited in the present application. A small number of object marker points, which may be retroreflective marker points or retroreflective textures, may be affixed to the object to be scanned prior to measuring the object to be scanned.
In an embodiment of the application, a camera and a light source emitter are disposed in the scanning module. The light source emitter is used for projecting structured light onto the surface of the object to be scanned; the camera is used for capturing the structured light reflected by that surface, and point cloud data corresponding to the object to be scanned is determined from the reflected light. In order to ensure the integrity of the point cloud data, the object to be scanned may be scanned by a plurality of scanning modules, each having a different working distance range and thus a different distance from the object to be scanned when in use. In this way the application can acquire point cloud data at different depths and different angles of the object to be scanned; the specific scanning posture and position of each scanning module are not limited.
The light source may be a laser, a DLP projector, an LED or the like, and the projected structured light may be a single line of light, multiple parallel lines, multiple groups of crossed parallel lines, speckle, a grating, and so on.
In an embodiment of the present application, the first point cloud data may be a set of point cloud data obtained when the scanning module with the farthest working distance scans the object to be scanned. The first point cloud data comprises a plurality of first three-dimensional points, and each first three-dimensional point corresponds to one three-dimensional coordinate. The scan module having the greatest distance from the object to be scanned may be, for example, the scan module 300 shown in fig. 1.
In an embodiment of the present application, the three-dimensional coordinates of the first three-dimensional point are used to characterize the position of the first three-dimensional point in a preset coordinate system. The origin of coordinates of the preset coordinate system may be a position of the scanning module in the target space where the object to be scanned is located, or may be a preset point in the target space, which is not limited in the present application.
S21, determining characteristic parameters of the first point cloud data.
In an embodiment of the present application, the feature parameter of the first point cloud data is used to describe a local attribute of each first three-dimensional point. The characteristic parameters include: the category of the object to be scanned; and curvature data of the first point cloud data. Wherein the class of the object to be scanned is used for representing the structure and the shape of the object to be scanned. For example, when the class of the object to be scanned is a sphere, it indicates that the surface structure of the object to be scanned is a sphere structure; when the category of the object to be scanned is a cube, the surface structure of the object to be scanned is indicated to be a cube structure. The application does not limit the specific category of the object to be scanned. Wherein the curvature data of the first point cloud data is used for representing the roughness degree of the surface of the object to be scanned or the complexity degree of the surface shape. When the curvature data is larger, the surface roughness or the complexity of the surface shape of the object to be scanned is indicated to be higher; the smaller the curvature data, the lower the surface roughness or complexity of the surface shape of the object to be scanned is indicated.
In some embodiments, the characteristic parameters of the first point cloud data further comprise: the normal vector is used for representing the normal direction of the surface where each first three-dimensional point is located; the surface roughness is used for representing the smoothness degree or the roughness degree of the point cloud surface and can be used for analyzing the texture of the object surface; color values, intensity values, incident directions of light, etc. of the first point cloud data.
S22, determining a target area on the object to be scanned according to the characteristic parameters, and determining a target scanning module from the plurality of scanning modules according to the characteristic parameters in the target area.
In an embodiment of the present application, in order to three-dimensionally reconstruct the object to be scanned from the point cloud data obtained by the plurality of scanning modules and thereby improve the accuracy of the reconstruction, the target scanning module may be determined from the plurality of scanning modules according to the characteristic parameters of the first point cloud data.
In an embodiment of the present application, during scanning, in order to improve accuracy of three-dimensional reconstruction of a region with a complex structure and shape on an object to be scanned, a target region may be first determined on the object to be scanned according to a feature parameter. For example, the characteristic parameter may be curvature data of the first point cloud data, and when curvature data of point cloud data corresponding to an arbitrary region on the object to be scanned is larger, it indicates that the structure and shape of the region on the object to be scanned are more complex, and the region may be determined to be the target region.
In an embodiment of the present application, each scanning module is suited to scanning objects at different distances and with different structures and shapes. For example, the light emitted by some scanning modules may be suited to scanning an object with a spherical structure, while the light emitted by other scanning modules may be suited to scanning an object with a cubic structure. The application does not limit which types of objects each scanning module is suited to scan. Specifically, when the characteristic parameters include the category of the object to be scanned, the method for determining the target scanning module may refer to the detailed description corresponding to fig. 3.
In an embodiment of the application, curvature data of the first point cloud data is used to characterize a local geometry of the first point cloud data surface, which can be used to describe a degree of curvature or a change in curvature of the first three-dimensional point. When the curvature data is larger, the surface roughness degree or the surface shape complexity degree of the object to be scanned is higher, the object to be scanned can be scanned according to the scanning module with higher physical resolution, so that the accuracy of three-dimensional reconstruction of the object to be scanned is improved; when the curvature data is smaller, the surface roughness or the complexity of the surface shape of the object to be scanned is lower, the object to be scanned can be scanned according to the scanning module with lower physical resolution, and therefore the efficiency of three-dimensional reconstruction of the object to be scanned is improved on the basis of ensuring the accuracy of three-dimensional reconstruction. Specifically, when the feature parameter includes curvature data of the first point cloud data, the method for determining the target scan module may refer to the detailed description corresponding to fig. 4.
In an embodiment of the present application, in order to ensure the accuracy of the three-dimensional reconstruction of the object to be scanned while improving its efficiency, the method further includes: setting a fusion resolution in the electronic device according to the point cloud physical resolution of the target scanning module, where the fusion resolution is used to characterize the resolution of the three-dimensional reconstruction data of the target area. The point cloud physical resolution of a scanning module may be its optical resolution, which measures the precision of the photosensitive device in the scanning module. Specifically, the point cloud physical resolution may be the actual number of light points captured per inch by the optical component of the scanning module, generally expressed in dpi (dots per inch), which reflects the ability of the scanning module to capture details of the point cloud data. The higher the point cloud physical resolution, the richer the details of the point cloud data captured by the scanning module.
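One plausible way to tie the fusion resolution to the target module's dpi figure, offered purely as an assumption since the patent does not give the mapping, is to convert dots per inch into a point spacing in millimetres and use that spacing as the fusion voxel size:

```python
# Assumed mapping from a module's dpi to a fusion voxel size; the patent
# states only that the fusion resolution follows the module's resolution.
def fusion_voxel_size_mm(point_cloud_dpi):
    return 25.4 / point_cloud_dpi  # 25.4 mm per inch

print(fusion_voxel_size_mm(254))  # a 254 dpi module -> 0.1 mm voxels
```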
In an embodiment of the present application, in order to improve the accuracy of the three-dimensional reconstruction of the object to be scanned, the target space in which the object to be scanned is located may be divided into a plurality of voxels, providing data support for voxelizing the point cloud data. The point cloud data corresponding to each voxel can be obtained by dividing the point cloud data among the voxels, and the point cloud data corresponding to each voxel is then fused, thereby improving the accuracy of the three-dimensional reconstruction. The number of voxels in the target space depends on the voxel size. Illustratively, a voxel may measure 1 mm per side, or 0.3 mm per side; the application is not limited in this regard.
In an embodiment of the present application, in the process of three-dimensionally reconstructing the object to be scanned, if the voxel size is too large, the target space is divided at insufficiently fine granularity, which causes information loss; if the voxel size is too small, the number of voxels in the target space becomes excessive, which makes the three-dimensional reconstruction time-consuming and raises the computing resource occupancy of the electronic device. To avoid both problems, the fusion resolution in the electronic device may be set according to the point cloud physical resolution of the target scanning module, where the fusion resolution is used to characterize the resolution of the three-dimensional reconstruction data corresponding to the object to be scanned.
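The voxel division described above can be sketched minimally: points are binned into cubic voxels of the chosen size and each occupied voxel is reduced to one fused point. Here a plain average stands in for the patent's weighted fusion.

```python
from collections import defaultdict

# Minimal voxel-grid fusion sketch; a plain average stands in for the
# patent's per-point weighted fusion.
def voxel_fuse(points, voxel_mm):
    bins = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_mm), int(y // voxel_mm), int(z // voxel_mm))
        bins[key].append((x, y, z))
    return [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in bins.values()]

pts = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (1.5, 0.0, 0.0)]
print(len(voxel_fuse(pts, 1.0)))  # two occupied voxels -> 2 fused points
```

Shrinking `voxel_mm` keeps more detail at the cost of more voxels, which is exactly the trade-off the paragraph above describes.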
In one embodiment of the present application, a laser source is taken as an example. When the camera in the scanning module matches up the projected light rays, alignment errors may occur because the rays projected by the scanning module are dense and geometrically similar, so the error of the point cloud data is high. In order to improve the accuracy of the acquired point cloud data, the number of light rays emitted by the scanning module can be reduced. Specifically, during three-dimensional reconstruction the scanning module emits a plurality of light rays toward the object to be scanned, comprising a central ray and remaining rays distributed in parallel around it. Because the light planes projected by the remaining rays are strongly curved at the surface, the point cloud data generated by the remaining rays deform considerably, and the accuracy of the point cloud data they generate decreases progressively toward the outside.
In an embodiment of the present application, the process of three-dimensionally reconstructing the object to be scanned further includes: determining, based on the distance between the target scanning module and the object to be scanned, the number of light rays emitted by the target scanning module when it scans the object. Specifically, when the target scanning module scans the object to be scanned, at least one pair of its light rays can be canceled, and the number of canceled rays can be determined according to the distance between the scanning module and the object to be scanned: the longer the distance, the fewer rays are canceled, and the shorter the distance, the more rays are canceled. For example, when the distance between the scanning module farthest from the object to be scanned (e.g., the first scanning module 300 shown in fig. 1) and the object to be scanned (e.g., the object 600 shown in fig. 1) is 10 cm, if the target scanning module emits 7 light rays to scan the object, at least one pair of the outermost of the 7 rays may be canceled. Reducing the number of rays reduces the complexity of the point cloud data and thus improves the efficiency of point cloud processing; reducing the number of outer rays reduces the point cloud data corresponding to the edge of the object to be scanned, which lessens deformation of the point cloud data and improves the accuracy of its processing.
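The ray-cancellation rule can be sketched as follows. The patent fixes only the direction of the trend (shorter distance, more canceled pairs, central ray kept); the linear mapping and the 7-ray layout below are illustrative assumptions.

```python
# Hedged sketch of the ray-cancellation rule; only the trend direction
# comes from the text, the linear mapping is an assumption.
def rays_to_keep(n_rays, distance_mm, max_distance_mm):
    pairs = (n_rays - 1) // 2                     # symmetric pairs around the center
    cancelled = round(pairs * (1 - distance_mm / max_distance_mm))
    return n_rays - 2 * cancelled                 # the central ray is always kept

print(rays_to_keep(7, 1000, 1000), rays_to_keep(7, 100, 1000))  # all 7 at range, fewer up close
```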
S23, acquiring second point cloud data obtained by scanning the target area by the target scanning module; wherein the second point cloud data includes a plurality of second three-dimensional points.
In an embodiment of the present application, the second point cloud data may be a set of point cloud data obtained when the target scanning module scans the object to be scanned. The second point cloud data comprises a plurality of second three-dimensional points, and each second three-dimensional point corresponds to one three-dimensional coordinate. The target scan module may be, for example, the scan module 400 or the scan module 500 shown in fig. 1.
In an embodiment of the present application, the three-dimensional coordinates of the second three-dimensional point are used to characterize the position of the second three-dimensional point in a preset coordinate system. The origin of coordinates of the preset coordinate system may be a position of the scanning module in the target space where the object to be scanned is located, or may be a preset point in the target space, which is not limited in the present application.
And S24, determining the weight of each first three-dimensional point and each second three-dimensional point.
In an embodiment of the present application, in order to improve the accuracy of the three-dimensional reconstruction of the object to be scanned, the weight of each first three-dimensional point and each second three-dimensional point may be determined by combining the distance between each scanning module and the object to be scanned, the position within the corresponding scanning module of the light ray that forms the first or second three-dimensional point, and the incident angle at which that light ray irradiates the object to be scanned.
In an embodiment of the present application, the incident angle at which the light ray corresponding to a first three-dimensional point irradiates the object to be scanned may be determined according to the three-dimensional coordinates of the first three-dimensional point in the preset coordinate system and the three-dimensional coordinates of the scanning module in the preset coordinate system; likewise, the incident angle at which the light ray corresponding to a second three-dimensional point irradiates the object to be scanned may be determined according to the three-dimensional coordinates of the second three-dimensional point and of the scanning module in the preset coordinate system. Specifically, a line connecting the coordinates of the first three-dimensional point and the scanning module can be determined, a normal vector of the first point cloud data can be determined, and the included angle between the connecting line and the normal vector of the first point cloud data can be calculated to obtain the incident angle at which the light ray corresponding to the first three-dimensional point irradiates the object to be scanned; the incident angle for the second three-dimensional point is obtained in the same way, using the connecting line between the second three-dimensional point and the scanning module and the normal vector of the second point cloud data.
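The included-angle computation described above can be sketched as follows; the function assumes all coordinates are given in the same preset coordinate system and is only an illustration, not the application's actual implementation:

```python
import math


def incident_angle_deg(point, module_pos, normal):
    """Angle between the point->module connecting line and the point-cloud normal."""
    v = [m - p for m, p in zip(module_pos, point)]      # connecting line direction
    dot = sum(a * b for a, b in zip(v, normal))
    nv = math.sqrt(sum(a * a for a in v))
    nn = math.sqrt(sum(a * a for a in normal))
    cosang = max(-1.0, min(1.0, dot / (nv * nn)))       # clamp for safety
    return math.degrees(math.acos(cosang))
```

For a module directly above a point whose normal points straight up, the angle is 0 degrees; for a module level with the point, it is 90 degrees.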
In an embodiment of the present application, the method for determining the weights of the first three-dimensional point and the second three-dimensional point is shown in the detailed description corresponding to fig. 5, the detailed description corresponding to fig. 6, and/or the detailed description corresponding to fig. 7.
And S25, fusing the plurality of first three-dimensional points and the plurality of second three-dimensional points corresponding to at least one target area according to the weights to obtain three-dimensional reconstruction data of the object to be scanned.
In an embodiment of the present application, in order to fit an approximate plane of a surface of an object to be scanned in each voxel according to point cloud data in each voxel, a fitting may be performed on a first three-dimensional point in each voxel and a second three-dimensional point corresponding to at least one target area based on a least square method, so as to obtain a target point corresponding to each voxel and a target normal vector, and then the approximate plane of the surface of the object to be scanned in each voxel may be determined according to the target point and the target normal vector. When fitting the first three-dimensional point in each voxel and the second three-dimensional point corresponding to at least one target area based on the least square method, the confidence level of the first three-dimensional point and the second three-dimensional point can be adjusted in the fitting process according to the weight of the first three-dimensional point and the second three-dimensional point. Thus, the accuracy of the three-dimensional reconstruction data of the object to be scanned can be improved.
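A minimal sketch of the weighted least-squares fit per voxel, assuming NumPy is available; here the weighted centroid plays the role of the target point and the singular vector with the smallest singular value plays the role of the target normal vector. This is one standard way to realize the fit described above, not necessarily the application's implementation:

```python
import numpy as np


def fit_plane_weighted(points, weights):
    """Weighted least-squares plane fit: returns (target point, target normal)."""
    P = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    c = (w[:, None] * P).sum(axis=0) / w.sum()    # weighted centroid -> target point
    D = (P - c) * np.sqrt(w)[:, None]             # weight-scaled residuals
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    return c, vt[-1]                              # smallest singular vector -> normal
```

Points lying on the plane z = 2, for example, yield a target point with z = 2 and a normal along the z axis regardless of the (positive) weights chosen.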
In an embodiment of the present application, after the target point and the target normal vector corresponding to each voxel are obtained, an approximate plane of the object to be scanned in the voxels may be determined according to the target point and the corresponding target normal vector, and then the approximate plane of each voxel is fused to obtain three-dimensional reconstruction data of the object to be scanned.
For example, when a curve C1 and a curve C2 exist on the surface of the object to be scanned and intersect at a three-dimensional point P, if the tangent vector of the curve C1 at the three-dimensional point P is T1 and the tangent vector of the curve C2 at the three-dimensional point P is T2, then both T1 and T2 lie in the tangent plane of the surface at the three-dimensional point P. From the vectors T1 and T2, the normal vector at the three-dimensional point P may be determined; the direction of the tangent plane may then be determined from the three-dimensional coordinates of the three-dimensional point P and the normal vector, and the surface of the object to be scanned at the three-dimensional point P may be characterized based on the tangent plane. In order to obtain an approximate plane corresponding to the surface of the object to be scanned, a local approximate plane of each voxel may be determined according to the above method, and the surface of the object to be scanned may be constructed from the local approximate planes of the voxels.
In an embodiment of the present application, when the three-dimensional coordinates of the three-dimensional point P in the preset coordinate system are (x0, y0, z0) and the target normal vector n is (A, B, C), the plane equation where the three-dimensional point P is located can be determined according to the following relation:
A(x - x0) + B(y - y0) + C(z - z0) = 0; wherein (A, B, C) represents the normal vector of the plane in which the three-dimensional point P is located, and (x0, y0, z0) represents the three-dimensional coordinates of the three-dimensional point P.
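The point-normal plane relation can be checked numerically as below; `plane_coeff_d` and `on_plane` are illustrative helper names, not part of the application:

```python
def plane_coeff_d(point, normal):
    # A(x-x0)+B(y-y0)+C(z-z0)=0  is equivalent to  Ax+By+Cz+D=0
    # with D = -(A*x0 + B*y0 + C*z0)
    A, B, C = normal
    x0, y0, z0 = point
    return -(A * x0 + B * y0 + C * z0)


def on_plane(q, point, normal, tol=1e-9):
    """True when query point q satisfies the plane equation through `point`."""
    A, B, C = normal
    return abs(A * q[0] + B * q[1] + C * q[2] + plane_coeff_d(point, normal)) <= tol
```

For the horizontal plane through (1, 2, 3) with normal (0, 0, 1), every point with z = 3 satisfies the equation and every other point does not.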
According to the technical scheme, the scanning modules with different distances are arranged in the scanning system, the scanning module with the farthest distance is preferentially adopted to obtain first point cloud data in the real-time scanning process, then the characteristic parameters of the object to be scanned are determined according to the first point cloud data, the physical resolution required by the object to be scanned is determined according to the characteristic parameters, and finally the adaptive scanning module is adjusted to carry out three-dimensional reconstruction on the object to be scanned. Therefore, the accuracy of analyzing the characteristics of different structures and shapes can be improved on the basis of ensuring the integrity of the point cloud data, and the accuracy of three-dimensional reconstruction data is improved.
Fig. 3 is a flowchart of a method for determining a target scan module according to an embodiment of the application. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs. The method for determining the target scanning module provided by the embodiment of the application comprises the following steps.
S30, traversing the category of each scanning module.
In an embodiment of the present application, each scanning module is adapted to scan objects to be scanned with different structures and shapes. For example, the light emitted by some scanning modules can be used for scanning an object to be scanned of the sphere structure; the light emitted by other scanning modules can be used for scanning the object to be scanned of the cube structure. To determine the scan module that is suitable for scanning an object to be scanned, a category of each scan module of the plurality of scan modules may be traversed. The application does not limit the sequence of traversing the plurality of scanning modules.
And S31, when the type of the traversed scanning module is the same as the type of the object to be scanned, determining the scanning module corresponding to the type as a target scanning module.
In an embodiment of the present application, when the category of the traversed scanning module is the same as the category of the object to be scanned, it indicates that the traversed scanning module is suitable for scanning the object to be scanned for three-dimensional reconstruction. For example, when the category of the traversed scanning module is sphere and the structure of the object to be scanned is also a sphere, the traversed scanning module may be determined to be the target scanning module. Second point cloud data obtained by the target scanning module scanning the object to be scanned can then be acquired, providing data support for the subsequent fusion of multiple point clouds to obtain three-dimensional reconstruction data.
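The category-matching traversal of S30-S31 can be sketched as follows; the `(name, category)` tuples are a hypothetical representation of the scanning modules, and the traversal order is deliberately left unspecified, as in the application:

```python
def select_by_category(modules, object_category):
    """Return the first traversed module whose category matches the object's."""
    for name, category in modules:
        if category == object_category:
            return name
    return None  # no suitable module found
```

For example, with modules `[("module_300", "cube"), ("module_400", "sphere")]` and a sphere-shaped object, `module_400` is selected.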
Fig. 4 is a flowchart of a method for determining a target scan module according to another embodiment of the present application. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs. The method for determining the target scanning module provided by the embodiment of the application comprises the following steps.
S40, determining the physical resolution required by scanning the target area according to the curvature data.
In an embodiment of the present application, curvature data of the first three-dimensional point may be determined according to three-dimensional coordinates corresponding to the first three-dimensional point, and a physical resolution required when scanning the object to be scanned may be determined according to the curvature data, where the physical resolution is greater when the curvature data is greater, and the physical resolution is smaller when the curvature data is smaller. Specifically, when the curvature data is larger, the structure of the surface of the object to be scanned is more complex, the size of the voxels required by the object to be scanned when being scanned can be determined to be smaller, and the physical resolution is larger, so that the fine granularity of voxelization processing of the point cloud data can be improved, and the accuracy of three-dimensional reconstruction can be improved; when the curvature data is smaller, the surface of the object to be scanned is smoother, the size of the voxels required by the object to be scanned when being scanned can be determined to be larger, and the physical resolution is smaller, so that the efficiency of point cloud data processing is improved.
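A sketch of the curvature-to-resolution mapping described above, using a hypothetical linear relation, dpi range, and voxel-size conversion — none of these numbers come from the application:

```python
def required_resolution(curvature, res_min=100.0, res_max=1200.0, curv_max=1.0):
    """Larger curvature -> larger (finer) required physical resolution."""
    c = max(0.0, min(curvature, curv_max))
    return res_min + (res_max - res_min) * (c / curv_max)


def voxel_edge_mm(resolution_dpi):
    # finer resolution -> smaller voxel edge; 25.4 mm per inch
    return 25.4 / resolution_dpi
```

A smooth region (curvature near 0) maps to the coarse end of the range and a large voxel; a highly curved region maps to the fine end and a small voxel.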
S41, traversing the point cloud physical resolution corresponding to each scanning module, and determining the difference between the point cloud physical resolution and the physical resolution required by the target area.
In an embodiment of the present application, the point cloud physical resolution of a scanning module may be the optical resolution of the scanning module, which measures the precision of the photosensitive device in the scanning module. Specifically, the point cloud physical resolution may be the actual number of points of light captured per inch by the optical component of the scanning module, generally expressed in dpi (dots per inch), which reflects the ability of the scanning module to capture details of the point cloud data. The higher the point cloud physical resolution, the richer the details of the point cloud data captured by the scanning module. In order to determine a scanning module suitable for scanning the object to be scanned, the point cloud physical resolution corresponding to each of the plurality of scanning modules can be traversed. The application does not limit the order in which the plurality of scanning modules are traversed.
In one embodiment of the present application, in order to determine a point cloud physical resolution that is similar to the physical resolution required by the target area, a difference between the point cloud physical resolution and the physical resolution required by the target area is also determined. And determining the point cloud physical resolution similar to the required physical resolution and the corresponding target scanning module according to the difference value.
S42, determining the scanning module corresponding to the smallest difference value as a target scanning module from the scanning modules with the point cloud physical resolution larger than the physical resolution required by the target area.
In an embodiment of the present application, when the physical resolution of the point cloud of the traversed scanning module is greater than the physical resolution required by the target area, and the difference between the physical resolution of the point cloud and the physical resolution required by the target area is the minimum, it indicates that the scanning module is suitable for scanning the object to be scanned, and the accuracy of the second point cloud data scanned by the scanning module conforms to the corresponding physical resolution, so that the scanning module can be determined to be the target scanning module.
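The selection rule of S41-S42 — among modules whose point cloud physical resolution exceeds the physical resolution required by the target area, pick the one with the smallest difference — can be sketched as below; the dictionary of module resolutions is a hypothetical representation:

```python
def select_target_module(module_resolutions, required_dpi):
    """module_resolutions: {name: point cloud physical resolution in dpi}."""
    candidates = {n: r for n, r in module_resolutions.items() if r > required_dpi}
    if not candidates:
        return None  # no module is fine enough for this target area
    # smallest difference above the requirement wins
    return min(candidates, key=lambda n: candidates[n] - required_dpi)
```

With modules at 300, 600, and 1200 dpi and a required resolution of 500 dpi, the 600 dpi module is chosen: it exceeds the requirement with the smallest difference.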
As shown in fig. 5, a flowchart of a method for determining weights of a first three-dimensional point and a second three-dimensional point according to an embodiment of the present application is provided. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs. The method for determining the weights of the first three-dimensional point and the second three-dimensional point provided by the embodiment of the application comprises the following steps.
S50, determining the weight of each first three-dimensional point according to the position of the light ray of each first three-dimensional point in the corresponding scanning module.
In an embodiment of the application, the scanning module emits a plurality of light rays to irradiate the object to be scanned, and receives the light rays reflected by the object to be scanned to obtain the point cloud data of the object to be scanned. When the position of the light ray irradiating the object to be scanned in the corresponding scanning module is far from the center of the scanning module, the position of the light ray irradiating the object to be scanned is far outside, so that the weight of the first three-dimensional point obtained by scanning the object to be scanned by the light ray is lower. When the position of the light ray irradiating the object to be scanned in the corresponding scanning module is closer to the center of the scanning module, the position of the light ray irradiating the object to be scanned is closer to the center of the object to be scanned, so that the weight of the first three-dimensional point obtained by scanning the object to be scanned by the light ray is higher.
S51, determining the weight of each second three-dimensional point according to the position of the ray of each second three-dimensional point in the target scanning module.
In an embodiment of the present application, when the position of the light beam irradiating the object to be scanned in the target scanning module is farther from the center of the target scanning module, the position of the light beam irradiating the object to be scanned is more outside, so that the weight of the second three-dimensional point obtained by scanning the object to be scanned by the light beam is lower. When the position of the light ray irradiating the object to be scanned in the target scanning module is closer to the center of the target scanning module, the position of the light ray irradiating the object to be scanned is closer to the center of the object to be scanned, so that the weight of a second three-dimensional point obtained by scanning the object to be scanned by the light ray is higher.
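A hypothetical linear falloff illustrating the position-based weighting of S50-S51; `ray_offset` counts how far a ray sits from the central ray of the module, and the linear shape is an assumption:

```python
def position_weight(ray_offset, max_offset):
    """1.0 for the central ray, falling linearly to 0.0 at the outermost position."""
    r = min(abs(ray_offset), max_offset) / max_offset
    return 1.0 - r
```

The central ray receives the full weight, and weight decreases monotonically toward the outer rays.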
As shown in fig. 6, a flowchart of a method for determining weights of a first three-dimensional point and a second three-dimensional point according to another embodiment of the present application is provided. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs. The method for determining the weights of the first three-dimensional point and the second three-dimensional point provided by the embodiment of the application comprises the following steps.
S60, determining the weight of each first three-dimensional point according to the distance between the scanning module corresponding to each first three-dimensional point and the object to be scanned.
In an embodiment of the present application, the farther the distance between the scanning module and the object to be scanned is, the rougher the characteristic of the surface of the object to be scanned, which is represented by the point cloud data obtained by the scanning module, the lower the weight of the point cloud data obtained by the scanning module for scanning the object to be scanned is; the closer the distance between the scanning module and the object to be scanned is, the higher the accuracy of the characteristics of the surface of the object to be scanned, which is represented by the point cloud data obtained by the scanning module, the higher the weight of the point cloud data obtained by the scanning module for scanning the object to be scanned.
And S61, determining the weight of each second three-dimensional point according to the distance between the target scanning module and the object to be scanned.
In an embodiment of the present application, the farther the distance between the target scanning module and the object to be scanned is, the rougher the feature of the surface of the object to be scanned, which is represented by the second point cloud data obtained by the target scanning module is, the lower the weight of the second three-dimensional point is; the closer the distance between the scanning module and the object to be scanned is, the higher the accuracy of the characteristic of the surface of the object to be scanned, which is represented by the point cloud data obtained by the scanning module is, the higher the weight of the second three-dimensional point is.
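The distance-based weighting of S60-S61 can be sketched with a clamped linear falloff; the working-distance range below is hypothetical:

```python
def distance_weight(distance, d_min=10.0, d_max=50.0):
    """Closer module -> higher weight; clamped linear falloff (assumed range)."""
    d = max(d_min, min(distance, d_max))
    return (d_max - d) / (d_max - d_min)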
As shown in fig. 7, a flowchart of a method for determining weights of a first three-dimensional point and a second three-dimensional point according to still another embodiment of the present application is provided. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs. The method for determining the weights of the first three-dimensional point and the second three-dimensional point provided by the embodiment of the application comprises the following steps.
And S70, determining the weight of each first three-dimensional point according to the incident angle of the light rays of each first three-dimensional point to irradiate the object to be scanned.
In an embodiment of the present application, when determining an incident angle corresponding to a light ray to which a first three-dimensional point belongs, a coordinate connection line between the first three-dimensional point and a scanning module may be determined, a normal vector of first point cloud data is determined, and an included angle between the coordinate connection line and the normal vector of the first point cloud data is calculated, so as to obtain the incident angle corresponding to the first three-dimensional point when the light ray irradiates the module to be scanned. When the incident angle is larger, the light rays are closer to the direction vertical to the surface of the object to be scanned, and therefore the weight of the first point cloud data is higher; when the angle of incidence is smaller, the closer the ray is to the direction tangential to the surface of the object to be scanned, and therefore the lower the weight of the first point cloud data.
And S71, determining the weight of each second three-dimensional point according to the incident angle of the light rays of each second three-dimensional point to irradiate the object to be scanned.
In an embodiment of the application, a coordinate connection line between the second three-dimensional point and the scanning module can be determined, a normal vector of the second point cloud data is determined, and an included angle of the coordinate connection line and the normal vector between the second point cloud data is calculated to obtain an incident angle when the light corresponding to the second three-dimensional point irradiates the module to be scanned. When the incident angle is larger, the light rays are closer to the direction vertical to the surface of the object to be scanned, and therefore the weight of the second point cloud data is higher; when the angle of incidence is smaller, this means that the light ray is closer to a direction tangential to the surface of the object to be scanned, and thus the weight of the second point cloud data is lower.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 100 comprises a memory 12 and a processor 13. The memory 12 is used for storing computer readable instructions, and the processor 13 is used to execute the computer readable instructions stored in the memory to implement a three-dimensional reconstruction method according to any of the above embodiments.
In an embodiment of the application the electronic device 100 further comprises a bus, a computer program stored in said memory 12 and executable on said processor 13, for example a three-dimensional reconstruction program.
Fig. 8 shows only an electronic device 100 having a memory 12 and a processor 13, and it will be understood by those skilled in the art that the structure shown in fig. 4 is not limiting of the electronic device 100 and may include fewer or more components than shown, or may combine certain components, or a different arrangement of components.
In connection with fig. 2, the memory 12 in the electronic device 100 stores a plurality of computer readable instructions to implement a three-dimensional reconstruction method, the processor 13 being executable to implement: acquiring first point cloud data obtained by scanning the object to be scanned by a scanning module with the farthest working distance; wherein the first point cloud data comprises a plurality of first three-dimensional points; determining characteristic parameters of the first point cloud data; determining a target area on the object to be scanned according to the characteristic parameters; determining a target scanning module from the plurality of scanning modules according to the characteristic parameters in the target area; acquiring second point cloud data obtained by scanning the target area by the target scanning module; wherein the second point cloud data includes a plurality of second three-dimensional points; determining the weight of each first three-dimensional point and each second three-dimensional point; and fusing the plurality of first three-dimensional points and the plurality of second three-dimensional points corresponding to at least one target area according to the weights to obtain three-dimensional reconstruction data of the object to be scanned.
Specifically, the specific implementation method of the above instructions by the processor 13 may refer to the description of the relevant steps in the corresponding embodiment of fig. 2, which is not repeated herein.
Those skilled in the art will appreciate that the schematic diagram is merely an example of the electronic device 100, and is not meant to limit the electronic device 100, and the electronic device 100 may be a bus-type structure, a star-type structure, other hardware or software, or a different arrangement of components than illustrated, where the electronic device 100 may include more or less hardware or software, and where the electronic device 100 may include an input/output device, a network access device, etc.
It should be noted that the electronic device 100 is only an example, and other electronic products that may be present in the present application or may be present in the future are also included in the scope of the present application by way of reference.
The memory 12 includes at least one type of readable storage medium, which may be non-volatile or volatile. The readable storage medium includes flash memory, a removable hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 12 may in some embodiments be an internal storage unit of the electronic device 100, such as a removable hard disk of the electronic device 100. The memory 12 may also be an external storage device of the electronic device 100 in other embodiments, such as a plug-in mobile hard disk, a smart memory card (SMART MEDIA CARD, SMC), a Secure Digital (SD) card, a flash memory card (FLASH CARD), etc. that are provided on the electronic device 100. The memory 12 may be used not only for storing application software installed in the electronic device 100 and various types of data, such as a code of a three-dimensional reconstruction program, etc., but also for temporarily storing data that has been output or is to be output.
The processor 13 may be comprised of integrated circuits in some embodiments, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, various control chips, and the like. The processor 13 is a Control Unit (Control Unit) of the electronic device 100, connects the respective components of the entire electronic device 100 using various interfaces and lines, and executes various functions of the electronic device 100 and processes data by running or executing programs or modules (for example, executing a three-dimensional reconstruction program or the like) stored in the memory 12, and calling data stored in the memory 12.
The processor 13 executes the operating system of the electronic device 100 and various types of applications installed. The processor 13 executes the application program to implement the steps of each of the above-described embodiments of a three-dimensional reconstruction method, such as the steps shown in fig. 2.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 13 to complete the present application. The one or more modules/units may be a series of computer readable instruction segments capable of performing particular functions for describing the execution of the computer program in the electronic device 100.
The integrated units implemented in the form of software functional modules described above may be stored in a computer readable storage medium. The software functional modules described above are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, a computer device, or a network device, etc.) or a Processor (Processor) to perform portions of a three-dimensional reconstruction method according to various embodiments of the present application.
The modules/units integrated with the electronic device 100 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as a stand alone product. Based on this understanding, the present application may also be implemented by a computer program for instructing a relevant hardware device to implement all or part of the procedures of the above-mentioned embodiment method, where the computer program may be stored in a computer readable storage medium and the computer program may be executed by a processor to implement the steps of each of the above-mentioned method embodiments.
Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory, other memories, and the like.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
The bus may be a peripheral component interconnect standard (PERIPHERAL COMPONENT INTERCONNECT, PCI) bus, or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one arrow is shown in FIG. 4, but only one bus or one type of bus is not shown. The bus is arranged to enable a connection communication between the memory 12 and at least one processor 13 or the like.
The embodiment of the present application further provides a computer readable storage medium (not shown), where computer readable instructions are stored, where the computer readable instructions are executed by a processor in an electronic device to implement a three-dimensional reconstruction method according to any one of the foregoing embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. Several of the elements or devices recited in the specification may be embodied by one and the same item of software or hardware. The terms "first", "second", etc. are used to denote names rather than any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present application and not for limiting the same, and although the present application has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present application without departing from the spirit and scope of the technical solution of the present application.

Claims (10)

1. A three-dimensional reconstruction method applied to an electronic device, wherein the electronic device is communicatively connected to a plurality of scanning modules, the plurality of scanning modules are used for scanning an object to be scanned, and each scanning module has a different working distance range, the method comprising:
acquiring first point cloud data obtained by scanning the object to be scanned by a scanning module with the farthest working distance; wherein the first point cloud data comprises a plurality of first three-dimensional points;
determining characteristic parameters of the first point cloud data;
determining a target area on the object to be scanned according to the characteristic parameters;
determining a target scanning module from the plurality of scanning modules according to the characteristic parameters in the target area;
acquiring second point cloud data obtained by scanning the target area by the target scanning module; wherein the second point cloud data comprises a plurality of second three-dimensional points;
determining a weight for each first three-dimensional point and each second three-dimensional point;
and fusing the plurality of first three-dimensional points and the plurality of second three-dimensional points corresponding to at least one target area according to the weights to obtain three-dimensional reconstruction data of the object to be scanned.
2. The three-dimensional reconstruction method according to claim 1, wherein the characteristic parameters include:
the category of the object to be scanned; and curvature data of the first point cloud data.
3. The three-dimensional reconstruction method according to claim 2, wherein the determining a target scan module from the plurality of scan modules according to the characteristic parameter in the target region comprises:
traversing the category of each scanning module;
and when the category of a traversed scanning module is the same as the category of the object to be scanned, determining the scanning module corresponding to that category as the target scanning module.
4. The three-dimensional reconstruction method according to claim 2, wherein the determining a target scan module from the plurality of scan modules according to the characteristic parameters in the target region comprises:
determining a physical resolution required for scanning the target area according to the curvature data;
traversing the point cloud physical resolution corresponding to each scanning module, and determining the difference between that point cloud physical resolution and the physical resolution required by the target area;
and determining, from among the scanning modules whose point cloud physical resolution is greater than the physical resolution required by the target area, the scanning module corresponding to the smallest difference as the target scanning module.
5. The three-dimensional reconstruction method according to claim 4, further comprising:
setting a fusion resolution in the electronic device according to the point cloud physical resolution of the target scanning module, wherein the fusion resolution represents the resolution of the three-dimensional reconstruction data of the target area.
6. The three-dimensional reconstruction method according to claim 1, further comprising:
determining, based on the distance between the target scanning module and the object to be scanned, the number of light rays emitted by the target scanning module when scanning the object to be scanned.
7. The three-dimensional reconstruction method of claim 1, wherein the determining the weights for each first three-dimensional point and each second three-dimensional point comprises:
determining the weight of each first three-dimensional point according to the position of the ray of each first three-dimensional point in the corresponding scanning module, and determining the weight of each second three-dimensional point according to the position of the ray of each second three-dimensional point in the target scanning module; and/or
determining the weight of each first three-dimensional point according to the distance between the scanning module corresponding to each first three-dimensional point and the object to be scanned, and determining the weight of each second three-dimensional point according to the distance between the target scanning module and the object to be scanned; and/or
determining the weight of each first three-dimensional point according to the incidence angle at which the ray of each first three-dimensional point irradiates the object to be scanned, and determining the weight of each second three-dimensional point according to the incidence angle at which the ray of each second three-dimensional point irradiates the object to be scanned.
8. A three-dimensional reconstruction system, the system comprising: a plurality of scanning modules and electronic equipment;
the plurality of scanning modules are used for scanning the object to be scanned, and the distance between each scanning module and the object to be scanned is different;
The electronic equipment is used for acquiring first point cloud data acquired by the scanning module corresponding to the largest distance; determining characteristic parameters of the first point cloud data; determining a target scanning module from the plurality of scanning modules according to the characteristic parameters; acquiring second point cloud data obtained by scanning the object to be scanned by the target scanning module; and fusing the first point cloud data and the second point cloud data to obtain three-dimensional reconstruction data of the object to be scanned.
9. An electronic device comprising a processor and a memory, wherein the processor is configured to implement the three-dimensional reconstruction method according to any one of claims 1 to 7 when executing a computer program stored in the memory.
10. A computer storage medium having a computer program stored thereon, which, when executed by a processor, implements the three-dimensional reconstruction method according to any one of claims 1 to 7.
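The module-selection rule of claim 4 — among the scanning modules whose point cloud physical resolution exceeds what the target area requires, pick the one with the smallest difference — can be sketched as follows. This is a minimal illustration only: the linear curvature-to-resolution mapping, its factor `k`, and the module lineup are hypothetical assumptions, not taken from the patent.

```python
def required_resolution(max_curvature: float, k: float = 0.1) -> float:
    """Map curvature to a required physical resolution.

    Higher curvature implies finer surface detail, hence more points per
    unit length.  The linear factor k stands in for a device-specific
    calibration (hypothetical).
    """
    return k * max_curvature


def select_target_module(modules: dict[str, float], needed: float):
    """Among modules whose resolution exceeds `needed`, return the name of
    the one whose resolution is closest to `needed` (smallest difference),
    or None if no module qualifies."""
    candidates = {name: res for name, res in modules.items() if res > needed}
    if not candidates:
        return None
    return min(candidates, key=lambda name: candidates[name] - needed)


# Hypothetical scanner lineup: point cloud physical resolution per module.
modules = {"far": 2.0, "mid": 5.0, "near": 10.0}
print(select_target_module(modules, required_resolution(45.0)))  # prints: mid
```

With a required resolution of 4.5, only "mid" (5.0) and "near" (10.0) qualify, and "mid" wins because its surplus over the requirement is smallest — matching the claim's "smallest difference" criterion.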
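The weighted fusion of claims 1 and 7 can likewise be sketched. The patent leaves the weighting functions unspecified; the cosine falloff for incidence angle and the inverse-distance term below are illustrative assumptions, as are all coordinates and distances.

```python
import math


def point_weight(incidence_deg: float, distance_mm: float) -> float:
    """Illustrative weight combining two cues from claim 7: prefer rays
    striking the surface near the normal (cosine term) and shorter
    scanner-to-object distances (inverse term)."""
    return math.cos(math.radians(incidence_deg)) / distance_mm


def fuse(p1, w1, p2, w2):
    """Weighted average of two corresponding three-dimensional points."""
    total = w1 + w2
    return tuple((w1 * a + w2 * b) / total for a, b in zip(p1, p2))


# The far module saw this point at a grazing angle from far away; the
# target (near) module saw it almost head-on, so its measurement dominates.
w_far = point_weight(incidence_deg=60.0, distance_mm=800.0)
w_near = point_weight(incidence_deg=10.0, distance_mm=200.0)
fused = fuse((10.0, 0.0, 5.2), w_far, (10.1, 0.1, 5.0), w_near)
```

The fused coordinates land much closer to the near module's measurement, which is the intended effect: the lower-confidence long-range point contributes, but does not dominate, the reconstruction of the target area.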
CN202410901847.0A 2024-07-05 2024-07-05 Three-dimensional reconstruction method, system, electronic equipment and storage medium Active CN118429550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410901847.0A CN118429550B (en) 2024-07-05 2024-07-05 Three-dimensional reconstruction method, system, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN118429550A true CN118429550A (en) 2024-08-02
CN118429550B CN118429550B (en) 2024-09-03

Family

ID=92307369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410901847.0A Active CN118429550B (en) 2024-07-05 2024-07-05 Three-dimensional reconstruction method, system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118429550B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150128300A (en) * 2014-05-09 2015-11-18 한국건설기술연구원 method of making three dimension model and defect analysis using camera and laser scanning
CN112146564A (en) * 2019-06-28 2020-12-29 先临三维科技股份有限公司 Three-dimensional scanning method, three-dimensional scanning device, computer equipment and computer readable storage medium
US20210056716A1 (en) * 2019-08-23 2021-02-25 Leica Geosystems Ag Combined point cloud generation using a stationary laser scanner and a mobile scanner
CN117173424A (en) * 2023-11-01 2023-12-05 武汉追月信息技术有限公司 Point cloud slope surface edge line identification method, system and readable storage medium
CN117579754A (en) * 2024-01-16 2024-02-20 思看科技(杭州)股份有限公司 Three-dimensional scanning method, three-dimensional scanning device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN118429550B (en) 2024-09-03

Similar Documents

Publication Publication Date Title
US11455746B2 (en) System and methods for extrinsic calibration of cameras and diffractive optical elements
CN107680124B (en) System and method for improving three-dimensional attitude score and eliminating three-dimensional image data noise
US9576389B2 (en) Method and apparatus for generating acceleration structure in ray tracing system
US10452949B2 (en) System and method for scoring clutter for use in 3D point cloud matching in a vision system
US20130038696A1 (en) Ray Image Modeling for Fast Catadioptric Light Field Rendering
CN112771573A (en) Depth estimation method and device based on speckle images and face recognition system
US9147279B1 (en) Systems and methods for merging textures
CN109884793B (en) Method and apparatus for estimating parameters of virtual screen
CN103562934B (en) Face location detection
KR101572618B1 (en) Apparatus and method for simulating lidar
CN110998671B (en) Three-dimensional reconstruction method, device, system and storage medium
EP3916677A1 (en) Three-dimensional measurement device
CN111142514B (en) Robot and obstacle avoidance method and device thereof
JP2015225673A (en) Acceleration structure search device and acceleration structure search method in ray tracing system
EP3430595A1 (en) Determining the relative position between a point cloud generating camera and another camera
KR20160125172A (en) Ray tracing apparatus and method
CN111145264B (en) Multi-sensor calibration method and device and computing equipment
CN118429550B (en) Three-dimensional reconstruction method, system, electronic equipment and storage medium
CN110726534B (en) Visual field range testing method and device for visual device
US11143499B2 (en) Three-dimensional information generating device and method capable of self-calibration
KR20110099412A (en) Apparatus and method for rendering according to ray tracing using reflection map and transparency map
WO2022254854A1 (en) Three-dimensional measurement device
CN112487893B (en) Three-dimensional target identification method and system
JP3931701B2 (en) Image generating apparatus and program
CN118429551B (en) Multi-source point cloud fusion method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant