CN114981845A - Image scanning method and device, equipment and storage medium - Google Patents

Info

Publication number: CN114981845A
Application number: CN202080093762.4A
Authority: CN (China)
Prior art keywords: curved surface, initial, data, surface data, vertex
Legal status: Pending (assumed; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventor: 张洪伟
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

An image scanning method, an image scanning device, equipment and a storage medium, wherein the method comprises the following steps: acquiring point cloud data of a scanned scene and an initial scanned image of the scanned scene (101); carrying out curved surface detection on the point cloud data to obtain initial curved surface data (102); optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data (103); and correcting the pixel coordinates of the pixel points in the initial scanning image according to the three-dimensional coordinates in the target curved surface data to obtain a target scanning image (104).

Description

Image scanning method and device, equipment and storage medium Technical Field
The embodiments of the present application relate to image processing methods, and in particular, to an image scanning method, an image scanning apparatus, a device, and a storage medium.
Background
Document scanning technology based on image information can be integrated into mobile terminals such as mobile phones, which makes it convenient to carry and use. However, such technology requires information such as texture and boundaries to compute a transformation matrix, so it cannot be applied to documents with no boundary and little texture.
A Time-of-Flight (TOF) sensor is unaffected by illumination changes and object texture, and can reduce cost while meeting accuracy requirements. By using TOF three-dimensional data to assist in acquiring the three-dimensional information of the target object and computing the transformation matrix, document scanning no longer depends on picture information, which greatly enlarges the application range of document scanning.
However, current image scanning technology based on TOF data cannot handle the case where the target object in the scanned scene is curved.
Disclosure of Invention
In view of this, embodiments of the present application provide an image scanning method and apparatus, a device, and a storage medium.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an image scanning method, where the method includes: acquiring point cloud data of a scanning scene and an initial scanning image of the scanning scene; carrying out curved surface detection on the point cloud data to obtain initial curved surface data; optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data; and correcting the pixel coordinates of the pixel points in the initial scanning image according to the three-dimensional coordinates in the target curved surface data to obtain a target scanning image.
In a second aspect, an embodiment of the present application provides an image scanning apparatus, including: the system comprises a data acquisition module, a data acquisition module and a data acquisition module, wherein the data acquisition module is used for acquiring point cloud data of a scanning scene and an initial scanning image of the scanning scene; the curved surface detection module is used for carrying out curved surface detection on the point cloud data to obtain initial curved surface data; the curved surface optimization module is used for optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data; and the image correction module is used for correcting the pixel coordinates of the pixel points in the initial scanning image according to the three-dimensional coordinates in the target curved surface data to obtain a target scanning image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that is executable on the processor, and the processor implements, when executing the computer program, the steps in the image scanning method according to any one of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the image scanning method according to any one of the embodiments of the present application.
In the embodiment of the application, after acquiring point cloud data of a scanning scene and an initial scanning image of the scanning scene, the electronic equipment performs curved surface detection on the point cloud data, and optimizes the initial curved surface data obtained by detection to obtain target curved surface data; therefore, on one hand, the electronic equipment can correct the pixel coordinates in the initial scanning image according to the three-dimensional coordinates in the target curved surface data, so that a more accurate and fine scanning result, namely a target scanning image, is obtained; on the other hand, curved surface detection on point cloud data can be applied to more scanning scenes than plane detection, for example, the surfaces of a cylinder and a near cylinder can be scanned.
Drawings
FIG. 1 is a schematic diagram illustrating an implementation process of an image scanning method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a comparison between an initial scan image and a target scan image according to an embodiment of the present application;
FIG. 3 is a schematic diagram of pillar features detected in an embodiment of the present application;
FIG. 4 is a diagram illustrating a result of mesh partitioning according to an embodiment of the present application;
FIG. 5 is a schematic reference plane view of an embodiment of the present application;
FIG. 6 is a schematic diagram of radial optimization based on a cube bounding box according to an embodiment of the present application;
fig. 7 is a schematic diagram of radial optimization based on Poisson Surface Reconstruction (PSR) in the embodiment of the present application;
FIG. 8 is a schematic view of radial optimization based on bi-camera data fusion according to an embodiment of the present application;
FIG. 9 is a schematic view of a flowchart illustrating another implementation of an image scanning method according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an image scanning apparatus according to an embodiment of the present application;
fig. 11 is a hardware entity diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the following detailed descriptions of specific technical solutions of the present application are made with reference to the accompanying drawings in the embodiments of the present application. The following examples are intended to illustrate the present application but are not intended to limit the scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
It should be noted that the terms "first/second/third" in the embodiments of the present application are only used to distinguish similar objects and do not imply a specific ordering of those objects; it should be understood that "first/second/third" may be interchanged where permitted, so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described herein.
The embodiment of the application provides an image scanning method, which can be applied to electronic equipment, wherein the electronic equipment can be equipment with information processing capability, such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a robot, an unmanned aerial vehicle and the like. The functions implemented by the image scanning method may be implemented by a processor in the electronic device calling program code, which may be stored in a computer storage medium.
Fig. 1 is a schematic flow chart of an implementation of the image scanning method in the embodiment of the present application, and as shown in fig. 1, the method at least includes the following steps 101 to 104:
step 101, an initial scanning image of a scanning scene and point cloud data of the scanning scene are obtained.
It will be understood that the target object scanned by the electronic device is itself a curved surface, such as a label on the surface of a bottle, a poster on a pillar, or a curved book page.
During implementation, the electronic device can acquire data of a scanned scene through the TOF sensor, then perform preliminary filtering on sensor data output by the TOF sensor through the processor, and transform the filtered sensor data into three-dimensional coordinates under a camera coordinate system to obtain the point cloud data.
In some embodiments, the preliminary filtering denoises the sensor data output by the TOF sensor. Example denoising methods include: point cloud removal based on a distance threshold, i.e., removing points in the sensor data whose distance exceeds a threshold (e.g., 7 meters); or point cloud removal based on mutual distance, i.e., removing points whose average distance to their neighboring points is significantly greater than the corresponding average over the rest of the point cloud.
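A minimal sketch of the two denoising strategies described above, using numpy; the function names, the 7-meter threshold, and the k/ratio parameters are illustrative assumptions, not taken from the application:

```python
import numpy as np

def filter_by_distance(points, max_dist=7.0):
    """Drop points farther from the sensor than max_dist (meters)."""
    d = np.linalg.norm(points, axis=1)
    return points[d <= max_dist]

def statistical_outlier_removal(points, k=8, ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours
    exceeds ratio times the cloud-wide average of that statistic."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    dist.sort(axis=1)                          # row-wise ascending distances
    knn_mean = dist[:, 1:k + 1].mean(axis=1)   # skip the zero self-distance
    return points[knn_mean <= ratio * knn_mean.mean()]
```

Both filters are O(n²) as written; for real point clouds a KD-tree (e.g., scipy.spatial.cKDTree) would replace the dense pairwise distance matrix.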
For the initial scan image, the electronic device may capture a scan scene through a Red Green Blue (RGB) sensor, so as to obtain the initial scan image.
In summary, the electronic device includes at least a TOF sensor for acquiring point cloud data of a scanned scene and an RGB sensor for photographing the scanned scene to obtain an initial scanned image.
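The transformation of filtered TOF depth samples into three-dimensional coordinates in the camera coordinate system can be sketched with standard pinhole back-projection; the function name and the intrinsics (fx, fy, cx, cy) are illustrative assumptions:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a TOF depth map (meters) into camera-frame 3-D points
    with the pinhole model; pixels with depth 0 (no return) are skipped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x[valid], y[valid], depth[valid]], axis=1)
```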
Step 102, performing curved surface detection on the point cloud data to obtain initial curved surface data.
In some embodiments, the electronic device may determine a rough shape in the point cloud data by cylinder detection, using a near cylinder as the fitting model, to obtain the initial curved surface data. The near cylinder includes both a cylinder, whose upper and lower bottom surfaces are equal, and a circular truncated cone, whose upper and lower bottom surfaces are unequal.
Step 103, optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data.
In order to obtain a finer curved surface, in the embodiment of the present application, the electronic device optimizes the curved surface represented by the initial curved surface data, for example, in a case that the curved surface is a cylindrical side surface, the curved surface is subjected to mesh division to obtain a mesh curved surface, and then a vertex of each mesh on the mesh curved surface is subjected to radial optimization to obtain target curved surface data.
Step 104, correcting the pixel coordinates of the pixel points in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain a target scanned image.
It is understood that, in most cases, the initial scanned image captured by the electronic device does not satisfy the application's requirements. For example, as shown in fig. 2, the initial scanned image, i.e., the image 20 before rectification, shows the target object 201 tilted. After rectification, an orthographic scan result of the target object 201, i.e., the target scanned image 202 shown in fig. 2, can be obtained.
In some embodiments, the electronic device may implement step 104 by steps 307 and 308 of the following embodiments to obtain the target scan image.
In the embodiment of the application, after acquiring point cloud data of a scanning scene and an initial scanning image of the scanning scene, the electronic equipment performs curved surface detection on the point cloud data, and optimizes the initial curved surface data obtained by detection to obtain target curved surface data; therefore, on one hand, the electronic equipment can correct the pixel coordinates in the initial scanning image according to the three-dimensional coordinates in the target curved surface data, so that a more accurate and fine scanning result, namely a target scanning image, is obtained; on the other hand, curved surface detection on point cloud data can be applied to more scanning scenes than plane detection, for example, the surfaces of a cylinder and a near cylinder can be scanned.
An embodiment of the present application further provides an image scanning method, where the method at least includes the following steps 201 to 205:
step 201, point cloud data of a scanning scene and an initial scanning image of the scanning scene are obtained;
step 202, performing cylinder shape detection on the point cloud data to obtain a plurality of characteristic parameter values of the target object.
In some embodiments, the target object may be a cylinder. For example, as shown in fig. 3, the characteristic parameter values of the cylinder 30 include: the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h; where θ1 is the angle between tangential line 31 and the reference line, and θ2 is the angle between tangential line 32 and the reference line; the axial range h is the height of the cylinder. In some embodiments, the electronic device may use, for example, a RANdom SAmple Consensus (RANSAC) cylinder detection algorithm to obtain these parameter values.
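The application names RANSAC cylinder detection but does not spell it out. As a hedged illustration, the sketch below runs RANSAC on a simplified version of the problem: assuming the axis n0 is already known, the points are projected onto the plane perpendicular to it and a circle (cross-section center and radius r) is fitted from random 3-point samples. All names and parameter values are illustrative:

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Circumcenter and radius of three 2-D points (None if collinear)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, float(np.linalg.norm(p1 - center))

def ransac_circle(pts2d, iters=200, tol=0.02, rng=None):
    """Keep the 3-point circle hypothesis with the most inliers
    (points whose distance to the circle is below tol)."""
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_inliers = None, -1
    for _ in range(iters):
        sample = pts2d[rng.choice(len(pts2d), 3, replace=False)]
        model = circle_from_3pts(*sample)
        if model is None:
            continue
        center, r = model
        resid = np.abs(np.linalg.norm(pts2d - center, axis=1) - r)
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = (center, r), inliers
    return best
```

A full cylinder fit would additionally estimate the axis n0 and the ranges (θ1, θ2) and h from the inlier set.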
Step 203, determining the initial curved surface data from the point cloud data according to the plurality of characteristic parameter values;
step 204, optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data;
step 205, correcting the pixel coordinates of the pixel points in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain a target scanned image.
In the embodiment of the present application, the electronic device performs cylinder shape detection on the point cloud data to obtain a plurality of characteristic parameter values of the cylinder, and determines the initial curved surface data from the point cloud data according to those parameter values. Compared with a complex fitting model, adopting the approximate cylinder as the fitting model greatly simplifies the algorithm and reduces the amount of computation, lowering algorithm overhead with little loss of precision; compared with a plane detection method, it can cover more target objects and suit more user scenarios.
The embodiment of the present application further provides an image scanning method, which at least includes the following steps 301 to 308:
step 301, acquiring point cloud data of a scanning scene and an initial scanning image of the scanning scene;
Step 302, performing cylinder shape detection on the point cloud data to obtain a plurality of characteristic parameter values of a target object; where the target object is in the shape of a cylinder, and the plurality of characteristic parameter values, as shown in fig. 3, include the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
Step 303, determining initial curved surface data from the point cloud data according to the plurality of characteristic parameter values.
Taking the image 20 shown in fig. 2 as an example, the initial curved surface data is point cloud data of the target object 201 in the image 20.
Step 304, performing mesh division on the curved surface represented by the initial curved surface data according to the axial direction, the axial range, the tangential direction, the tangential range, and a specific meshing interval, to obtain N meshes, where N is an integer greater than 0.
The specific meshing interval includes an axial meshing interval and a tangential meshing interval; the intervals in the two directions may be the same or different. The result of mesh division is, for example, the mesh surface 401 shown in fig. 4, which has N meshes 402. It will be appreciated that the size of the meshing interval determines, to some extent, the mesh density of the mesh surface, i.e., the value of N. The larger the meshing interval, the fewer the meshes and the coarser the optimization result; the smaller the interval, the more the meshes and the finer the optimization result, but the higher the algorithm complexity.
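The meshing of the detected cylindrical patch by axial and tangential intervals can be sketched as follows, assuming for simplicity that the axis n0 coincides with the z-axis through p0; the function name and interval values are illustrative:

```python
import numpy as np

def mesh_cylinder_patch(p0, r, theta1, theta2, h, d_theta, d_h):
    """Grid the side surface of a z-axis-aligned cylinder patch.
    Returns an (n_h+1, n_t+1, 3) array of mesh vertices, so the patch
    contains N = n_h * n_t quadrilateral meshes."""
    n_t = max(1, int(np.ceil((theta2 - theta1) / d_theta)))  # tangential meshes
    n_h = max(1, int(np.ceil(h / d_h)))                      # axial meshes
    thetas = np.linspace(theta1, theta2, n_t + 1)
    heights = np.linspace(0.0, h, n_h + 1)
    verts = np.empty((n_h + 1, n_t + 1, 3))
    for i, z in enumerate(heights):
        for j, t in enumerate(thetas):
            verts[i, j] = p0 + np.array([r * np.cos(t), r * np.sin(t), z])
    return verts
```

Halving d_theta or d_h doubles the mesh count in that direction, matching the interval/density trade-off described above.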
Step 305, performing an optimal solution search on each vertex of each mesh according to the three-dimensional coordinates of the sampling points in the initial curved surface data, to obtain the optimal three-dimensional coordinates of the corresponding vertex.
In some embodiments, each vertex is searched for an optimal solution along the radial direction in which it lies, so as to obtain the optimal three-dimensional coordinates of that vertex. The embodiments of the present application provide three ways of searching for the optimal solution. For example, with steps 405 to 407 of the following embodiments, the optimal solution is searched for in a search space along the radial vector of the vertex; for another example, with steps 505 to 508, the optimal solution is obtained by PSR-based radial optimization; for yet another example, with steps 605 to 608, the optimal solution is obtained by radial optimization based on data fusion from at least two cameras.
Step 306, determining the optimal three-dimensional coordinate of each vertex as the target curved surface data.
It will be appreciated that what results from step 306 is an optimized mesh surface that is optimized from the coordinates of each vertex of each mesh to the optimal three-dimensional coordinates. That is, the target surface data includes the optimal three-dimensional coordinates of each vertex of each mesh.
Step 307, determining a transformation relation between the target curved surface data and the reference surface data according to the optimal three-dimensional coordinates of each vertex in the target curved surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data.
In implementation, in step 307 the electronic device determines the positional transformation relationship between the optimized mesh surface and the reference surface. The reference surface data may be preconfigured. For example, the reference surface data represents a front-view plane, such as the reference plane 50 shown in fig. 5, where the reference surface data includes the three-dimensional coordinates of each vertex of each mesh in the reference plane 50.
In the embodiment of the present application, the transformation relationship may be characterized by a transformation matrix, a transformation matrix group, or a free mapping relationship.
Step 308, correcting the pixel coordinates of the pixel points in the initial scanned image according to the transformation relationship to obtain a target scanned image.
In the embodiment of the present application, the curved surface represented by the initial curved surface data is divided into meshes, and the optimal solution search is performed only on the mesh vertices, which greatly reduces the amount of computation required for curved surface optimization.
An embodiment of the present application further provides an image scanning method, where the method at least includes the following steps 401 to 410:
step 401, acquiring point cloud data of a scanning scene and an initial scanning image of the scanning scene;
Step 402, performing cylinder shape detection on the point cloud data to obtain a plurality of characteristic parameter values of a target object; the shape of the target object may be a cylinder, and the plurality of characteristic parameter values include the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
Step 403, determining the initial curved surface data from the point cloud data according to the plurality of characteristic parameter values;
Step 404, performing mesh division on the curved surface according to the axial direction, the axial range, the tangential direction, the tangential range, and a specific meshing interval, to obtain N meshes, where N is an integer greater than 0;
step 405, determining a radial vector of a jth vertex according to the position of the curved surface of the jth vertex of the ith mesh, the radius of the cylinder and the central position of the bottom surface of the cylinder; wherein i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4;
step 406, determining a search space from the initial curved surface data according to the radial vector.
In some embodiments, the search space may be a cube bounding box, as given by equation (1):

L < [R T]p < U    (1)

where L and U are the lower-limit and upper-limit boundary points in the bounding-box coordinate system, respectively, p is the three-dimensional coordinate of a point in the initial curved surface data, and [R T] is the augmented transformation matrix from world coordinates to bounding-box coordinates.
When the electronic device implements step 406, the cubic bounding box may be determined from the initial curved surface data by using the jth vertex as a central point of the cubic bounding box according to the length in the axial direction, the mesh division interval in the tangential direction, and the size of the radial vector.
For example, as shown in fig. 6, the center point of the bounding box 60 is the jth vertex; the length of the box is the radius r of the cylinder, its axial length (i.e., height) is r/2, and its width is the meshing interval in the tangential direction.
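A sketch of the membership test of equation (1), selecting the points that fall strictly inside the oriented cube bounding box; the function name is an assumption, and R, T, L, U are supplied by the caller:

```python
import numpy as np

def points_in_box(points, R, T, L, U):
    """Equation (1): keep points p for which L < [R T]p < U holds
    componentwise, i.e. points strictly inside an oriented cube box."""
    local = points @ R.T + T        # world -> bounding-box coordinates
    inside = np.all((local > L) & (local < U), axis=1)
    return points[inside]
```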
Step 407, determining the optimal three-dimensional coordinate of the jth vertex according to the three-dimensional coordinates of the sampling points in the search space.
In some embodiments, the electronic device may determine three-dimensional coordinates of a center of gravity of the search space from three-dimensional coordinates of sample points in the search space; and projecting the three-dimensional coordinate of the gravity center to the radial vector to obtain the optimal three-dimensional coordinate of the jth vertex.
For example, the center of gravity p_g of the search space BOX = {p_i | p_i ∈ BOX} is computed according to equation (2):

p_g = (1/K) Σ_{p_i ∈ BOX} p_i    (2)

where p_i is the three-dimensional coordinate of a point in the search space and K is a constant (the number of points in the search space). Then the computed center of gravity p_g is projected onto the radial vector n_r of the jth vertex according to equation (3), to obtain the optimal three-dimensional coordinate p* of the jth vertex:

p* = p_0 + ((p_g − p_0) · n_r) n_r    (3)

where p_0 denotes the three-dimensional coordinate of the jth vertex before optimization and n_r denotes the unit radial vector at that vertex.
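The two formulas above amount to averaging the points in the bounding box and projecting that centroid onto the vertex's radial direction, which can be sketched as (function name illustrative):

```python
import numpy as np

def radial_optimum(box_points, p_vertex, n_radial):
    """Centroid of the bounding-box points (eq. (2), K = number of points),
    projected onto the unit radial vector through the vertex (eq. (3))."""
    p_g = box_points.mean(axis=0)
    n = n_radial / np.linalg.norm(n_radial)
    return p_vertex + np.dot(p_g - p_vertex, n) * n
```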
It should be noted that, the determination of the optimal three-dimensional coordinates of each vertex can be implemented through the steps 405 to 407.
Step 408, determining the optimal three-dimensional coordinate of each vertex as the target curved surface data;
step 409, determining a transformation relation between the target curved surface data and the reference surface data according to the three-dimensional coordinates of each vertex in the target curved surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data;
and step 410, correcting the pixel coordinates of the pixel points in the initial scanning image according to the transformation relation to obtain a target scanning image.
The embodiment of the present application further provides an image scanning method, which at least includes the following steps 501 to 511:
step 501, point cloud data of a scanning scene and an initial scanning image of the scanning scene are obtained;
Step 502, performing cylinder shape detection on the point cloud data to obtain a plurality of characteristic parameter values of a target object; the shape of the target object may be a cylinder, and the plurality of characteristic parameter values include the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
Step 503, determining the initial curved surface data from the point cloud data according to the plurality of characteristic parameter values;
Step 504, performing mesh division on the curved surface according to the axial direction, the axial range, the tangential direction, the tangential range, and a specific meshing interval, to obtain N meshes, where N is an integer greater than 0;
and 505, reconstructing a Poisson surface according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain an equivalent surface.
In some embodiments, the electronic device may employ a PSR algorithm to obtain the isosurface. Based on the assumption that the surfaces of real-world objects are continuous, the PSR algorithm estimates the object surface; it can eliminate, to a certain extent, the influence of point cloud measurement errors on the result and restore the surface of the real object.
Step 506, determining a radial vector of a jth vertex of the ith mesh according to the position of the curved surface of the jth vertex, the radius of the cylinder and the central position of the bottom surface of the cylinder; wherein i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4;
step 507, determining the intersection point of the radial vector and the isosurface;
and step 508, determining the three-dimensional coordinates of the intersection point on the isosurface as the optimal three-dimensional coordinates of the jth vertex.
As shown in fig. 7, before radial optimization, the electronic device first processes the initial curved surface data with the PSR algorithm to generate a corresponding mesh (Mesh); it then detects the intersection of the radial line through the jth vertex with the generated mesh, and takes the three-dimensional coordinate of the mesh at that intersection as the optimization result, i.e., the optimal three-dimensional coordinate of the jth vertex.
The PSR algorithm can recover object surfaces consistent with the real world from noisy point clouds; it is derived from the observation that real object surfaces are smooth and continuous, so it matches the characteristics of scanning real objects, and the recovered surface is closer to the true value. The PSR algorithm can therefore largely remove the influence of noise and false detections, reducing the influence of outlier values in the point cloud.
It should be noted that the input of the PSR algorithm is the initial curved surface data, and the output is the Mesh. The PSR algorithm may be implemented in the following steps S1 to S4: S1, estimating point cloud normals for the initial curved surface data; S2, performing spatial mesh division on the initial curved surface data, for example with an octree; S3, finding the optimal surface that satisfies surface continuity and the estimated normals; and S4, outputting the optimized optimal curved surface to form a Mesh grid.
It should be noted that the determination of the optimal three-dimensional coordinates of each vertex is performed through the above steps 506 to 508.
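The intersection test of steps 507 and 508 can be sketched with the standard Möller-Trumbore ray-triangle routine, assuming the PSR output mesh is available as a list of triangles; this is an illustrative stand-in, not the application's implementation:

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: hit point of the ray with one triangle of the
    isosurface mesh, or None if it misses."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                  # ray parallel to triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return origin + t * direction if t > eps else None

def mesh_intersection(origin, direction, triangles):
    """First intersection of the radial ray with any mesh triangle."""
    for v0, v1, v2 in triangles:
        hit = ray_triangle(origin, direction, v0, v1, v2)
        if hit is not None:
            return hit
    return None
```

The returned hit point plays the role of the optimal three-dimensional coordinate of the jth vertex in step 508.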
509, determining the optimal three-dimensional coordinate of each vertex as the target curved surface data;
step 510, determining a transformation relation between the target curved surface data and the reference surface data according to the three-dimensional coordinates of each vertex in the target curved surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data;
Step 511, correcting the pixel coordinates of the pixel points in the initial scanned image according to the transformation relationship to obtain a target scanned image.
The embodiment of the present application further provides an image scanning method, which at least includes the following steps 601 to 611:
Step 601, acquiring point cloud data of a scanning scene and an initial scanning image of the scanning scene;
Step 602, performing cylinder shape detection on the point cloud data to obtain a plurality of characteristic parameter values of a target object; where the target object is in the shape of a cylinder, and the plurality of characteristic parameter values include: the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
Step 603, determining the initial curved surface data from the point cloud data according to the plurality of characteristic parameter values;
step 604, meshing the curved surface according to the axial range, the tangential range and a specific meshing interval to obtain N meshes, wherein N is an integer greater than 0;
605, respectively back-projecting the three-dimensional coordinates of the kth sampling point on the radial vector where the jth vertex of the ith grid is located onto the imaging plane of each camera to obtain corresponding pixel coordinates; wherein i is an integer greater than 0 and less than or equal to N, j is an integer greater than 0 and less than or equal to 4, and k is an integer greater than 0;
step 606, according to a specific sampling window, determining an area block of each pixel coordinate on the image acquired by the corresponding camera;
step 607, determining the correlation degree between each of the region blocks;
step 608, respectively back-projecting the three-dimensional coordinates of the next sampling point of the radial vector onto the imaging plane of each camera under the condition that the correlation degree does not satisfy a specific condition until the determined correlation degree satisfies the specific condition, and determining the three-dimensional coordinates of the corresponding sampling point as the optimal three-dimensional coordinates of the jth vertex.
Steps 605 to 608 provide another optimization method, namely a radial optimization method based on data fusion from at least two cameras, which can further improve robustness.
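The correlation measure used in steps 606–607 can be illustrated with the ZNCC function named later in the text; the `zncc` helper below is a minimal sketch of the idea (region blocks compared up to brightness and contrast), not the source's implementation:

```python
import numpy as np

def zncc(patch_a, patch_b):
    """Zero-mean normalised cross-correlation between two sampling-window
    region blocks; 1.0 means perfectly correlated, near 0 means unrelated."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

base = np.arange(25, dtype=float).reshape(5, 5)
# Identical content under a brightness/contrast change correlates perfectly...
print(zncc(base, 2.0 * base + 7.0))          # 1.0
# ...while random noise does not.
rng = np.random.default_rng(1)
print(zncc(base, rng.normal(size=(5, 5))))   # much smaller in magnitude
```

When the degree of correlation between the region blocks around the back-projected pixel coordinates exceeds a chosen threshold (the "specific condition"), the current sampling point is accepted as the optimal vertex position.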
In some embodiments, cost function optimization may be performed with the jth vertex as a starting point to obtain the optimal three-dimensional coordinate of the jth vertex. Taking the radial optimization mode of dual-camera data fusion as an example, as shown in fig. 8, the three-dimensional coordinate p_j of the jth vertex before optimization is taken as the starting point, and the cost function shown in formula (4) is optimized by a gradient descent method or an algorithm such as LM (Levenberg-Marquardt):
p* = argmin_p ( R(p − p_j) + α·C( W(π_0(p)), W(π_1(p)) ) )    (4);
in the formula, p* is the optimal solution, i.e., the optimal three-dimensional coordinate of the jth vertex; p is the position of the search point in the initial curved surface data, i.e., the three-dimensional coordinate of the kth sampling point; R is a regularization function, e.g., an L_2 regularization function; π_0 and π_1 are the back-projections onto the imaging planes of the two cameras; W takes the region block within the sampling window around a pixel coordinate; C is the correlation function, which may be a function such as NCC or ZNCC; and α is a scaling factor for adjusting the dependency on the TOF data.
It should be noted that, for the case of three or more cameras, as shown in formula (5), the form of the cost function stays the same; the difference lies in the calculation of the cross-correlation function:
p* = argmin_p ( R(p − p_j) + α·C( W(π_0(p)), W(π_1(p)), W(π_2(p)), … ) )    (5);
that is, projection terms for the third, fourth, etc. cameras are added to equation (4), and the input of the cross-correlation function C becomes plural; the multi-camera cross-correlation may be computed as the sum, or the average, of the pairwise two-camera correlation results.
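A minimal numeric sketch of the cost in formula (4): the real camera projections π_0/π_1 and window correlation are replaced here by a synthetic correlation term that peaks at an assumed true surface point (all values hypothetical), and a coarse grid search along the radial direction stands in for gradient descent or LM:

```python
import numpy as np

p_j = np.array([0.0, 0.0, 1.0])          # vertex before optimisation
direction = np.array([0.0, 0.0, 1.0])    # radial direction (unit vector)
true_depth = 0.12                         # hypothetical ground-truth offset
alpha = 0.5                               # scaling factor from Eq. (4)

def correlation(p):
    # Stand-in for C(W(pi_0(p)), W(pi_1(p))): highest (=1) at the true surface.
    offset = np.dot(p - p_j, direction)
    return np.exp(-((offset - true_depth) ** 2) / 0.002)

offsets = np.linspace(-0.3, 0.3, 601)
candidates = p_j + offsets[:, None] * direction
# L2 regulariser R(p - p_j) keeps the solution near p_j; the alpha term
# rewards high correlation (we minimise 1 - correlation).
costs = [np.sum((p - p_j) ** 2) + alpha * (1.0 - correlation(p))
         for p in candidates]
p_star = candidates[int(np.argmin(costs))]
print(p_star)  # close to p_j + true_depth * direction
```

The regularizer pulls the optimum slightly back toward p_j; increasing α shifts the balance toward the photometric (correlation) evidence and away from the TOF-derived starting point.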
Step 609, determining the optimal three-dimensional coordinate of each vertex as the target curved surface data;
step 610, determining a transformation relation between the target curved surface data and the reference surface data according to the three-dimensional coordinates of each vertex in the target curved surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data;
and 611, correcting the pixel coordinates of the pixel points in the initial scanned image according to the transformation relation to obtain a target scanned image.
An exemplary application of the embodiments of the present application in a practical application scenario will be described below.
In the TOF-data-based shape detection algorithm, for the case of a target object with a curved surface, an approximate cylinder is adopted as the fitting model: the approximate shape in the point cloud data is determined through cylinder detection, and optimization is then performed along the radial direction of the cylinder, so that a more precise curved surface result is obtained.
Compared with complex model fitting, this greatly simplifies the algorithm and reduces the amount of computation, lowering algorithm overhead with little loss of precision; compared with a plane detection method, it covers more target objects and suits more user scenarios.
In some embodiments, the document scanning system includes a TOF sensor, an RGB sensor, a transformation parameter generation module and an image transformation module, where the functions of the two modules may be performed by a processor; wherein:
the transformation parameter generation module is configured to process the sensor data output by the TOF sensor and generate the transformation parameters required by the image transformation module, where the transformation parameters may be one of: a transformation matrix, a transformation matrix group, or a free mapping relation.
And the image transformation module is configured to transform the RGB data (an example of the initial scanned image) obtained from the RGB sensor according to the transformation parameters generated by the transformation parameter generation module, so as to obtain a transformed front-view scanning result, namely the target scanned image.
In some embodiments, the transformation parameter generation module includes a cylinder detection unit, a mesh division unit, a cylinder radial optimization unit, a free-form surface generation unit, and a transformation parameter generation unit; wherein:
the cylinder detection unit can adopt RANSAC cylinder detection algorithm;
the mesh division unit can be used for carrying out mesh division on a target area of the cylindrical surface at regular intervals in a tangential direction and an axial direction;
the cylinder radial optimization unit can be used for searching the optimal solution of each vertex of the divided grids in the radial direction of the cylinder;
the optimal solution search may adopt one of the following methods:
(1) searching, in the radial direction, for the point where the point cloud aggregates most within the tangential range, as the optimal solution;
(2) calculating the intersection of the radial line with the mesh reconstructed from the whole point cloud surface, as the optimal solution;
(3) optimizing a back-projection cost function by means of dual-camera data fusion.
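Method (1) — taking the barycentre of the point-cloud samples in the search space and projecting it onto the radial vector, as also described for the curved surface optimization module below — can be sketched as follows; the helper name and synthetic data are illustrative:

```python
import numpy as np

def radial_optimum(search_points, origin, radial_dir):
    """Project the barycentre (centre of gravity) of the search-space points
    onto the radial vector to obtain the optimal vertex position."""
    g = search_points.mean(axis=0)           # centre of gravity
    t = np.dot(g - origin, radial_dir)       # scalar projection onto the ray
    return origin + t * radial_dir           # optimal point on the radial line

# Noisy samples clustered around radius 1.05 along the +x radial direction.
rng = np.random.default_rng(2)
pts = np.array([1.05, 0.0, 0.0]) + 0.01 * rng.normal(size=(200, 3))
opt = radial_optimum(pts, origin=np.zeros(3),
                     radial_dir=np.array([1.0, 0.0, 0.0]))
print(opt)  # ~ [1.05, 0, 0]
```

Averaging before projecting makes the result robust to zero-mean sensor noise, which is why the barycentre is a reasonable estimate of where the point cloud aggregates.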
And the free-form surface generating unit is used for taking the optimal solutions in all radial directions as vertexes and carrying out topological connection according to the topological structure divided by the mesh to form a free-form surface mesh.
And the transformation parameter generating unit is configured to generate transformation parameters according to the free-form surface mesh and the correction target parameters, i.e., to generate transformation parameters such as a transformation matrix, a transformation matrix group, or a free mapping relation according to the spatial position of the free-form surface mesh and the spatial position of the set correction result (namely the three-dimensional coordinates of the vertices in the reference surface data).
Fig. 9 is a schematic flow chart of an implementation of the image scanning method in the embodiment of the present application, and as shown in fig. 9, the method at least includes the following steps 901 to 907:
step 901, performing preliminary filtering on sensor data output by TOF, and then converting the sensor data subjected to preliminary filtering into three-dimensional coordinates under a camera coordinate system to obtain three-dimensional point cloud data;
in some embodiments, the preliminary filtering includes denoising the sensor data output by the TOF sensor according to its noise characteristics; denoising methods include, for example, point cloud removal based on a position threshold, or point cloud removal based on mutual distances;
step 902, performing cylinder detection on the three-dimensional point cloud data to obtain cylinder fitting parameters, namely a plurality of characteristic parameter values of the cylinder, where the parameters include: the center position p_0 of the cylinder bottom surface, the axial direction n_0, the radius r, the tangential range (θ_1, θ_2), and the axial range h. In some embodiments, these parameters may be obtained using, for example, a RANSAC cylinder detection algorithm.
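A full RANSAC cylinder detector also estimates the axis n_0; as a simplified, hedged sketch, the example below assumes the axis is already known to lie along +z, so the fit reduces to a RANSAC circle fit of the cross-section in the xy-plane (synthetic data, illustrative inlier threshold and iteration count):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic cylinder wall: axis along +z (assumed known), radius 0.5,
# bottom-face centre (1, 2), tangential range (0, pi).
theta = rng.uniform(0, np.pi, 300)
pts = np.c_[1 + 0.5 * np.cos(theta), 2 + 0.5 * np.sin(theta),
            rng.uniform(0, 1, 300)]
pts[:30] += rng.normal(scale=0.3, size=(30, 3))   # inject outliers

def fit_circle(p3):
    # Exact circle through three xy-points (circumcentre construction).
    (x1, y1), (x2, y2), (x3, y3) = p3[:, :2]
    a = np.array([[x2 - x1, y2 - y1], [x3 - x1, y3 - y1]])
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    c = np.linalg.solve(a, b)
    return c, np.hypot(*(p3[0, :2] - c))

best, best_inliers = None, -1
for _ in range(200):                              # RANSAC iterations
    sample = pts[rng.choice(len(pts), 3, replace=False)]
    try:
        center, radius = fit_circle(sample)
    except np.linalg.LinAlgError:
        continue                                  # degenerate (collinear) sample
    resid = np.abs(np.hypot(*(pts[:, :2] - center).T) - radius)
    inliers = int((resid < 0.01).sum())
    if inliers > best_inliers:
        best, best_inliers = (center, radius), inliers

center, radius = best
print(center, radius)  # ~ (1, 2), ~0.5
```

The real algorithm searches over candidate axes as well, but the sample-fit-score loop shown here is the core of any RANSAC shape detector.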
903, regularly meshing the target area on the surface of the cylinder at equal intervals in a tangential direction and an axial direction;
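Step 903's equal-interval tangential/axial meshing can be sketched as follows, assuming for illustration that the axis lies along +z with the bottom-face centre p_0 at the origin; each (θ, h) grid vertex maps to a 3-D point on the cylinder surface:

```python
import numpy as np

p0, r = np.zeros(3), 0.5                   # bottom-face centre and radius
theta = np.linspace(0.0, np.pi, 9)         # tangential range (theta1, theta2)
h = np.linspace(0.0, 1.0, 5)               # axial range h
tt, hh = np.meshgrid(theta, h)
# Parametrise the cylinder wall: (theta, h) -> (x, y, z).
vertices = np.stack([p0[0] + r * np.cos(tt),
                     p0[1] + r * np.sin(tt),
                     p0[2] + hh], axis=-1)
print(vertices.shape)  # (5, 9, 3): an 8 x 4 grid of cells, 45 vertices
```

Each cell of this grid is one of the N meshes whose four vertices are subsequently optimized along the radial direction.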
step 904, searching the optimal solution of each vertex of the divided grids along the radial direction of the cylinder;
step 905, according to the original mesh topology, updating the coordinate position to be the optimal position obtained by radial optimization, and forming an optimized free mesh curved surface;
step 906, generating corresponding transformation parameters according to the corresponding relation of the coordinates;
The transformation parameters may be, for example, a single Homography matrix in the case of a relatively uniform transformation; in the case of a complex transformation, the coordinate-pair relation may be established directly and an interpolation function generated for the subsequent image transformation.
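For the uniform-transformation case, applying a single homography to pixel coordinates looks like the pure-NumPy sketch below; the translation matrix is a hypothetical example, not a parameter from the source:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography to Nx2 pixel coordinates
    (homogeneous multiply followed by perspective divide)."""
    homog = np.c_[pts, np.ones(len(pts))] @ H.T
    return homog[:, :2] / homog[:, 2:3]

# Hypothetical transformation parameters: pure translation by (10, 20) px.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0, 1.0]])
print(warp_points(H, np.array([[0.0, 0.0], [5.0, 5.0]])))
# [[10. 20.] [15. 25.]]
```

In an actual implementation the warp is applied densely (per pixel, with interpolation) rather than to a handful of points, but the coordinate mapping is the same.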
Step 907, transforming the input image (i.e. the initial scanned image) by using the generated transformation parameters to obtain a corrected front view result, i.e. the target scanned image.
Based on the foregoing embodiments, the present application provides an image scanning apparatus, which includes modules that can be implemented by a processor in a computer device; of course, the implementation can also be realized through a specific logic circuit; in implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 10 is a schematic structural diagram of an image scanning apparatus according to an embodiment of the present disclosure, and as shown in fig. 10, the apparatus 100 includes a data obtaining module 101, a curved surface detecting module 102, a curved surface optimizing module 103, and an image rectification module 104, where:
a data acquisition module 101, configured to acquire point cloud data of a scanned scene and an initial scanned image of the scanned scene;
a curved surface detection module 102, configured to perform curved surface detection on the point cloud data to obtain initial curved surface data;
the curved surface optimization module 103 is configured to optimize a curved surface represented by the initial curved surface data to obtain target curved surface data;
and the image correction module 104 is configured to correct the pixel coordinates of the pixel points in the initial scanned image according to the three-dimensional coordinates in the target curved surface data, so as to obtain a target scanned image.
In some embodiments, the curved surface detection module 102 is configured to: detecting the shape of the cylinder of the point cloud data to obtain a plurality of characteristic parameter values of a target object; and determining the initial curved surface data from the point cloud data according to the plurality of characteristic parameter values.
In some embodiments, the target object is shaped as a cylinder, and the plurality of characteristic parameter values include the axial direction, the axial range, the tangential direction and the tangential range of the cylinder; the curved surface optimization module 103 is configured to: mesh the curved surface represented by the initial curved surface data according to the axial direction, the axial range, the tangential direction, the tangential range and a specific meshing interval to obtain N meshes, where N is an integer greater than 0; perform an optimal solution search on each vertex of each mesh according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain the optimal three-dimensional coordinates of the corresponding vertex; and determine the optimal three-dimensional coordinates of each vertex as the target curved surface data.
In some embodiments, the plurality of characteristic parameter values further includes a radius of the cylinder and a center position of a cylinder bottom surface; a surface optimization module 103 configured to: determining a radial vector of a jth vertex according to the position of a curved surface of the jth vertex of the ith mesh, the radius of the cylinder and the central position of the bottom surface of the cylinder; wherein i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4; determining a search space from the initial curved surface data according to the radial vector; and determining the optimal three-dimensional coordinate of the jth vertex according to the three-dimensional coordinates of the sampling points in the search space.
In some embodiments, the search space employs a cube bounding box, and the surface optimization module 103 is configured to:
and determining the cube bounding box from the initial curved surface data by taking the jth vertex as the central point of the cube bounding box according to the axial length, the grid division interval in the tangential direction and the size of the radial vector.
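The cube-bounding-box selection can be sketched as a simple per-axis mask; here `half_extent` stands in for the size derived in the text from the axial length, the tangential grid division interval, and the radial vector magnitude (names and data are illustrative):

```python
import numpy as np

def cube_search_space(points, vertex, half_extent):
    """Keep the samples of the initial curved surface data that fall inside
    a cube bounding box centred on the j-th vertex."""
    mask = np.all(np.abs(points - vertex) <= half_extent, axis=1)
    return points[mask]

pts = np.array([[0.10, 0.00, 0.00],    # inside the box
                [0.90, 0.90, 0.90],    # outside
                [-0.05, 0.02, 0.03]])  # inside
inside = cube_search_space(pts, vertex=np.zeros(3), half_extent=0.2)
print(len(inside))  # 2
```

Only the points surviving this filter are then used for the barycentre (or other) optimal-solution search at that vertex.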
In some embodiments, the surface optimization module 103 is configured to: determining the three-dimensional coordinate of the gravity center of the search space according to the three-dimensional coordinate of the sampling point in the search space; and projecting the three-dimensional coordinate of the gravity center to the radial vector to obtain the optimal three-dimensional coordinate of the jth vertex.
In some embodiments, the surface optimization module 103 is configured to: performing Poisson surface reconstruction according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain an equivalent surface; determining a radial vector of a jth vertex according to the position of a curved surface of the jth vertex of the ith mesh, the radius of the cylinder and the central position of the bottom surface of the cylinder; wherein i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4; determining the intersection of the radial vector and the iso-surface; and determining the three-dimensional coordinates of the intersection point on the isosurface as the optimal three-dimensional coordinates of the jth vertex.
In some embodiments, the surface optimization module 103 is configured to: respectively back projecting the three-dimensional coordinates of the kth sampling point on the radial vector of the jth vertex of the ith grid to the imaging plane of each camera to obtain corresponding pixel coordinates; wherein i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4; determining an area block of each pixel coordinate on an image acquired by a corresponding camera according to a specific sampling window; determining a degree of correlation between each of the blocks of area; and under the condition that the correlation degree does not meet a specific condition, respectively back projecting the three-dimensional coordinates of the next sampling point of the radial vector onto the imaging plane of each camera until the determined correlation degree meets the specific condition, and determining the three-dimensional coordinates of the corresponding sampling point as the optimal three-dimensional coordinates of the jth vertex.
In some embodiments, the image rectification module 104 is configured to: determining a transformation relation between the target curved surface data and the reference surface data according to the optimal three-dimensional coordinate of each vertex in the target curved surface data and the three-dimensional coordinate of the corresponding vertex in the reference surface data; and correcting the pixel coordinates of the pixel points in the initial scanning image according to the transformation relation to obtain a target scanning image.
The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the image scanning method is implemented in the form of a software functional module and sold or used as a standalone product, the image scanning method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for enabling an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a robot, a drone, or the like) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides an electronic device, fig. 11 is a schematic diagram of a hardware entity of the electronic device according to the embodiment of the present application, and as shown in fig. 11, the hardware entity of the electronic device 110 includes: comprising a memory 111 and a processor 112, said memory 111 storing a computer program operable on the processor 112, said processor 112 implementing the steps in the image scanning method provided in the above embodiments when executing said program.
The Memory 111 is configured to store instructions and applications executable by the processor 112, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 112 and modules in the electronic device 110, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
Correspondingly, the embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the image scanning method provided in the above embodiment.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description, and do not represent the advantages and disadvantages of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the modules is only one logical functional division, and there may be other division ways in actual implementation, such as: multiple modules or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be electrical, mechanical or in other forms.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules; can be located in one place or distributed on a plurality of network units; some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional modules in the embodiments of the present application may be integrated into one processing unit, or each module may be separately regarded as one unit, or two or more modules may be integrated into one unit; the integrated module can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit described above may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for enabling an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a robot, a drone, or the like) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media that can store program code, such as removable storage devices, ROMs, magnetic or optical disks, etc.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided herein may be combined in any combination to arrive at a new method or apparatus embodiment without conflict.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Industrial applicability
In the embodiment of the application, after acquiring point cloud data of a scanning scene and an initial scanning image of the scanning scene, the electronic equipment performs curved surface detection on the point cloud data, and optimizes the initial curved surface data obtained by detection to obtain target curved surface data; therefore, on one hand, the electronic equipment can correct the pixel coordinates in the initial scanning image according to the three-dimensional coordinates in the target curved surface data, so that a more accurate and more precise scanning result (namely a target scanning image) is obtained; on the other hand, curved surface detection on point cloud data can be applied to more scanning scenes than plane detection, for example, the surfaces of a cylinder and a near cylinder can be scanned.

Claims (12)

  1. An image scanning method, the method comprising:
    acquiring an initial scanning image of a scanning scene and point cloud data of the scanning scene;
    carrying out curved surface detection on the point cloud data to obtain initial curved surface data;
    optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data;
    and correcting the pixel coordinates of the pixel points in the initial scanning image according to the three-dimensional coordinates in the target curved surface data to obtain a target scanning image.
  2. The method of claim 1, wherein performing surface detection on the point cloud data to obtain initial surface data comprises:
    carrying out cylinder shape detection on the point cloud data to obtain a plurality of characteristic parameter values of a target object;
    and determining the initial curved surface data from the point cloud data according to the plurality of characteristic parameter values.
  3. The method of claim 2, wherein the target object is in the shape of a cylinder, the plurality of characteristic parameter values include an axial direction, an axial range, a tangential direction and a tangential range of the cylinder, and the optimizing the surface represented by the initial surface data to obtain target surface data comprises:
    meshing the curved surface represented by the initial curved surface data according to the axial direction, the axial range, the tangential direction, the tangential range and a specific meshing interval to obtain N meshes, wherein N is an integer greater than 0;
    performing optimal solution search on each vertex of each grid according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain the optimal three-dimensional coordinates of the corresponding vertex;
    and determining the optimal three-dimensional coordinate of each vertex as the target curved surface data.
  4. The method according to claim 3, wherein the plurality of characteristic parameter values further include a radius of the cylinder and a center position of the cylinder bottom surface, and the performing an optimal solution search on each vertex of each mesh according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain the optimal three-dimensional coordinates of the corresponding vertex comprises:
    determining a radial vector of a jth vertex according to the position of a curved surface of the jth vertex of the ith mesh, the radius of the cylinder and the central position of the bottom surface of the cylinder; wherein i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4;
    determining a search space from the initial curved surface data according to the radial vector;
    and determining the optimal three-dimensional coordinate of the jth vertex according to the three-dimensional coordinates of the sampling points in the search space.
  5. The method of claim 4, wherein the search space employs a cube bounding box, and the determining a search space from the initial curved surface data according to the radial vector comprises:
    and determining the cube bounding box from the initial curved surface data by taking the jth vertex as the central point of the cube bounding box according to the axial length, the grid division interval in the tangential direction and the size of the radial vector.
  6. The method of claim 4, said determining optimal three-dimensional coordinates of said jth vertex from the three-dimensional coordinates of the sample points in the search space, comprising:
    determining the three-dimensional coordinate of the gravity center of the search space according to the three-dimensional coordinate of the sampling point in the search space;
    and projecting the three-dimensional coordinate of the gravity center to the radial vector to obtain the optimal three-dimensional coordinate of the jth vertex.
  7. The method according to claim 3, wherein the plurality of characteristic parameter values further include a radius of the cylinder and a center position of the cylinder bottom surface, and the performing an optimal solution search on each vertex of each mesh according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain the optimal three-dimensional coordinates of the corresponding vertex comprises:
    performing Poisson surface reconstruction according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain an equivalent surface;
    determining a radial vector of a jth vertex according to the position of a curved surface of the jth vertex of the ith mesh, the radius of the cylinder and the central position of the bottom surface of the cylinder; wherein i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4;
    determining the intersection of the radial vector and the iso-surface;
    and determining the three-dimensional coordinates of the intersection point on the isosurface as the optimal three-dimensional coordinates of the jth vertex.
  8. The method according to claim 3, wherein the performing an optimal solution search on each vertex of each mesh according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain the optimal three-dimensional coordinates of the corresponding vertex comprises:
    respectively back projecting the three-dimensional coordinates of the kth sampling point on the radial vector of the jth vertex of the ith mesh onto the imaging plane of each camera to obtain corresponding pixel coordinates; wherein i is an integer greater than 0 and less than or equal to N, j is an integer greater than 0 and less than or equal to 4, and k is an integer greater than 0;
    determining, according to a specific sampling window, an area block around each pixel coordinate on the image acquired by the corresponding camera;
    determining a degree of correlation between the area blocks;
    and under the condition that the degree of correlation does not meet a specific condition, respectively back projecting the three-dimensional coordinates of the next sampling point on the radial vector onto the imaging plane of each camera, until the determined degree of correlation meets the specific condition, and determining the three-dimensional coordinates of the corresponding sampling point as the optimal three-dimensional coordinates of the jth vertex.
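The correlation test in claim 8 is essentially multi-view patch matching: each candidate sampling point is back-projected into every camera, a window is cut around each resulting pixel, and the windows are compared. A minimal sketch of the comparison and search loop, using zero-normalised cross-correlation as the correlation measure — one common choice, not something the claim fixes — with hypothetical names throughout:

```python
import numpy as np

def zncc(patch_a, patch_b):
    """Zero-normalised cross-correlation of two equally sized area blocks;
    1.0 means identical up to brightness/contrast, near 0 means unrelated."""
    a = np.asarray(patch_a, float).ravel()
    b = np.asarray(patch_b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

def pick_best_sample(patches_per_sample, threshold=0.9):
    """Walk the sampling points along the radial vector (here, a precomputed
    list of (patch_cam1, patch_cam2) pairs) and return the index of the first
    sample whose two camera patches correlate strongly enough."""
    for k, (pa, pb) in enumerate(patches_per_sample):
        if zncc(pa, pb) >= threshold:
            return k
    return None  # no sampling point satisfied the specific condition
```

ZNCC is invariant to affine intensity changes between cameras, which is why it is a standard pick for this kind of cross-view block comparison; any other correlation measure could be substituted in `pick_best_sample` without changing the search structure.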
  9. The method according to any one of claims 3 to 8, wherein the correcting pixel coordinates of pixel points in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain a target scanned image comprises:
    determining a transformation relation between the target curved surface data and the reference surface data according to the optimal three-dimensional coordinate of each vertex in the target curved surface data and the three-dimensional coordinate of the corresponding vertex in the reference surface data;
    and correcting the pixel coordinates of the pixel points in the initial scanning image according to the transformation relation to obtain a target scanning image.
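Claim 9 estimates a transformation from vertex correspondences between the target curved surface and the reference surface. As one illustrative solver — the claim does not name one, nor restrict the transform class — a least-squares rigid transform via the Kabsch algorithm; all names are assumptions:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ≈ dst_i
    (Kabsch algorithm), from corresponding 3-D vertices given as (n, 3) arrays."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)   # centroids
    H = (src - cs).T @ (dst - cd)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Once `R` and `t` are known, the correction of claim 9's second step is applying this transform to the surface points and re-projecting them through the camera model to obtain corrected pixel coordinates; that projection step depends on the camera intrinsics and is omitted here.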
  10. An image scanning device, comprising:
    a data acquisition module, which is used for acquiring point cloud data of a scanned scene and an initial scanned image of the scanned scene;
    a curved surface detection module, which is used for carrying out curved surface detection on the point cloud data to obtain initial curved surface data;
    a curved surface optimization module, which is used for optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data;
    and an image correction module, which is used for correcting pixel coordinates of pixel points in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain a target scanned image.
  11. An electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, the processor implementing the steps in the image scanning method of any of claims 1 to 9 when executing the program.
  12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image scanning method of any one of claims 1 to 9.
CN202080093762.4A 2020-01-19 2020-01-19 Image scanning method and device, equipment and storage medium Pending CN114981845A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073038 WO2021142843A1 (en) 2020-01-19 2020-01-19 Image scanning method and device, apparatus, and storage medium

Publications (1)

Publication Number Publication Date
CN114981845A 2022-08-30

Family

ID=76863441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080093762.4A Pending CN114981845A (en) 2020-01-19 2020-01-19 Image scanning method and device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114981845A (en)
WO (1) WO2021142843A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115270378B (en) * 2022-09-28 2022-12-30 中国空气动力研究与发展中心计算空气动力研究所 Method for generating bow-shaped shock wave external field grid
CN115661104B (en) * 2022-11-04 2023-08-11 广东杰成新能源材料科技有限公司 Method, device, equipment and medium for evaluating overall integrity of power battery

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN103489218B (en) * 2013-09-17 2016-06-29 中国科学院深圳先进技术研究院 Point cloud data quality automatic optimization method and system
WO2017113260A1 (en) * 2015-12-30 2017-07-06 中国科学院深圳先进技术研究院 Three-dimensional point cloud model re-establishment method and apparatus
CN107767442B (en) * 2017-10-16 2020-12-25 浙江工业大学 Foot type three-dimensional reconstruction and measurement method based on Kinect and binocular vision
CN109344786A (en) * 2018-10-11 2019-02-15 深圳步智造科技有限公司 Target identification method, device and computer readable storage medium

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116500379A (en) * 2023-05-15 2023-07-28 珠海中瑞电力科技有限公司 Accurate positioning method for voltage drop of STS device
CN116500379B (en) * 2023-05-15 2024-03-08 珠海中瑞电力科技有限公司 Accurate positioning method for voltage drop of STS device

Also Published As

Publication number Publication date
WO2021142843A1 (en) 2021-07-22

Similar Documents

Publication Publication Date Title
EP3279803B1 (en) Picture display method and device
WO2020001168A1 (en) Three-dimensional reconstruction method, apparatus, and device, and storage medium
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
WO2020206903A1 (en) Image matching method and device, and computer readable storage medium
US10311595B2 (en) Image processing device and its control method, imaging apparatus, and storage medium
CN112771573A (en) Depth estimation method and device based on speckle images and face recognition system
CN109472820B (en) Monocular RGB-D camera real-time face reconstruction method and device
CN114981845A (en) Image scanning method and device, equipment and storage medium
JP2007000205A (en) Image processing apparatus, image processing method, and image processing program
EP3326156B1 (en) Consistent tessellation via topology-aware surface tracking
CN112083403B (en) Positioning tracking error correction method and system for virtual scene
CN113129249B (en) Depth video-based space plane detection method and system and electronic equipment
CN113592706B (en) Method and device for adjusting homography matrix parameters
CN114202632A (en) Grid linear structure recovery method and device, electronic equipment and storage medium
JP6086491B2 (en) Image processing apparatus and database construction apparatus thereof
KR102249381B1 (en) System for generating spatial information of mobile device using 3D image information and method therefor
CN112862736A (en) Real-time three-dimensional reconstruction and optimization method based on points
CN109166176B (en) Three-dimensional face image generation method and device
JP6080424B2 (en) Corresponding point search device, program thereof, and camera parameter estimation device
CN116342802A (en) Three-dimensional reconstruction method and device
US11908096B2 (en) Stereoscopic image acquisition method, electronic device and storage medium
CN113298871B (en) Map generation method, positioning method, system thereof, and computer-readable storage medium
CN115546027A (en) Image stitching line determining method, device and storage medium
CN112288817B (en) Three-dimensional reconstruction processing method and device based on image
CN114677439A (en) Camera pose determination method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination