CN115187664A - Position adjusting method, device, equipment and storage medium - Google Patents

Position adjusting method, device, equipment and storage medium

Info

Publication number
CN115187664A
CN115187664A (application CN202210770998.8A)
Authority
CN
China
Prior art keywords
scanning
frame
pairs
adjacent
feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210770998.8A
Other languages
Chinese (zh)
Inventor
赵斌涛
林忠威
张健
江腾飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shining 3D Technology Co Ltd
Original Assignee
Shining 3D Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shining 3D Technology Co Ltd filed Critical Shining 3D Technology Co Ltd
Priority to CN202210770998.8A (CN115187664A)
Publication of CN115187664A
Priority to PCT/CN2023/102126 (WO2024001960A1)
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/70 — Determining position or orientation of objects or cameras
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/40 — Analysis of texture
    • G06T7/41 — Analysis of texture based on statistical description of texture
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10004 — Still image; Photographic image
    • G06T2207/10008 — Still image; Photographic image from scanner, fax or copier

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Image Input (AREA)

Abstract

The disclosure relates to a position adjustment method, device, equipment, and storage medium. The method acquires all scanning frames collected by a camera in a target scanner at the current position; calculates the global error of all scanning frames at the current position based on the feature point pairs in adjacent scanning frames and the barycentric point pair in each scanning frame, where each barycentric point pair comprises the real barycenter of a scanning frame and the theoretical barycenter obtained through an inertial acquisition unit in the target scanner; and, if the global error does not meet a preset optimization condition, adjusts the current position of the camera until the global error corresponding to the adjusted current position meets the preset optimization condition, thereby obtaining the adjusted current position of the camera. In this way, during the adjustment of the camera position, the global error is calculated from both the feature information of adjacent scanning frames and the barycenter of each scanning frame, and the camera position is adjusted based on that global error, so that the position adjustment accuracy of the camera can be ensured.

Description

Position adjusting method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of three-dimensional scanning technologies, and in particular, to a method, an apparatus, a device, and a storage medium for position adjustment.
Background
In the process of scanning with a scanner, the position of the camera in the scanner needs to be adjusted constantly, so that the scanning frames can subsequently be stitched based on the adjusted camera position and a more accurate three-dimensional model can be constructed; adjusting the position of the camera in the scanner has therefore become an important link in the scanning process. Providing a method for adjusting the position of the camera in a scanner is thus an urgent technical problem.
Disclosure of Invention
In order to solve the technical problem, the present disclosure provides a position adjustment method, apparatus, device and storage medium.
In a first aspect, the present disclosure provides a position adjustment method, including:
acquiring all scanning frames acquired by a camera in a target scanner at the current position;
calculating the global errors of all the scanning frames at the current position based on the feature point pairs in the adjacent scanning frames and the barycentric point pairs in each scanning frame, wherein the barycentric point pairs comprise the real barycenter of each scanning frame and the theoretical barycenter obtained by an inertial acquisition unit in the target scanner;
and if the global error does not meet the preset optimization condition, adjusting the current position of the camera until the global error corresponding to the adjusted current position meets the preset optimization condition, and obtaining the adjusted current position of the camera.
In a second aspect, the present disclosure provides a position adjustment apparatus, comprising:
the scanning frame acquisition module is used for acquiring all scanning frames acquired by a camera in the target scanner at the current position;
the global error calculation module is used for calculating global errors of all scanning frames at the current position based on the feature point pairs in the adjacent scanning frames and the barycentric point pair in each scanning frame, wherein the barycentric point pair comprises a real barycenter of each scanning frame and a theoretical barycenter obtained by an inertial acquisition unit in the target scanner;
and the position adjusting module is used for adjusting the current position of the camera if the global error does not meet the preset optimization condition until the global error corresponding to the adjusted current position meets the preset optimization condition, so as to obtain the adjusted current position of the camera.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method provided by the first aspect.
In a fourth aspect, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method provided by the first aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
the position method, the position device, the position equipment and the storage medium of the embodiment of the disclosure acquire all scanning frames acquired by a camera in a current position in a target scanner; calculating the global error of all scanning frames at the current position based on the feature point pairs in the adjacent scanning frames and the barycentric point pairs in each scanning frame, wherein the barycentric point pairs comprise the real barycenter of each scanning frame and the theoretical barycenter obtained through an inertial acquisition unit in the target scanner; and if the global error does not meet the preset optimization condition, adjusting the current position of the camera until the global error corresponding to the adjusted current position meets the preset optimization condition, and obtaining the adjusted current position of the camera. By the method, in the process of adjusting the position of the camera, the feature information of adjacent scanning frames and the gravity center of each scanning frame are considered to calculate the global error, and the position of the camera is adjusted based on the global error.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a position adjustment method according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of another position adjustment method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a position adjustment apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
The embodiment of the disclosure provides a position adjusting method, a position adjusting device, position adjusting equipment and a storage medium.
The position adjustment method provided by the embodiment of the present disclosure is described below with reference to fig. 1 to 2. In the embodiment of the present disclosure, the position adjustment method may be performed by an electronic device or a server. The electronic device may include a tablet computer, a desktop computer, a notebook computer, and other devices having a communication function, and may also include devices simulated by a virtual machine or an emulator. The server may be a server cluster or a cloud server. This embodiment is explained with the electronic device as the execution subject.
Fig. 1 shows a schematic flow chart of a position adjustment method provided by an embodiment of the present disclosure.
As shown in fig. 1, the position adjustment method may include the following steps.
And S110, acquiring all the scanning frames acquired by the camera in the target scanner at the current position.
In practical application, during operation the target scanner scans the scanned object and photographs it with the camera, obtaining all the scanning frames acquired by the camera at the current position, and sends these scanning frames to the electronic device, so that the electronic device acquires all the scanning frames at the current position. Each scanning frame includes feature information of the scanned object.
In the embodiment of the present disclosure, the target scanner may be a handheld scanner, which is used for movably scanning the scanned object.
In the embodiment of the present disclosure, the feature information of the scanned object may include a landmark feature, a texture feature, a geometric feature, and the like.
The camera may include a black-and-white camera or a texture camera, among others. The black-and-white camera can be used to acquire the geometric features and the mark point features of the scanned object, and the texture camera can be used to acquire the texture features, so that all the scanning frames at the current position are obtained.
In the embodiments of the present disclosure, the current position refers to the position of the camera when the scanning frames are acquired. Specifically, the current position serves as a preliminary position for adjustment; if it is not suitable, it needs to be adjusted so that the camera reaches a standard position, which can then conveniently be used to construct a more accurate three-dimensional model.
And S120, calculating the global error of all the scanning frames at the current position based on the characteristic point pairs in the adjacent scanning frames and the barycentric point pairs in each scanning frame, wherein the barycentric point pairs comprise the real barycentric of each scanning frame and the theoretical barycentric acquired through an inertial acquisition unit in the target scanner.
In practical application, the electronic device searches feature point pairs from adjacent scanning frames, searches barycentric point pairs from each scanning frame, and calculates global errors of all scanning frames based on each feature point pair and each barycentric point pair, so that whether the current position of the camera is an accurate position or not is judged based on the global errors, and if not, the current position of the camera needs to be adjusted.
In the embodiment of the present disclosure, the characteristic point pair is constituted by actual characteristic points on the scanned object. Optionally, the feature point pairs may include one or more of landmark feature point pairs, texture feature point pairs, and geometric feature point pairs. Accordingly, the feature points in the scan frame may include any one of a landmark point, a texture feature point, and a geometric feature point.
In the embodiment of the present disclosure, the true center of gravity of each scan frame may be determined by the distribution of the feature points in the scan frame.
In the embodiment of the present disclosure, an Inertial Measurement Unit (IMU) may track the pose of the camera in real time, from which the center of gravity of each scanning frame acquired by the camera, i.e., the theoretical center of gravity, can be obtained.
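As a minimal illustration of the real barycenter described above — assuming, as the passage suggests, that it is determined by the distribution of feature points in a frame — it can be sketched as the mean of the frame's feature-point coordinates. The function name and the NumPy representation are illustrative, not from the patent:

```python
import numpy as np

def real_barycenter(feature_points):
    """Real barycenter of one scanning frame, estimated from the
    distribution of its feature points (here: the arithmetic mean)."""
    pts = np.asarray(feature_points, dtype=float)
    return pts.mean(axis=0)
```

The theoretical barycenter would come from the IMU track instead, giving the barycentric point pair (real, theoretical) used in the error terms below.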
In some embodiments, local errors of feature point pairs in adjacent scanning frames and barycentric point pairs in each scanning frame are calculated respectively, and then the obtained local errors are subjected to weighted summation to obtain global errors of all scanning frames.
In other embodiments, local errors of pairs of feature points in adjacent scan frames are calculated, and then the local errors are corrected based on pairs of barycentric points in each scan frame, resulting in a global error for all scan frames.
In still other embodiments, the local errors of pairs of barycentric points in each scan frame are calculated, and then the local errors are corrected based on pairs of feature points in adjacent scan frames, resulting in a global error for all scan frames.
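The first of these fusion strategies — weighted summation of the two local errors — might be sketched as follows. The weights and the choice of squared Euclidean distance are assumptions for illustration; the patent does not fix them:

```python
import numpy as np

def fused_global_error(feature_pairs, barycenter_pairs,
                       w_feature=1.0, w_barycenter=0.5):
    """Weighted sum of two local errors: squared distances of feature-point
    pairs in adjacent frames, and of barycentric point pairs per frame."""
    def sum_sq(pairs):
        return sum(float(np.sum((np.asarray(p, float) - np.asarray(q, float)) ** 2))
                   for p, q in pairs)
    return w_feature * sum_sq(feature_pairs) + w_barycenter * sum_sq(barycenter_pairs)
```

The other two strategies would instead compute one local error first and use the other set of pairs as a correction term.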
Therefore, in the embodiment of the disclosure, the global error is calculated by considering the feature information of the adjacent scanning frames and the gravity center of each scanning frame, and the calculation accuracy of the global error is ensured.
And S130, if the global error does not meet the preset optimization condition, adjusting the current position of the camera until the global error corresponding to the adjusted current position meets the preset optimization condition, and obtaining the adjusted current position of the camera.
In practical application, after obtaining the global error corresponding to the current position, the electronic device judges whether the global error meets the preset optimization condition; if not, it continuously adjusts the current position of the camera in the target scanner until the global error corresponding to the adjusted current position meets the preset optimization condition, thereby obtaining the adjusted current position of the camera.
In the embodiment of the present disclosure, the preset optimization condition is a preset condition for determining whether to adjust the camera position.
Specifically, the preset optimization conditions include: the global error is smaller than or equal to a preset error threshold value, and/or the change value of the global error within the preset adjustment times is smaller than a preset value.
The preset error threshold is a preset error value used to judge whether to adjust the position of the camera. The preset adjustment times are a number of consecutive adjustments, and the preset value is an error change value used to judge whether to continue adjusting the position of the camera.
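The two preset optimization conditions above (global error at or below a threshold, and/or a small change in the global error within a preset number of adjustments) can be sketched as a check over the history of global errors. All threshold values here are placeholders, not values from the patent:

```python
def optimization_condition_met(error_history, error_threshold=1e-3,
                               window=5, change_threshold=1e-6):
    """error_history: global errors from successive adjustments, newest last."""
    if error_history[-1] <= error_threshold:      # error is small enough
        return True
    if len(error_history) > window:               # error stopped improving
        if abs(error_history[-1] - error_history[-1 - window]) < change_threshold:
            return True
    return False
```

Adjustment of the camera position would continue in a loop until this check returns True.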
The position adjustment method of the embodiment of the disclosure acquires all scanning frames acquired by a camera in a target scanner at the current position; calculates the global error of all scanning frames at the current position based on the feature point pairs in adjacent scanning frames and the barycentric point pairs in each scanning frame, where the barycentric point pairs comprise the real barycenter of each scanning frame and the theoretical barycenter obtained through an inertial acquisition unit in the target scanner; and, if the global error does not meet the preset optimization condition, adjusts the current position of the camera until the global error corresponding to the adjusted current position meets the preset optimization condition, obtaining the adjusted current position of the camera. In this way, during the adjustment of the camera position, the global error is calculated from the feature information of adjacent scanning frames and the barycenter of each scanning frame, and the current position of the camera is adjusted based on the global error.
Further, after obtaining the adjusted current position of the camera, the method may further include the following steps:
and splicing the scanning frames based on the current position adjusted by the camera to generate a target three-dimensional model.
It should be noted that the adjusted current position of the camera may also refer to the adjusted current positions of all the scanning frames.
In an actual scene, let the position of the camera at the current position when acquiring each scanning frame be a, where a is the first coordinate data of each scanning frame in the camera coordinate system. After acquiring a, the electronic device converts it from the camera coordinate system to the world coordinate system using the transformation relationship between the two coordinate systems, obtaining the position a′ of the camera in the world coordinate system when acquiring each scanning frame, where a′ = a × RT; further, the scanning frames may be stitched based on the position a′ of each scanning frame to obtain an initial model.
Optimization based on steps S120 and S130 then yields the optimized position a″ of the camera in the world coordinate system when acquiring each scanning frame, where a″ = a × RT′ or a″ = a′ × RT; finally, all scanning frames are stitched based on a″ to obtain the target three-dimensional model.
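The transform a′ = a × RT above can be sketched in homogeneous coordinates using the row-vector convention implied by the formula; the 4×4 layout of RT (translation in the last row) is an assumption for illustration:

```python
import numpy as np

def camera_to_world(points, rt):
    """Apply a' = a x RT with a as row vectors in homogeneous coordinates.

    points: (N, 3) coordinates in the camera coordinate system.
    rt:     4x4 transform; with row vectors, the last row carries
            the translation.
    """
    pts = np.asarray(points, dtype=float)
    homogeneous = np.hstack([pts, np.ones((len(pts), 1))])
    return (homogeneous @ rt)[:, :3]
```

For example, a pure translation of +1 along x maps the camera-frame point (1, 2, 3) to the world-frame point (2, 2, 3).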
Wherein the target three-dimensional model is a virtual model of the scanned object as presented on the electronic device. Therefore, the three-dimensional model with high precision is obtained by splicing the scanning frames by using the camera position after the position adjustment.
In another embodiment of the present disclosure, corresponding local errors may be calculated from the feature point pairs and the barycentric point pairs respectively, and the two local errors are then fused to obtain the global error of all the scanning frames.
Fig. 2 is a schematic flow chart illustrating another position adjustment method according to an embodiment of the present disclosure.
As shown in fig. 2, the position adjustment method may include the following steps.
S210, acquiring all scanning frames acquired by a camera in the target scanner at the current position.
S210 is similar to S110, and is not described herein.
S220, calculating a first local error of the feature point pairs in the adjacent scanning frames, and calculating a second local error of the barycentric point pairs in each scanning frame, wherein the barycentric point pairs comprise a real barycenter of each scanning frame and a theoretical barycenter obtained by an inertial acquisition unit in the target scanner.
In some embodiments, the feature point pairs include landmark point feature point pairs, texture feature point pairs, and geometric feature point pairs; correspondingly, the method for calculating the first local error specifically comprises the following steps:
s2201, calculating a first distance of a mark point characteristic point pair in adjacent scanning frames;
s2202, calculating a second distance of the texture feature point pair in the adjacent scanning frames;
s2203, calculating a third distance of the geometric feature point pair in the adjacent scanning frames;
s2204, performing weighted summation on the first distance, the second distance, and the third distance to obtain a first local error.
Wherein, the mark point characteristic point pair is formed by mark points which are pasted on the scanned object in advance.
Wherein, the texture feature point pair is composed of texture feature points on the scanned object. Alternatively, the texture feature may be a color feature.
Wherein, the geometric characteristic point pair is composed of geometric characteristic points on the scanned object. Alternatively, the geometric feature may be a point cloud feature of the scanned object.
Wherein the first distance, the second distance, and the third distance may each be a squared Euclidean distance.
Wherein the first local error is used for representing the distance between the characteristic point pairs on the adjacent scanning frames.
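Steps S2201–S2204 can be sketched as a weighted sum of squared Euclidean distances over the three pair types. The weight values are illustrative, as the patent does not specify them:

```python
import numpy as np

def first_local_error(marker_pairs, texture_pairs, geometric_pairs,
                      weights=(1.0, 0.5, 0.5)):
    """First, second, and third distances, combined by weighted summation."""
    def sum_sq(pairs):  # squared Euclidean distance summed over all pairs
        return sum(float(np.sum((np.asarray(p, float) - np.asarray(q, float)) ** 2))
                   for p, q in pairs)
    d1 = sum_sq(marker_pairs)     # first distance: mark point pairs
    d2 = sum_sq(texture_pairs)    # second distance: texture feature pairs
    d3 = sum_sq(geometric_pairs)  # third distance: geometric feature pairs
    return weights[0] * d1 + weights[1] * d2 + weights[2] * d3
```

When only one or two pair types are available, the corresponding lists are simply empty, matching the variants described below.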
In other embodiments, the feature point pairs include any one of landmark point feature point pairs, texture feature point pairs, and geometric feature point pairs, and the distance corresponding to each feature point pair may be calculated, and the calculated distance is taken as the first local error.
In still other embodiments, the feature point pairs include any two of a landmark feature point pair, a texture feature point pair, and a geometric feature point pair, and distances corresponding to the two feature point pairs may be calculated, and then weighted sums of the distances corresponding to the two feature point pairs, respectively, to obtain the first local error.
In this embodiment of the present disclosure, optionally, the method for calculating the second local error specifically includes the following steps:
s2205, calculating a fourth distance between the barycentric point pair in each scanning frame to obtain a second local error.
Wherein the fourth distance may be a squared Euclidean distance. It can be understood that, because the accuracy of the data acquired by the IMU is low, a small weight may be set for the fourth distance, so as to obtain a small second local error and thereby prevent the data acquired by the IMU from degrading the position adjustment accuracy of the camera.
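Step S2205, with the down-weighting of the IMU-derived barycenter that the passage describes, might look like this; the weight value is an assumption:

```python
import numpy as np

def second_local_error(barycenter_pairs, imu_weight=0.1):
    """Fourth distance: squared Euclidean distance between the real and
    theoretical barycenter of each frame, down-weighted because the
    IMU data is comparatively inaccurate."""
    total = sum(float(np.sum((np.asarray(g, float) - np.asarray(t, float)) ** 2))
                for g, t in barycenter_pairs)
    return imu_weight * total
```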
And S230, carrying out weighted summation on the first local error and the second local error to obtain the global error of all the scanning frames at the current position.
In practical application, the electronic device may obtain weights corresponding to the first local error and the second local error respectively, perform weighted summation on the first local error and the second local error according to the weights corresponding to each other, and use a result of the weighted summation as a global error.
Therefore, in the embodiment of the disclosure, when a global error is calculated, at least one feature point pair and a barycentric point pair are used for distance calculation and two distances are fused, and the calculation method can ensure the calculation accuracy of the global error and improve the flexibility of the calculation method.
And S240, if the global error does not meet the preset optimization condition, adjusting the current position of the camera until the global error corresponding to the adjusted current position meets the preset optimization condition, and obtaining the adjusted current position of the camera.
S240 is similar to S130, and is not described herein.
In yet another embodiment of the present disclosure, for a feature point in one of the adjacent scanning frames, a corresponding feature point may be searched from another frame in a different manner to form a feature point pair in the adjacent scanning frame.
In some embodiments of the present disclosure, the pairs of characteristic points comprise pairs of landmark characteristic points; accordingly, the determination of the feature point pairs in the adjacent scanning frames includes:
s1, aiming at each mark point in one frame of adjacent scanning frames, searching a corresponding mark point from the other frame of the adjacent scanning frames according to a first preset radius to obtain a feature point pair in the adjacent scanning frames.
In practical application, for each mark point in one of the adjacent scanning frames, the electronic device may search for a corresponding mark point from another one of the adjacent scanning frames by using the mark point as a center of a circle and a first preset radius as a search range in a radius search manner, so as to obtain a mark point feature point pair in the adjacent scanning frame.
The first preset radius is a search range for searching for the mark point.
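A minimal sketch of the radius search in step S1 — every mark point in one frame paired with any mark point of the other frame lying inside the first preset radius. The radius value and the brute-force loop are illustrative; a spatial index would normally be used at scale:

```python
import numpy as np

def marker_point_pairs(frame_a, frame_b, radius=2.0):
    """Pair each mark point in frame_a with the mark points of frame_b
    lying within `radius` of it (radius search centered on the point)."""
    pairs = []
    for p in np.asarray(frame_a, dtype=float):
        for q in np.asarray(frame_b, dtype=float):
            if np.linalg.norm(p - q) <= radius:
                pairs.append((tuple(p), tuple(q)))
    return pairs
```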
In other embodiments of the present disclosure, the pairs of feature points comprise pairs of texture feature points; accordingly, the determination of the feature point pairs in the adjacent scanning frames includes:
and S2, aiming at each texture feature point in one frame of the adjacent scanning frames, searching a corresponding texture feature point from the other frame of the adjacent scanning frames according to a second preset radius to obtain a texture feature point pair in the adjacent scanning frames.
In practical application, for each texture feature point in one of the adjacent scanning frames, the electronic device may search for a corresponding texture feature point from another frame of the adjacent scanning frame by using the texture feature point as a center of a circle and a second preset radius as a search range in a radius search manner, so as to obtain a texture feature point pair in the adjacent scanning frame.
The second preset radius is a search range for searching the texture feature point.
In some embodiments, if a texture feature point is searched from another frame, each texture feature point in one frame and a corresponding texture feature point in another frame form a texture feature point pair in the adjacent scanning frames.
In other embodiments, if multiple texture feature points are searched from another frame, the most similar texture feature points are searched from another frame to form texture feature point pairs in adjacent scanning frames. For this case, S2 may specifically include the following steps:
s21, if a plurality of corresponding texture feature points are searched from another frame of the adjacent scanning frames, a plurality of texture feature point pairs in the adjacent scanning frames are obtained;
s22, calculating feature similarity of each texture feature point pair in adjacent scanning frames;
and S23, taking the texture feature point pair with the maximum feature similarity in the adjacent scanning frames as the final texture feature point pair in the adjacent scanning frames.
Wherein, the texture feature point pair in S21 is a preliminarily obtained feature point pair.
Wherein the feature similarity is used for characterizing the distance of each texture feature point pair. The larger the feature similarity is, the smaller the distance of each texture feature point pair is, and conversely, the smaller the feature similarity is, the larger the distance of each texture feature point pair is.
Optionally, the feature similarity may be a Hamming distance, a Euclidean distance, or the like.
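Steps S21–S23 — among the radius-search candidates, keep the texture feature point whose descriptor is most similar — might be sketched with binary descriptors compared by Hamming distance. The (position, integer-bit-descriptor) format is an assumption; the patent only names Hamming and Euclidean distance as similarity options:

```python
import numpy as np

def texture_point_pairs(frame_a, frame_b, radius=2.0):
    """frame_*: lists of (xyz, descriptor) with integer bit descriptors.
    Among candidates within `radius`, keep the most similar descriptor
    (smallest Hamming distance = largest feature similarity)."""
    pairs = []
    for pos_a, desc_a in frame_a:
        best_pair, best_hamming = None, None
        for pos_b, desc_b in frame_b:
            if np.linalg.norm(np.asarray(pos_a, float) - np.asarray(pos_b, float)) > radius:
                continue  # outside the second preset radius
            hamming = bin(desc_a ^ desc_b).count("1")
            if best_hamming is None or hamming < best_hamming:
                best_pair, best_hamming = (pos_a, pos_b), hamming
        if best_pair is not None:
            pairs.append(best_pair)
    return pairs
```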
In still other embodiments of the present disclosure, the pairs of feature points comprise pairs of geometric feature points; accordingly, the determination of the feature point pairs in the adjacent scanning frames includes:
and S3, searching the geometric feature point with the closest distance from the other frame of the adjacent scanning frame aiming at each geometric feature point in one frame of the adjacent scanning frames to obtain the geometric feature point pair in the adjacent scanning frames.
In practical application, for each geometric feature point in one of the adjacent scanning frames, the electronic device may search for a geometric feature point closest to a circle center in another frame of the adjacent scanning frame by using the geometric feature point as the circle center in a nearest neighbor search manner, so as to obtain a geometric feature point pair in the adjacent scanning frame.
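The nearest-neighbour search of step S3 can be sketched with a brute-force distance matrix; a k-d tree would be the usual choice for large point clouds, and this illustrative version favors clarity over speed:

```python
import numpy as np

def geometric_point_pairs(frame_a, frame_b):
    """Pair each geometric feature point of frame_a with the closest
    geometric feature point of frame_b (nearest-neighbour search)."""
    a = np.asarray(frame_a, dtype=float)
    b = np.asarray(frame_b, dtype=float)
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (Na, Nb)
    nearest = dists.argmin(axis=1)
    return [(tuple(a[i]), tuple(b[j])) for i, j in enumerate(nearest)]
```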
Therefore, in the embodiment of the present disclosure, for different feature points, corresponding feature points can be searched in another frame in different manners to form feature point pairs in adjacent scanning frames, and it is ensured that various feature point pair determination manners are in line with actual situations.
The embodiment of the present disclosure further provides a position adjustment device for implementing the position adjustment method, which is described below with reference to fig. 3. In the embodiment of the present disclosure, the position adjusting apparatus may be an electronic device or a server. The electronic device may include a tablet computer, a desktop computer, a notebook computer, and other devices having a communication function, and may also include a virtual machine or a simulator-simulated device. The server may be a cluster of servers or a cloud server.
Fig. 3 shows a schematic structural diagram of a position adjustment apparatus provided in an embodiment of the present disclosure.
As shown in fig. 3, the position adjustment apparatus 300 may include:
a scan frame acquiring module 310, configured to acquire all scan frames acquired by a camera in a target scanner at a current position;
a global error calculation module 320, configured to calculate global errors of all the scanning frames at the current position based on the feature point pairs in the adjacent scanning frames and the barycentric point pair in each scanning frame, where the barycentric point pair includes a real barycenter of each scanning frame and a theoretical barycenter obtained through an inertial acquisition unit in the target scanner;
the position adjusting module 330 is configured to adjust the current position of the camera if the global error does not satisfy the preset optimization condition, until the global error corresponding to the adjusted current position satisfies the preset optimization condition, to obtain the adjusted current position of the camera.
The position adjustment apparatus of the embodiment of the present disclosure acquires all scanning frames acquired by a camera in a target scanner at a current position; calculates global errors of all the scanning frames at the current position based on the feature point pairs in adjacent scanning frames and the barycentric point pairs in each scanning frame, where the barycentric point pairs include the real barycenter of each scanning frame and the theoretical barycenter obtained through an inertial acquisition unit in the target scanner; and, if the global error does not satisfy the preset optimization condition, adjusts the current position of the camera until the global error corresponding to the adjusted current position satisfies the preset optimization condition, thereby obtaining the adjusted current position of the camera. In this way, during camera position adjustment, both the feature information of adjacent scanning frames and the barycenter of each scanning frame are taken into account when calculating the global error, and the camera position is adjusted on the basis of that global error.
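The adjust-until-converged flow recapped above can be sketched as follows (a minimal illustration: the error function, the adjustment rule, and all names are placeholders rather than the patent's actual optimizer, and a simple error threshold stands in for the preset optimization condition):

```python
def adjust_camera_position(position, global_error_fn, adjust_fn,
                           error_threshold=1e-6, max_iters=100):
    """Repeatedly adjust the camera position until the global error
    meets the preset optimization condition (here: a simple threshold)."""
    error = global_error_fn(position)
    for _ in range(max_iters):
        if error <= error_threshold:           # preset optimization condition met
            break
        position = adjust_fn(position, error)  # move toward lower error
        error = global_error_fn(position)
    return position
```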
In some embodiments of the present disclosure, the global error calculation module 320 may include:
a local error calculation unit for calculating a first local error of a pair of feature points in adjacent scan frames and calculating a second local error of a pair of barycentric points in each scan frame;
and the global error calculation unit is used for carrying out weighted summation on the first local error and the second local error to obtain the global error of all the scanning frames at the current position.
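A minimal sketch of this weighted combination (the weight values are illustrative assumptions; the patent does not specify them):

```python
def global_error(first_local_error, second_local_error, w1=1.0, w2=1.0):
    """Global error of all scan frames at the current position:
    weighted sum of the feature-pair error and the barycenter-pair error."""
    return w1 * first_local_error + w2 * second_local_error
```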
In some embodiments of the present disclosure, the feature point pairs include landmark point feature point pairs, texture feature point pairs, and geometric feature point pairs;
correspondingly, the local error calculation unit is specifically configured to calculate a first distance between a marker point feature point pair in adjacent scanning frames;
calculating a second distance of the texture feature point pair in the adjacent scanning frames;
calculating a third distance of the geometric feature point pair in the adjacent scanning frames;
and carrying out weighted summation on the first distance, the second distance and the third distance to obtain a first local error.
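The first local error can be sketched as below. Averaging the pair distances of each feature type before weighting is an illustrative assumption (the patent only states a weighted summation of the first, second, and third distances), and the weights are placeholders:

```python
def first_local_error(marker_dists, texture_dists, geometric_dists,
                      w_marker=1.0, w_texture=1.0, w_geom=1.0):
    """Weighted sum of the (mean) pair distances for the three
    feature-point types in adjacent scan frames."""
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return (w_marker * mean(marker_dists)
            + w_texture * mean(texture_dists)
            + w_geom * mean(geometric_dists))
```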
In some embodiments of the present disclosure, the local error calculation unit is specifically configured to calculate a fourth distance between the barycentric point pair in each scanning frame, so as to obtain a second local error.
In some embodiments of the present disclosure, the pairs of characteristic points comprise pairs of landmark characteristic points;
correspondingly, the device also comprises: a mark point characteristic point pair determining module;
and the mark point characteristic point pair determining module is used for searching a corresponding mark point from another frame of the adjacent scanning frames according to the first preset radius aiming at each mark point in one frame of the adjacent scanning frames to obtain a mark point characteristic point pair in the adjacent scanning frames.
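The radius-bounded marker search can be sketched as follows (a brute-force illustration; accepting the closest marker within the first preset radius is an assumed tie-breaking rule, and all names are placeholders):

```python
import numpy as np

def pair_marker_points(markers_a, markers_b, radius):
    """For each marker point in one frame, look for a marker point in the
    other frame within the preset radius; pair it with the closest hit."""
    pairs = []
    for p in markers_a:
        dists = np.linalg.norm(markers_b - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= radius:           # only accept matches inside the radius
            pairs.append((p, markers_b[j]))
    return pairs
```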
In some embodiments of the present disclosure, the feature point pairs comprise texture feature point pairs;
correspondingly, the device also comprises: a texture feature point pair determining module;
and the texture feature point pair determining module is used for searching a corresponding texture feature point from another frame of the adjacent scanning frames according to a second preset radius aiming at each texture feature point in one frame of the adjacent scanning frames to obtain a texture feature point pair in the adjacent scanning frames.
In some embodiments of the present disclosure, the texture feature point pair determining module includes:
the texture feature point searching unit is used for obtaining a plurality of texture feature point pairs in the adjacent scanning frames if a plurality of corresponding texture feature points are found in the other frame of the adjacent scanning frames;
the characteristic similarity calculation unit is used for calculating the characteristic similarity of each texture characteristic point pair in the adjacent scanning frames;
and the texture feature point pair determining unit is used for taking the texture feature point pair with the maximum feature similarity in the adjacent scanning frames as the final texture feature point pair in the adjacent scanning frames.
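Selecting the most similar candidate pair can be sketched as follows (`similarity_fn` is a placeholder for whatever descriptor similarity is used; the patent does not specify the measure):

```python
def best_texture_pair(candidate_pairs, similarity_fn):
    """When the radius search returns several candidate texture points,
    keep the pair whose feature similarity is largest."""
    return max(candidate_pairs, key=lambda pair: similarity_fn(*pair))
```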
In some embodiments of the present disclosure, the pairs of feature points comprise pairs of geometric feature points;
correspondingly, the device also comprises: a geometric characteristic point pair determining module;
and the geometric feature point pair determining module is used for searching the geometric feature point with the closest distance from the other frame of the adjacent scanning frames aiming at each geometric feature point in one frame of the adjacent scanning frames to obtain the geometric feature point pair in the adjacent scanning frames.
In some embodiments of the present disclosure, the preset optimization conditions include: the global error is smaller than or equal to a preset error threshold value, and/or the change value of the global error within the preset adjustment times is smaller than a preset value.
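The two stopping criteria can be combined as sketched below (names and the exact windowing rule are illustrative assumptions):

```python
def optimization_condition_met(errors, error_threshold, window, change_threshold):
    """Preset optimization condition: the latest global error is at or below
    the threshold, or the error changed by less than change_threshold over
    the last `window` adjustments."""
    if errors[-1] <= error_threshold:
        return True
    if len(errors) > window and abs(errors[-1 - window] - errors[-1]) < change_threshold:
        return True
    return False
```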
In some embodiments of the present disclosure, the apparatus further comprises:
and the model generation module is used for splicing all the scanning frames based on the current position adjusted by the camera to generate a target three-dimensional model.
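Stitching frames with the adjusted pose can be sketched as follows, assuming each frame's adjusted camera pose is available as a rotation matrix R and translation vector t (an illustrative rigid-transform formulation; the patent does not detail the stitching step):

```python
import numpy as np

def stitch_frames(frames, poses):
    """Transform each scan frame's Nx3 points into a common coordinate
    system using its adjusted pose (R, t) and concatenate the results."""
    clouds = []
    for points, (R, t) in zip(frames, poses):
        clouds.append(points @ R.T + t)  # apply rigid transform to each point
    return np.vstack(clouds)
```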
It should be noted that the position adjustment apparatus 300 shown in fig. 3 may perform each step in the method embodiment shown in fig. 1 to fig. 2, and implement each process and effect in the method embodiment shown in fig. 1 to fig. 2, which are not described herein again.
Fig. 4 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
As shown in fig. 4, the electronic device may include a processor 401 and a memory 402 having computer program instructions stored therein.
In particular, the processor 401 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 402 may include mass storage for data or instructions. By way of example, and not limitation, memory 402 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 402 may include removable or non-removable (or fixed) media, where appropriate. The memory 402 may be internal or external to the integrated gateway device, where appropriate. In a particular embodiment, the memory 402 is non-volatile solid-state memory. In a particular embodiment, the memory 402 includes a read-only memory (ROM). The ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these, where appropriate.
The processor 401 reads and executes the computer program instructions stored in the memory 402 to execute the steps of the position adjustment method provided by the embodiments of the present disclosure.
In one example, the electronic device can also include a transceiver 403 and a bus 404. As shown in fig. 4, the processor 401, the memory 402 and the transceiver 403 are connected via a bus 404 to complete communication therebetween.
Bus 404 comprises hardware, software, or both. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or a combination of two or more of these, or another suitable bus or interconnect. Bus 404 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the present application, any suitable bus or interconnect is contemplated.
The following is an embodiment of a computer-readable storage medium provided in an embodiment of the present disclosure, the computer-readable storage medium and the position adjusting method in the foregoing embodiments belong to the same inventive concept, and details that are not described in detail in the embodiment of the computer-readable storage medium may refer to the embodiment of the position adjusting method.
The present embodiments provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are operable to perform a method of position adjustment, the method comprising:
acquiring all scanning frames acquired by a camera in a target scanner at the current position;
calculating the global errors of all the scanning frames at the current position based on the feature point pairs in the adjacent scanning frames and the barycentric point pairs in each scanning frame, wherein the barycentric point pairs comprise the real barycenter of each scanning frame and the theoretical barycenter obtained by an inertial acquisition unit in the target scanner;
and if the global error does not meet the preset optimization condition, adjusting the current position of the camera until the global error corresponding to the adjusted current position meets the preset optimization condition, and obtaining the adjusted current position of the camera.
Of course, the storage medium provided by the embodiments of the present disclosure contains computer-executable instructions, and the computer-executable instructions are not limited to the above method operations, and may also perform related operations in the position adjustment method provided by any embodiments of the present disclosure.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present disclosure can be implemented by software together with necessary general-purpose hardware, and can certainly also be implemented by hardware alone, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disc of a computer, and which includes several instructions for enabling a computer cloud platform (which may be a personal computer, a server, a network cloud platform, or the like) to execute the position adjustment method provided in the embodiments of the present disclosure.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present disclosure and the technical principles employed. Those skilled in the art will appreciate that the present disclosure is not limited to the specific embodiments illustrated herein and that various obvious changes, adaptations, and substitutions are possible, without departing from the scope of the present disclosure. Therefore, although the present disclosure has been described in greater detail with reference to the above embodiments, the present disclosure is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present disclosure, the scope of which is determined by the scope of the appended claims.

Claims (13)

1. A position adjustment method, comprising:
acquiring all scanning frames acquired by a camera in a target scanner at the current position;
calculating global errors of all the scanning frames at the current position based on feature point pairs in adjacent scanning frames and barycentric point pairs in each scanning frame, wherein the barycentric point pairs comprise a real barycenter of each scanning frame and a theoretical barycenter obtained by an inertial acquisition unit in the target scanner;
and if the global error does not meet the preset optimization condition, adjusting the current position of the camera until the global error corresponding to the adjusted current position meets the preset optimization condition, and obtaining the adjusted current position of the camera.
2. The method of claim 1, wherein the calculating global errors for all the scan frames at the current position based on pairs of feature points in neighboring scan frames and pairs of barycentric points in each scan frame comprises:
calculating a first local error of the feature point pairs in adjacent scanning frames, and calculating a second local error of the barycentric point pair in each scanning frame;
and performing weighted summation on the first local error and the second local error to obtain a global error of all the scanning frames at the current position.
3. The method of claim 2, wherein the pairs of feature points comprise pairs of landmark feature points, pairs of texture feature points, and pairs of geometric feature points;
accordingly, the calculating a first local error of the feature point pairs in adjacent scanning frames includes:
calculating a first distance between the mark point feature point pairs in the adjacent scanning frames;
calculating a second distance between the texture feature point pairs in the adjacent scanning frames;
calculating a third distance between the geometric feature point pairs in the adjacent scanning frames;
and carrying out weighted summation on the first distance, the second distance and the third distance to obtain the first local error.
4. The method of claim 2, wherein said calculating a second local error for a pair of barycentric points in each of said scan frames comprises:
and calculating a fourth distance of the barycentric point pair in each scanning frame to obtain the second local error.
5. The method of claim 1, wherein the pairs of characteristic points comprise pairs of landmark characteristic points;
accordingly, the determination of the pairs of feature points in adjacent scan frames includes:
and for each mark point in one frame of the adjacent scanning frames, searching for a corresponding mark point in the other frame of the adjacent scanning frames according to a first preset radius, so as to obtain a mark point feature point pair in the adjacent scanning frames.
6. The method of claim 1, wherein the pairs of feature points comprise pairs of texture feature points;
accordingly, the determination of the pairs of feature points in adjacent scan frames includes:
and for each texture feature point in one frame of the adjacent scanning frames, searching for a corresponding texture feature point in the other frame of the adjacent scanning frames according to a second preset radius, so as to obtain a texture feature point pair in the adjacent scanning frames.
7. The method according to claim 6, wherein the searching, for each texture feature point in one frame of the adjacent scanning frames, for a corresponding texture feature point in the other frame of the adjacent scanning frames according to a second preset radius to obtain a texture feature point pair in the adjacent scanning frames comprises:
if a plurality of corresponding texture feature points are found in the other frame of the adjacent scanning frames, obtaining a plurality of texture feature point pairs in the adjacent scanning frames;
calculating feature similarity for each texture feature point pair in the adjacent scanning frames;
and taking the texture feature point pair with the maximum feature similarity in the adjacent scanning frames as the final texture feature point pair in the adjacent scanning frames.
8. The method of claim 1, wherein the pairs of characteristic points comprise pairs of geometric characteristic points;
accordingly, the determination of the pairs of feature points in adjacent scan frames includes:
and for each geometric feature point in one frame of the adjacent scanning frames, searching for the closest geometric feature point in the other frame of the adjacent scanning frames, so as to obtain a geometric feature point pair in the adjacent scanning frames.
9. The method according to claim 1, wherein the preset optimization condition comprises: the global error is smaller than or equal to a preset error threshold value, and/or the change value of the global error within the preset adjustment times is smaller than a preset value.
10. The method according to any one of claims 1 to 9, further comprising:
and splicing the scanning frames based on the current position adjusted by the camera to generate a target three-dimensional model.
11. A position adjustment device, comprising:
the scanning frame acquisition module is used for acquiring all scanning frames acquired by a camera in the current position in the target scanner;
a global error calculation module, configured to calculate global errors of all the scanning frames at the current position based on feature point pairs in adjacent scanning frames and barycentric point pairs in each scanning frame, where the barycentric point pairs include a true barycenter of each scanning frame and a theoretical barycenter obtained through an inertial acquisition unit in the target scanner;
and the position adjusting module is used for adjusting the current position of the camera if the global error does not meet the preset optimization condition until the global error corresponding to the adjusted current position meets the preset optimization condition, so as to obtain the adjusted current position of the camera.
12. An electronic device, comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the method of any of claims 1-10.
13. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, causes the processor to implement the method of any one of claims 1-10.
CN202210770998.8A 2022-06-30 2022-06-30 Position adjusting method, device, equipment and storage medium Pending CN115187664A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210770998.8A CN115187664A (en) 2022-06-30 2022-06-30 Position adjusting method, device, equipment and storage medium
PCT/CN2023/102126 WO2024001960A1 (en) 2022-06-30 2023-06-25 Position adjustment method and apparatus, and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210770998.8A CN115187664A (en) 2022-06-30 2022-06-30 Position adjusting method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115187664A true CN115187664A (en) 2022-10-14

Family

ID=83515885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210770998.8A Pending CN115187664A (en) 2022-06-30 2022-06-30 Position adjusting method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN115187664A (en)
WO (1) WO2024001960A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024001960A1 (en) * 2022-06-30 2024-01-04 先临三维科技股份有限公司 Position adjustment method and apparatus, and device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8437497B2 (en) * 2011-01-27 2013-05-07 Seiko Epson Corporation System and method for real-time image retensioning and loop error correction
CN106651942B (en) * 2016-09-29 2019-09-17 苏州中科广视文化科技有限公司 Three-dimensional rotating detection and rotary shaft localization method based on characteristic point
CN113313763B (en) * 2021-05-26 2023-06-23 珠海深圳清华大学研究院创新中心 Monocular camera pose optimization method and device based on neural network
CN115187664A (en) * 2022-06-30 2022-10-14 先临三维科技股份有限公司 Position adjusting method, device, equipment and storage medium


Also Published As

Publication number Publication date
WO2024001960A1 (en) 2024-01-04

Similar Documents

Publication Publication Date Title
WO2020107326A1 (en) Lane line detection method, device and computer readale storage medium
CN113256718B (en) Positioning method and device, equipment and storage medium
CN111833447A (en) Three-dimensional map construction method, three-dimensional map construction device and terminal equipment
CN115170893B (en) Training method of common-view gear classification network, image sorting method and related equipment
CN114862828A (en) Light spot searching method and device, computer readable medium and electronic equipment
CN115187664A (en) Position adjusting method, device, equipment and storage medium
US20230005216A1 (en) Three-dimensional model generation method and three-dimensional model generation device
CN115131437A (en) Pose estimation method, and training method, device, equipment and medium of relevant model
CN112258647B (en) Map reconstruction method and device, computer readable medium and electronic equipment
CN113902932A (en) Feature extraction method, visual positioning method and device, medium and electronic equipment
CN114283089A (en) Jump acceleration based depth recovery method, electronic device, and storage medium
CN110929644B (en) Heuristic algorithm-based multi-model fusion face recognition method and device, computer system and readable medium
CN109741370B (en) Target tracking method and device
CN111814811A (en) Image information extraction method, training method and device, medium and electronic equipment
CN115187663A (en) Scanner attitude positioning method, device, equipment and storage medium
JP2004028811A (en) Device and method for correcting distance for monitoring system
CN115471808A (en) Processing method and device for reconstructing target point cloud based on reference image
CN114782496A (en) Object tracking method and device, storage medium and electronic device
CN110866535B (en) Disparity map acquisition method and device, computer equipment and storage medium
CN113887290A (en) Monocular 3D detection method and device, electronic equipment and storage medium
CN115249407A (en) Indicating lamp state identification method and device, electronic equipment, storage medium and product
CN112734797A (en) Image feature tracking method and device and electronic equipment
CN114783041B (en) Target object recognition method, electronic device, and computer-readable storage medium
CN113657311B (en) Identification region ordering method, identification region ordering system, electronic equipment and storage medium
TW201927608A (en) Obstacle detection reliability assessment method capable of timely providing reliability of obstacle detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination