CN117635702A - Positioning method of image acquisition equipment, projection system and storage medium - Google Patents

Positioning method of image acquisition equipment, projection system and storage medium

Info

Publication number
CN117635702A
CN117635702A (application CN202210981960.5A)
Authority
CN
China
Prior art keywords
image
excessive
image acquisition
data
pose change
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210981960.5A
Other languages
Chinese (zh)
Inventor
赵振宇 (Zhao Zhenyu)
李屹 (Li Yi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Appotronics Corp Ltd
Original Assignee
Appotronics Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Appotronics Corp Ltd filed Critical Appotronics Corp Ltd
Priority to CN202210981960.5A priority Critical patent/CN117635702A/en
Publication of CN117635702A publication Critical patent/CN117635702A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application discloses a positioning method of an image acquisition device, a projection system, and a storage medium. The positioning method comprises: acquiring a first image, a second image, and inertial measurement data, wherein the inertial measurement data represent the inertial changes of the image acquisition device as it transforms from the pose at the image acquisition view angle of the first image to the pose at the image acquisition view angle of the second image; obtaining excessive (that is, transitional) pose change data of the image acquisition device from a first position to a second position based on the inertial measurement data; generating an excessive image based on the excessive pose change data and the first image; obtaining target pose change data of the image acquisition device from the first position to the second position based on the excessive image and the second image; and determining the relative position between the first position and the second position based on the target pose change data, so as to complete the positioning of the image acquisition device at the second position. In this way, the accuracy of the positioning method is ensured while its efficiency is improved.

Description

Positioning method of image acquisition equipment, projection system and storage medium
Technical Field
The present disclosure relates to the field of device positioning technologies, and in particular, to a positioning method, a projection system, and a storage medium for an image acquisition device.
Background
In the prior art, three-dimensional measurement and other operations that require multi-angle shooting are performed with an image acquisition device, which usually has to be moved to a plurality of positions to acquire images.
The drawback of the prior art is that, when such operations are performed and the image acquisition device is moved to a plurality of positions to acquire the corresponding images, the relative positional relationship among these positions must be determined. Among the existing methods for determining this relationship, positioning based on inertial measurement data is efficient but has low accuracy, whereas positioning based on a digital image correlation algorithm is accurate but inefficient.
Disclosure of Invention
The technical problem mainly solved by this application is how to ensure the accuracy of the positioning method while improving its efficiency.
In order to solve the above technical problem, a first technical solution adopted by the present application is a method of positioning an image acquisition device, comprising: acquiring a first image, a second image, and inertial measurement data, wherein the first image and the second image are images formed by a projection device projecting a source image onto a reference surface, the image acquisition view angles corresponding to the first image and the second image are different, and the inertial measurement data represent the inertial changes of the image acquisition device as it transforms from the pose at the image acquisition view angle of the first image to the pose at the image acquisition view angle of the second image; obtaining excessive (transitional) pose change data of the image acquisition device from a first position to a second position based on the inertial measurement data, wherein the first position is the position of the image acquisition device at the image acquisition view angle of the first image and the second position is its position at the image acquisition view angle of the second image; generating an excessive image based on the excessive pose change data and the first image; obtaining target pose change data of the image acquisition device from the first position to the second position based on the excessive image and the second image; and determining the relative position between the first position and the second position based on the target pose change data, so as to complete the positioning of the image acquisition device at the second position.
In order to solve the above technical problem, a second technical solution adopted by the present application is a projection system, comprising: an acquisition module configured to acquire a first image, a second image, and inertial measurement data, wherein the first image and the second image are images formed by a projection device projecting a source image onto a reference surface, the image acquisition view angles corresponding to the first image and the second image are different, and the inertial measurement data represent the inertial changes of the image acquisition device as it transforms from the pose at the image acquisition view angle of the first image to the pose at the image acquisition view angle of the second image; and a processing module configured to: obtain excessive pose change data of the image acquisition device from a first position to a second position based on the inertial measurement data, wherein the first position is the position of the image acquisition device at the image acquisition view angle of the first image and the second position is its position at the image acquisition view angle of the second image; generate an excessive image based on the excessive pose change data and the first image; obtain target pose change data of the image acquisition device from the first position to the second position based on the excessive image and the second image; and determine the relative position between the first position and the second position based on the target pose change data, so as to complete the positioning of the image acquisition device at the second position.
In order to solve the above technical problem, a third technical solution adopted by the present application is a computer readable storage medium storing program instructions which, when executed by a processor, implement the above-described method.
The beneficial effects of this application lie in the following. In the technical solution of the application, a first image and a second image formed by the projection device projecting a source image onto the reference surface are acquired, together with the inertial measurement data recorded while the image acquisition device transforms from the pose at the image acquisition view angle of the first image to the pose at the image acquisition view angle of the second image, and the excessive pose change data of the image acquisition device from the first position (corresponding to the image acquisition view angle of the first image) to the second position (corresponding to the image acquisition view angle of the second image) are obtained from the inertial measurement data. The excessive image generated from the excessive pose change data is similar to the second image, so the computation required by the digital image correlation algorithm when obtaining the target pose change data from the excessive image and the second image is reduced. The computational efficiency of the digital image correlation algorithm is therefore improved while the accuracy of a positioning method based on that algorithm is retained, so that the accuracy of the positioning method is ensured and its efficiency is improved at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present invention; other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a first embodiment of a positioning method of an image acquisition device of the present application;
FIG. 2 is a flow chart of a second embodiment of a positioning method of an image acquisition device of the present application;
FIG. 3 is a schematic diagram of an embodiment of a projection system of the present application;
FIG. 4 is a schematic diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and embodiments. It is specifically noted that the following embodiments are only intended to illustrate the present application and do not limit its scope. Likewise, the following embodiments are only some, not all, of the embodiments of the present application, and all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the description of the present application, it is to be understood that, unless otherwise explicitly specified and defined, the terms "mounted," "configured," and "connected" are to be interpreted broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; and it may be a direct connection or a connection via an intermediate medium. For a person of ordinary skill in the art, the specific meanings of the above terms in this application can be understood according to the specific circumstances.
Before the positioning method of the image acquisition device is described in detail, it should be noted that the positioning method is based on projection technology. A positioning environment including a reference surface (such as a wall or a background board), a projection device, and an image acquisition device (such as a camera) needs to be constructed in advance; the projection device is used to project a source image (such as a speckle image) onto the reference surface, and the lens of the image acquisition device is directed toward the reference surface to acquire the corresponding images.
The application first discloses a positioning method of an image acquisition device. Referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of the positioning method of the image acquisition device of the application. As shown in fig. 1, the positioning method includes:
step S11: a first image, a second image, and inertial measurement data are acquired.
The first image and the second image are images formed by the projection device projecting a source image onto the reference surface; the image acquisition view angles corresponding to the first image and the second image are different; and the inertial measurement data represent the inertial changes of the image acquisition device as it transforms from the pose at the image acquisition view angle of the first image to the pose at the image acquisition view angle of the second image.
The reference surface may be a plane, such as a wall or a background board, or a non-planar surface. That is, it may be a plane with high gain, a single color, and good flatness, or a non-planar surface with small undulations; the choice depends on the required measurement accuracy. If the accuracy requirement is high, a plane should be selected as the reference surface; if it is low, either a plane or a non-planar surface with small undulations may be selected. This is not limited herein.
The source image can be a black-and-white speckle image, a black-and-white-gray speckle image, or another type of speckle image, in which the size, position, and brightness of the speckles are completely random. When the speckle image is generated, the correlation coefficients between the random speckle patterns of different regions in the image can be calculated in advance, and one of any two patterns whose correlation coefficient is too high can be removed, so that the correlation coefficient between the random speckle patterns of any two regions does not exceed a preset correlation coefficient threshold and the resulting speckle image has strong randomness. In addition, the speckle image can be designed at the maximum resolution the projection device can project (such as 1920×1080), with brightness covering the 0-255 gray scale, to facilitate subsequent three-dimensional measurement.
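To make the speckle-generation step concrete, the following is a minimal Python sketch (not from the patent) that tiles the projector resolution with random gray-level speckle blocks and regenerates any block whose zero-normalized cross-correlation with an already accepted block exceeds a preset threshold; the block size, threshold, and retry count are illustrative assumptions.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def generate_speckle(width=1920, height=1080, block=64,
                     corr_threshold=0.3, max_retries=50, seed=0):
    """Tile the image with random 0-255 speckle blocks, keeping the
    correlation between any two accepted blocks below the threshold."""
    rng = np.random.default_rng(seed)
    image = np.zeros((height, width), dtype=np.uint8)
    accepted = []
    for top in range(0, height - block + 1, block):
        for left in range(0, width - block + 1, block):
            patch = rng.integers(0, 256, (block, block)).astype(np.float64)
            for _ in range(max_retries):
                if all(zncc(patch, p) <= corr_threshold for p in accepted):
                    break
                patch = rng.integers(0, 256, (block, block)).astype(np.float64)
            accepted.append(patch)
            image[top:top + block, left:left + block] = patch.astype(np.uint8)
    return image
```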
The image acquisition device may be a mobile phone, a single-lens reflex camera, or any other device capable of recording images, and its imaging resolution should be no lower than the resolution of the projected picture so that the subsequent three-dimensional measurement achieves the best effect. The image acquisition device can be moved to different positions so that it can acquire images at different image acquisition view angles relative to the reference surface and/or the object to be measured.
The first image can be acquired by the image acquisition device at the first position and the second image at the second position, and the inertial measurement data can be recorded throughout the movement by the inertial measurement module in the image acquisition device.
The first image and the second image may also be two frames of video data recorded while moving from the first position to the second position; the manner of acquiring the first image and the second image is not limited herein.
It should be noted that, while step S12 is performed, the positional relationship between the projection device and the reference surface should be maintained, or any change in that relationship should be kept within a certain range.
Step S12: and obtaining excessive pose change data of the image acquisition equipment from the first position to the second position based on the inertial measurement data.
The first position is the position of the image acquisition device under the image acquisition view angle of the first image, and the second position is the position of the image acquisition device under the image acquisition view angle of the second image.
The pose change of the image acquisition device from the first position to the second position can be determined from the corresponding inertial measurement data, and the corresponding excessive (transitional) pose change data can be obtained from that pose change.
Step S13: an excessive image is generated based on the excessive pose change data and the first image.
Based on the first image and the excessive pose change data of the image acquisition device as it moves from the first position, where the first image is acquired, to the second position, where the second image is acquired, an excessive image that the device is predicted to acquire at the second position can be generated.
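As one way to picture this step: if the reference surface is treated as a plane, the excessive pose change (R, T) induces a plane homography that warps the first image into the predicted second view. The sketch below assumes a planar reference surface satisfying nᵀX = d in the first camera frame and uses OpenCV for the warp; the plane parameters n and d, and the use of cv2, are assumptions, not the patent's prescription. The same homography also directly provides the pixel matching relationship between the first image and the excessive image used later.

```python
import cv2
import numpy as np

def transition_image_from_pose(first_image, K, R, T,
                               n=np.array([0.0, 0.0, 1.0]), d=1.0):
    """Warp the first image into the view predicted at the second position.

    Assumes the reference surface is a plane n^T X = d in the first camera
    frame; the induced homography is H = K (R + T n^T / d) K^-1, mapping
    first-image pixels to predicted second-image pixels."""
    K = np.asarray(K, dtype=np.float64)
    H = K @ (np.asarray(R) + np.outer(np.asarray(T).reshape(3), n) / d) @ np.linalg.inv(K)
    h, w = first_image.shape[:2]
    warped = cv2.warpPerspective(first_image, H, (w, h))  # predicted excessive image
    return warped, H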
Step S14: and obtaining target pose change data of the image acquisition equipment from the first position to the second position based on the excessive image and the second image.
The target pose change data of the image acquisition device from the first position to the second position can be obtained based on a digital image correlation algorithm, the excessive image, and the second image.
The digital image correlation method (digital image correlation, DIC), also called the digital speckle correlation method, obtains deformation information of a region of interest by correlation calculation on two digital images of a test piece taken before and after deformation. By performing feature matching on each pixel in the two images, the displacement of each pixel can be determined, and the deformation information between the two images is thereby obtained.
When the excessive image and the second image are matched by the digital image correlation algorithm to obtain the deformation information, a sub-pixel search method based on an interpolation algorithm can be used to search for feature-matching pixel points, so as to reach sub-pixel measurement accuracy. For example, when searching for a pixel matching a feature of the first image, if the candidate coordinate is a non-integer, a new gray value at that coordinate can be interpolated from the brightness values of the neighboring pixels in the second image, and the matching pixel can then be searched on the basis of this interpolated value.
The excessive image is the image that, according to the inertial measurement data, the image acquisition device is estimated to acquire at the second position, while the second image is the image actually acquired there; the two are therefore the same or very similar. Consequently, when the excessive image and the second image are matched by the digital image correlation algorithm to determine the target pose change data of the image acquisition device from the first position to the second position, the computation required by the algorithm is very small, which improves the processing speed of the whole positioning method.
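A minimal sub-pixel matching sketch along these lines is shown below: because the excessive image already approximates the second image, each reference subset only needs to be compared against a small search window in the second image, with bilinear interpolation (scipy's map_coordinates) providing the non-integer sample positions. The subset half-width, search radius, and step are illustrative assumptions, and zero-normalized cross-correlation stands in for whichever correlation criterion an implementation actually uses.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def subpixel_match(transition_img, second_img, point, half=10,
                   search_radius=2.0, step=0.25):
    """Locate `point` (x, y) of the excessive (transition) image in the
    second image by maximizing ZNCC over a small sub-pixel search grid."""
    x0, y0 = point
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    ref = map_coordinates(transition_img.astype(np.float64),
                          [ys + y0, xs + x0], order=1)
    ref = ref - ref.mean()
    best, best_offset = -np.inf, (0.0, 0.0)
    offsets = np.arange(-search_radius, search_radius + 1e-9, step)
    for dy in offsets:
        for dx in offsets:
            cur = map_coordinates(second_img.astype(np.float64),
                                  [ys + y0 + dy, xs + x0 + dx], order=1)
            cur = cur - cur.mean()
            score = (ref * cur).sum() / (np.linalg.norm(ref) * np.linalg.norm(cur) + 1e-12)
            if score > best:
                best, best_offset = score, (dx, dy)
    return (x0 + best_offset[0], y0 + best_offset[1]), best
```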
Step S15: based on the target pose change data, a relative position between the first position and the second position is determined to complete positioning of the image acquisition device at the second position.
The pose change of the image acquisition device between acquiring the first image at the first position and acquiring the second image at the second position can be determined from the target pose change data, and the relative position between the first position and the second position can then be determined from this pose change, so that the positioning is completed.
In the technical solution of the application, a first image and a second image formed by the projection device projecting a source image onto the reference surface are acquired, together with the inertial measurement data recorded while the image acquisition device transforms from the pose at the image acquisition view angle of the first image to the pose at the image acquisition view angle of the second image, and the excessive pose change data of the image acquisition device from the first position (corresponding to the image acquisition view angle of the first image) to the second position (corresponding to the image acquisition view angle of the second image) are obtained from the inertial measurement data. The excessive image generated from the excessive pose change data is similar to the second image, so the computation required by the digital image correlation algorithm when obtaining the target pose change data from the excessive image and the second image is reduced. The computational efficiency of the digital image correlation algorithm is therefore improved while the accuracy of a positioning method based on that algorithm is retained, so that the accuracy of the positioning method is ensured and its efficiency is improved at the same time.
The present application further proposes a positioning method of an image acquisition device. Referring to fig. 2, fig. 2 is a schematic flow chart of a second embodiment of the positioning method of the image acquisition device of the application. As shown in fig. 2, the positioning method includes:
step S21: a first image, a second image, and inertial measurement data are acquired.
Step S22: and obtaining excessive pose change data of the image acquisition equipment from the first position to the second position based on the inertial measurement data.
Step S23: an excessive image is generated based on the excessive pose change data and the first image.
Step S24: and obtaining target pose change data of the image acquisition equipment from the first position to the second position based on the excessive image and the second image.
Step S25: based on the target pose change data, a relative position between the first position and the second position is determined to complete positioning of the image acquisition device at the second position.
Steps S21 to S25 in the second embodiment correspond to steps S11 to S15 in the first embodiment, and are not described here.
In one embodiment, step S23 may specifically include:
based on the first image and the excessive pose change data, an excessive image is generated and a pixel matching relationship between the first image and the excessive image is obtained.
Step S24 may specifically include:
and obtaining a pixel matching relationship between the excessive image and the second image based on the excessive image and the second image.
And obtaining the pixel matching relationship between the first image and the second image based on the pixel matching relationship between the first image and the excessive image and the pixel matching relationship between the excessive image and the second image.
And obtaining target pose change data of the image acquisition equipment from the first position to the second position based on the pixel matching relation between the first image and the second image.
Specifically, in step S23, an excessive image predicted to be acquired by the image acquisition device at the second position can be generated based on the first image and the excessive pose change data of the device as it moves from the first position, where the first image is acquired, to the second position, where the second image is acquired, and the pixel matching relationship between the first image and the excessive image is obtained at the same time.
In step S24, feature matching may be performed on the excessive image and the second image based on the digital image correlation algorithm, so as to obtain a pixel matching relationship between the excessive image and the second image.
After the pixel matching relationship between the first image and the excessive image and the pixel matching relationship between the excessive image and the second image are obtained, the two can be combined to obtain the pixel matching relationship between the first image and the second image, that is, the respective positions of each corresponding group of pixel points in the first image and the second image are determined. The pose change of the image acquisition device between acquiring the first image and acquiring the second image can then be determined from the pixel matching relationship between the first image and the second image, and is recorded as the target pose change data.
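A toy sketch of this composition step is given below, assuming both matching relationships are stored as dictionaries keyed by pixel coordinates (a dense implementation would instead compose displacement fields with interpolation); the function and variable names are illustrative only.

```python
def compose_matches(first_to_transition, transition_to_second):
    """Compose two pixel matching relationships given as dictionaries mapping
    (x, y) in the source image to (x, y) in the destination image."""
    first_to_second = {}
    for p1, pt in first_to_transition.items():
        key = (round(pt[0]), round(pt[1]))  # nearest excessive-image pixel
        if key in transition_to_second:
            first_to_second[p1] = transition_to_second[key]
    return first_to_second
```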
In this way, the excessive pose change data can be obtained from the inertial measurement data, and an excessive image that is the same as or similar to the second image is generated from the excessive pose change data and the first image, which reduces the computation needed to match the features of the excessive image and the second image. The pixel matching relationship between the first image and the second image is then obtained through the two groups of pixel matching relationships, yielding the required target pose change data, which reflect the pose change of the image acquisition device as it moves from the first position to the second position. The digital image correlation algorithm thus obtains the target pose change data with both high accuracy and high efficiency, improving the accuracy and the efficiency of the positioning method at the same time.
In addition, based on the pixel matching relationship between the first image and the second image, the step of obtaining the target pose change data of the image acquisition device from the first position to the second position may specifically include:
and screening the pixel matching relation between the first image and the second image to delete the matched pixel points which do not meet the preset error condition in the pixel matching relation between the first image and the second image.
And obtaining target pose change data of the image acquisition equipment from the first position to the second position based on the pixel matching relation between the first image and the second image after screening processing.
In this way, the matched pixel points that do not meet the preset error condition can be deleted from the corresponding pixel matching relationship, so that the subsequent determination of the target pose change data is performed only on matched pixel points that meet the condition. This avoids deriving the target pose change data from obviously mismatched pixel points and improves the accuracy of the positioning method.
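The patent does not specify the preset error condition; the sketch below uses one plausible choice, discarding matched pairs whose displacement deviates from the median displacement by more than a pixel threshold. The threshold value is an assumption.

```python
import numpy as np

def screen_matches(pts1, pts2, max_deviation=3.0):
    """Drop matched pairs whose displacement deviates too far from the median.

    pts1, pts2: (N, 2) arrays of matched pixel coordinates in the first and
    second image."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    disp = pts2 - pts1
    residual = np.linalg.norm(disp - np.median(disp, axis=0), axis=1)
    keep = residual <= max_deviation
    return pts1[keep], pts2[keep]
```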
For example, when the target pose change data includes a target rotation matrix and a target translation matrix, the specific flow of obtaining the target pose change data of the image acquisition device from the first position to the second position based on the pixel matching relationship between the first image and the second image is as follows:
after the pixel matching relation and/or deformation information between the first image and the second image are obtained, a plurality of pixel points in the first image can be determined, a plurality of positions corresponding to the pixel points in the second image can be determined, and based on the positions of the pixel points in the first image and the second image, a first coordinate and a second coordinate of each pixel point can be determined, wherein the first coordinate is a two-dimensional coordinate of the pixel point in the first image, and the second coordinate is a two-dimensional coordinate of the pixel point in the second image.
For a group of corresponding pixel points, the target rotation matrix and the target translation matrix satisfy the following solving formulas:

z_1 x_1 = K (R_1 X + T_1)   (1)

z_2 x_2 = K (R_2 X + T_2)   (2)

where R_1 is the target rotation matrix of the first coordinate relative to the second coordinate, R_2 is the target rotation matrix of the second coordinate relative to the first coordinate, T_1 is the target translation matrix of the first coordinate relative to the second coordinate, T_2 is the target translation matrix of the second coordinate relative to the first coordinate, x_1 = (x_1, y_1) is the first coordinate, x_2 = (x_2, y_2) is the second coordinate, z_1 is the first scale factor, z_2 is the second scale factor, K is the intrinsic parameter matrix of the image acquisition device, and X is the spatial coordinate of the object point.

Since only the relative positional relationship between the first image and the second image is required, R_1 can be set to the identity matrix and T_1 to zero, so that formulas (1) and (2) can be combined into:

z_2 K^{-1} x_2 = z_1 R_2 K^{-1} x_1 + T_2   (3)

It can be seen that the only unknowns left in the formula are the two matrices R_2 and T_2, where R_2 is the rotation matrix parameterized by the three rotation angles α, β, γ (formula (4)) and

T_2 = [x  y  z]^T   (5)

Based on formulas (4) and (5), R_2 and T_2 can be constructed according to the principle that an object has six degrees of freedom in space, where α, β, γ, x, y, z are the three angles and the three spatial coordinates of the six degrees of freedom; R_2 and T_2 therefore contain 6 unknowns. For a group of corresponding pixel points, formula (3) can be constructed from the abscissas of the first coordinate and the second coordinate, and replacing the abscissa in formula (3) with the ordinate gives:

z_2 K^{-1} y_2 = z_1 R_2 K^{-1} y_1 + T_2   (6)
from the above, a set of first coordinates and second coordinates can be constructed to construct a set of formulas (3) and (6), so that at least three sets of first coordinates and second coordinates are needed to construct corresponding formulas (3) and (6) for solving in the case of an unknown quantity of 6.
Because each degree of freedom is in a nonlinear relation with matrix elements, corresponding formulas (3) and (6) can be constructed by adopting five groups of first coordinates and second coordinates of at least five pairs of pixel points, and the target rotation matrix and the target translation matrix are solved, so that the accuracy of a solving result is improved.
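The sketch below illustrates one way to carry out this solving step with nonlinear least squares, following formulas (3) and (6): the six pose unknowns are the angles α, β, γ and the translation components, the per-point scale factors z_1 are treated as extra unknowns (the first one is fixed to remove the overall scale, which is recovered later from the projection scales), and z_2 is eliminated with a cross product. The Euler-angle convention, initial values, and use of SciPy are assumptions, not the patent's prescription.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_pose(pts1, pts2, K, depth_guess=1.0):
    """Solve R2 (via angles alpha, beta, gamma) and T2 from matched pixel
    pairs pts1/pts2 ((N, 2) arrays, N >= 5), per formulas (3) and (6)."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    n = len(pts1)
    Kinv = np.linalg.inv(np.asarray(K, dtype=np.float64))
    x1n = (Kinv @ np.column_stack([pts1, np.ones(n)]).T).T  # K^-1 x_1 per pair
    x2n = (Kinv @ np.column_stack([pts2, np.ones(n)]).T).T  # K^-1 x_2 per pair

    def residuals(params):
        angles, t2 = params[:3], params[3:6]
        z1 = np.concatenate([[depth_guess], params[6:]])     # first depth fixed (gauge)
        r2 = Rotation.from_euler("xyz", angles).as_matrix()
        pred = (r2 @ (x1n * z1[:, None]).T).T + t2           # z1 R2 K^-1 x1 + T2
        return np.cross(x2n, pred).ravel()                   # zero when collinear with K^-1 x2

    x0 = np.concatenate([np.zeros(3), [0.0, 0.0, 0.1], np.full(n - 1, depth_guess)])
    sol = least_squares(residuals, x0)
    return Rotation.from_euler("xyz", sol.x[:3]).as_matrix(), sol.x[3:6]
```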
It should be noted that, if an object to be measured blocks the projection light path in the first image and/or the second image, so that the speckle is deformed by the occlusion, the corresponding pixel points selected for the above solving process may be pixel points within a target labeling area. The target labeling area is a pixel area in which the speckle is not deformed by the occlusion of the object to be measured; it may be an area identified by a preset identification algorithm or an area determined from a manual labeling result, which is not limited herein. The positioning method can therefore be applied to positioning the image acquisition device for any type of three-dimensional measurement method, and the ratio between the distances from the image acquisition device to the reference surface at different positions, as well as other required related data, can further be determined based on the positioning method, which is not described here.
Optionally, step S22 may specifically include:
and obtaining excessive pose change data of the image acquisition equipment from the first position to the second position based on gyroscope data, magnetometer data and accelerometer data in the inertial measurement data.
Specifically, the excessive pose change data includes an excessive rotation matrix and an excessive translation matrix.
Based on gyroscope data, magnetometer data and accelerometer data in the inertial measurement data, the step of obtaining excessive pose change data of the image acquisition device from the first position to the second position may specifically include:
and obtaining an excessive rotation matrix and an excessive translation matrix of the image acquisition device from the first position to the second position based on gyroscope data, magnetometer data and accelerometer data in the inertial measurement data.
The step of generating the excessive image and obtaining the pixel matching relationship between the first image and the excessive image based on the first image and the excessive pose change data may specifically include:
an over image is generated based on the over rotation matrix, the over translation matrix, and the first image.
A pixel matching relationship between the first image and the over image is determined based on the over rotation matrix, the over translation matrix, the first image, and the over image.
In this way, the excessive rotation matrix and the excessive translation matrix can be obtained from the gyroscope data, magnetometer data, and accelerometer data by a corresponding inertial measurement algorithm. Each pixel point in the first image is then moved according to the excessive rotation matrix and the excessive translation matrix to obtain the corresponding excessive image, and the positions of each pixel in the first image and in the excessive image are recorded to obtain the corresponding pixel matching relationship between the first image and the excessive image.
Still further, obtaining the excessive rotation matrix and the excessive translation matrix of the image acquisition device from the first position to the second position based on the gyroscope data, magnetometer data, and accelerometer data in the inertial measurement data includes:
and obtaining an excessive rotation matrix by adopting a preset filtering algorithm and a preset fusion algorithm based on the gyroscope data, the magnetometer data and the accelerometer data.
And obtaining an excessive translation matrix by adopting a preset integral algorithm based on the gyroscope data, the magnetometer data, the accelerometer data and the excessive rotation matrix.
Specifically, the gyroscope data, the magnetometer data, and the accelerometer data may be processed with a preset filtering algorithm and a preset fusion algorithm to obtain the excessive rotation matrix, and the excessive rotation matrix together with the gyroscope data, the magnetometer data, and the accelerometer data may then be processed with a preset integration algorithm to obtain the excessive translation matrix.
In this way, a relatively accurate excessive rotation matrix and excessive translation matrix can be obtained from the inertial measurement data combined with the preset algorithms, which improves the accuracy of the positioning method.
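The patent leaves the preset filtering, fusion, and integration algorithms unspecified; the following is a minimal sketch of one common combination: a complementary filter that blends gyroscope integration with an accelerometer/magnetometer tilt-and-heading estimate, followed by double integration of gravity-compensated acceleration. The sampling interval, blend factor, and axis conventions (world z pointing down) are assumptions.

```python
import numpy as np

def tilt_heading(accel, mag):
    """Body-to-world rotation from one accelerometer/magnetometer sample
    (world axes: north, east, down)."""
    down = -accel / (np.linalg.norm(accel) + 1e-12)
    east = np.cross(down, mag)
    east /= np.linalg.norm(east) + 1e-12
    north = np.cross(east, down)
    return np.vstack([north, east, down])           # rows = world axes in body coords

def excessive_pose_from_imu(gyro, accel, mag, dt, blend=0.98, g=9.81):
    """gyro/accel/mag: (N, 3) samples recorded between the two positions."""
    R = tilt_heading(accel[0], mag[0])              # world<-body at the first position
    R_first = R.copy()
    v = np.zeros(3)
    p = np.zeros(3)
    g_world = np.array([0.0, 0.0, g])               # gravity, world z pointing down
    for k in range(1, len(gyro)):
        wx, wy, wz = gyro[k] * dt
        dR = np.array([[1.0, -wz, wy],
                       [wz, 1.0, -wx],
                       [-wy, wx, 1.0]])             # first-order rotation increment
        R_gyro = R @ dR
        R_am = tilt_heading(accel[k], mag[k])
        u, _, vt = np.linalg.svd(blend * R_gyro + (1.0 - blend) * R_am)
        R = u @ vt                                   # blend and re-orthonormalize
        a_world = R @ accel[k] + g_world             # gravity-compensated acceleration
        v += a_world * dt                            # integrate to velocity
        p += v * dt                                  # integrate to position
    R_exc = R.T @ R_first                            # first-frame coords -> second-frame coords
    T_exc = -R.T @ p
    return R_exc, T_exc
```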
In one embodiment, the step S25 may specifically include:
the scale between the first image and the corresponding projection image is obtained and is recorded as a first scale, and the scale between the second image and the corresponding projection image is obtained and is recorded as a second scale.
Based on the first scale, the second scale and the target pose change data, determining the relative position and the corresponding actual length between the first position and the second position so as to complete the positioning of the image acquisition equipment at the second position.
Specifically, determining the relative position and the corresponding actual length between the first position and the second position based on the first scale, the second scale and the target pose change data to complete the positioning of the image acquisition device at the second position may specifically include:
based on the first scale, the second scale and the target pose change data, the relative position and the corresponding actual length between the first position and the second position are determined.
The actual coordinates of the first position are obtained, and the actual coordinates of the second position are determined based on the actual coordinates of the first position, the relative positions between the first position and the second position and the corresponding actual lengths, so that the positioning of the image acquisition equipment at the second position is completed.
In this way, the first scale and the second scale can be combined to determine the actual length corresponding to the relative position between the first position and the second position. If the actual coordinates of the first position in three-dimensional space are known, the actual coordinates of the second position can further be determined from the relative position between the first position and the second position, the actual length, and the actual coordinates of the first position, so that the positioning of the second position in actual coordinates is completed, improving the accuracy and applicability of the positioning method.
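A small sketch of this last step follows, with the caveat that how the metric scale factor is derived from the first and second scales depends on the projection geometry and is not detailed here; the frame-alignment assumption in the docstring is also not from the patent.

```python
import numpy as np

def locate_second_position(actual_coord1, R2, T2, metres_per_unit):
    """actual_coord1: known world coordinates of the first position.
    R2, T2: target pose change (first camera frame -> second camera frame).
    metres_per_unit: metric scale recovered from the first/second scales.

    The second camera centre expressed in the first camera frame is
    C2 = -R2^T T2; scaling it to metres and adding it to the first position
    gives the second position, assuming the first camera frame is aligned
    with the world axes."""
    C2 = -np.asarray(R2, dtype=float).T @ np.asarray(T2, dtype=float)
    relative = C2 * metres_per_unit
    actual_length = float(np.linalg.norm(relative))
    return np.asarray(actual_coord1, dtype=float) + relative, actual_length
```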
In the technical solution of the application, a first image and a second image formed by the projection device projecting a source image onto the reference surface are acquired, together with the inertial measurement data recorded while the image acquisition device transforms from the pose at the image acquisition view angle of the first image to the pose at the image acquisition view angle of the second image, and the excessive pose change data of the image acquisition device from the first position (corresponding to the image acquisition view angle of the first image) to the second position (corresponding to the image acquisition view angle of the second image) are obtained from the inertial measurement data. The excessive image generated from the excessive pose change data is similar to the second image, so the computation required by the digital image correlation algorithm when obtaining the target pose change data from the excessive image and the second image is reduced. The computational efficiency of the digital image correlation algorithm is therefore improved while the accuracy of a positioning method based on that algorithm is retained, so that the accuracy of the positioning method is ensured and its efficiency is improved at the same time.
The present application further proposes a projection system, fig. 3 is a schematic structural diagram of an embodiment of the projection system of the present application, as shown in fig. 3, where the projection system 30 may include: an acquisition module 31 and a processing module 32.
The acquisition module 31 is configured to acquire a first image, a second image, and inertial measurement data, where the first image and the second image are images formed by the projection device projecting a source image onto the reference surface, the image acquisition view angles corresponding to the first image and the second image are different, and the inertial measurement data represent the inertial changes of the image acquisition device as it transforms from the pose at the image acquisition view angle of the first image to the pose at the image acquisition view angle of the second image.
The processing module 32 is configured to: obtain excessive pose change data of the image acquisition device from a first position to a second position based on the inertial measurement data, where the first position is the position of the image acquisition device at the image acquisition view angle of the first image and the second position is its position at the image acquisition view angle of the second image; generate an excessive image based on the excessive pose change data and the first image; obtain target pose change data of the image acquisition device from the first position to the second position based on the excessive image and the second image; and determine the relative position between the first position and the second position based on the target pose change data, so as to complete the positioning of the image acquisition device at the second position.
In the technical solution of the application, a first image and a second image formed by the projection device projecting a source image onto the reference surface are acquired, together with the inertial measurement data recorded while the image acquisition device transforms from the pose at the image acquisition view angle of the first image to the pose at the image acquisition view angle of the second image, and the excessive pose change data of the image acquisition device from the first position (corresponding to the image acquisition view angle of the first image) to the second position (corresponding to the image acquisition view angle of the second image) are obtained from the inertial measurement data. The excessive image generated from the excessive pose change data is similar to the second image, so the computation required by the digital image correlation algorithm when obtaining the target pose change data from the excessive image and the second image is reduced. The computational efficiency of the digital image correlation algorithm is therefore improved while the accuracy of a positioning method based on that algorithm is retained, so that the accuracy of the positioning method is ensured and its efficiency is improved at the same time.
The present application further proposes a computer readable storage medium, fig. 4 is a schematic structural diagram of an embodiment of the computer readable storage medium of the present application, as shown in fig. 4, where the computer readable storage medium 40 stores program instructions 41, and the program instructions 41 implement the positioning method in the above embodiment when executed by a processor (not shown).
The computer readable storage medium 40 of the present embodiment may be, but is not limited to, a USB flash drive, an SD card, a PD optical drive, a removable hard disk, a high-capacity floppy drive, a flash memory, a multimedia memory card, a server, a storage unit in an FPGA or an ASIC, or the like.
In the technical solution of the application, a first image and a second image formed by the projection device projecting a source image onto the reference surface are acquired, together with the inertial measurement data recorded while the image acquisition device transforms from the pose at the image acquisition view angle of the first image to the pose at the image acquisition view angle of the second image, and the excessive pose change data of the image acquisition device from the first position (corresponding to the image acquisition view angle of the first image) to the second position (corresponding to the image acquisition view angle of the second image) are obtained from the inertial measurement data. The excessive image generated from the excessive pose change data is similar to the second image, so the computation required by the digital image correlation algorithm when obtaining the target pose change data from the excessive image and the second image is reduced. The computational efficiency of the digital image correlation algorithm is therefore improved while the accuracy of a positioning method based on that algorithm is retained, so that the accuracy of the positioning method is ensured and its efficiency is improved at the same time.
In the description of the present application, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" is at least two, such as two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein may be considered an ordered listing of executable instructions for implementing logical functions, and can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device (which can be a personal computer, server, network device, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions). For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the patent application, and all equivalent structures or equivalent processes using the descriptions and the contents of the present application or other related technical fields are included in the scope of the patent application.

Claims (10)

1. A method of positioning an image acquisition device, comprising:
acquiring a first image, a second image and inertial measurement data, wherein the first image and the second image are images formed by projecting a source image to a reference surface by projection equipment, the image acquisition view angles corresponding to the first image and the second image are different, and the inertial measurement data represent the inertial change condition of the image acquisition equipment in the process of transforming from the pose of the image acquisition view angle of the first image to the pose of the image acquisition view angle of the second image;
acquiring excessive pose change data of the image acquisition device from a first position to a second position based on the inertial measurement data, wherein the first position is the position of the image acquisition device under the image acquisition view angle of the first image, and the second position is the position of the image acquisition device under the image acquisition view angle of the second image;
generating an excessive image based on the excessive pose change data and the first image;
obtaining target pose change data of the image acquisition equipment from the first position to the second position based on the excessive image and the second image;
and determining a relative position between the first position and the second position based on the target pose change data to complete positioning of the image acquisition device at the second position.
2. The positioning method of claim 1, wherein the generating an excessive image based on the excessive pose change data and the first image comprises:
generating an excessive image based on the first image and the excessive pose change data and obtaining a pixel matching relationship between the first image and the excessive image;
the obtaining, based on the excessive image and the second image, target pose change data of the image acquisition device from the first position to the second position includes:
obtaining a pixel matching relationship between the excessive image and the second image based on the excessive image and the second image;
obtaining a pixel matching relationship between the first image and the second image based on the pixel matching relationship between the first image and the excessive image and the pixel matching relationship between the excessive image and the second image;
and obtaining target pose change data of the image acquisition equipment from the first position to the second position based on the pixel matching relation between the first image and the second image.
3. The positioning method according to claim 2, wherein the obtaining excessive pose change data of the image acquisition apparatus from the first position to the second position based on the inertial measurement data includes:
and obtaining excessive pose change data of the image acquisition equipment from the first position to the second position based on gyroscope data, magnetometer data and accelerometer data in the inertial measurement data.
4. A positioning method according to claim 3, wherein the excessive pose change data comprises an excessive rotation matrix and an excessive translation matrix;
the obtaining the excessive pose change data of the image acquisition device from the first position to the second position based on the gyroscope data, the magnetometer data and the accelerometer data in the inertial measurement data comprises the following steps:
obtaining the excessive rotation matrix and the excessive translation matrix of the image acquisition device from the first position to the second position based on gyroscope data, magnetometer data and accelerometer data in the inertial measurement data;
the generating an excessive image based on the first image and the excessive pose change data and obtaining a pixel matching relationship between the first image and the excessive image includes:
generating an excessive image based on the excessive rotation matrix, the excessive translation matrix, and the first image;
determining a pixel matching relationship between the first image and the excessive image based on the excessive rotation matrix, the excessive translation matrix, the first image, and the excessive image.
5. The positioning method of claim 4, wherein the obtaining the excessive rotation matrix and the excessive translation matrix of the image acquisition device from the first position to the second position based on gyroscope data, magnetometer data, accelerometer data in the inertial measurement data comprises:
based on the gyroscope data, the magnetometer data and the accelerometer data, a preset filtering algorithm and a preset fusion algorithm are adopted to obtain the excessive rotation matrix;
and obtaining the excessive translation matrix by adopting a preset integration algorithm based on the gyroscope data, the magnetometer data, the accelerometer data and the excessive rotation matrix.
6. The positioning method according to any one of claims 2 to 5, wherein the obtaining target pose change data of the image capturing device from the first position to the second position based on the pixel matching relationship between the first image and the second image includes:
screening the pixel matching relation between the first image and the second image to delete matched pixel points which do not meet a preset error condition in the pixel matching relation between the first image and the second image;
and obtaining target pose change data of the image acquisition equipment from the first position to the second position based on the pixel matching relation between the first image and the second image after the screening processing.
7. The positioning method according to any one of claims 1 to 5, wherein the determining a relative position between the first position and the second position based on the target pose change data to complete positioning of the image capturing device at the second position includes:
acquiring a scale between the first image and the corresponding projection image, and recording the scale as a first scale, and acquiring a scale between the second image and the corresponding projection image, and recording the scale as a second scale;
and determining the relative position and the corresponding actual length between the first position and the second position based on the first scale, the second scale and the target pose change data so as to complete the positioning of the image acquisition equipment at the second position.
8. The positioning method according to claim 7, wherein determining the relative position and the corresponding actual length between the first position and the second position based on the first scale, the second scale, and the target pose change data to complete the positioning of the image capturing device at the second position includes:
determining a relative position and a corresponding actual length between the first position and the second position based on the first scale, the second scale, and the target pose change data;
and acquiring the actual coordinates of the first position, and determining the actual coordinates of the second position based on the actual coordinates of the first position, the relative positions between the first position and the second position and the corresponding actual lengths so as to finish the positioning of the image acquisition equipment at the second position.
9. A projection system, comprising:
an acquisition module for: acquiring a first image, a second image and inertial measurement data, wherein the first image and the second image are images formed by projecting a source image to a reference surface by projection equipment, the image acquisition view angles corresponding to the first image and the second image are different, and the inertial measurement data represent the inertial change condition of the image acquisition equipment in the process of transforming from the pose of the image acquisition view angle of the first image to the pose of the image acquisition view angle of the second image;
a processing module for: acquiring excessive pose change data of the image acquisition device from a first position to a second position based on the inertial measurement data, wherein the first position is the position of the image acquisition device under the image acquisition view angle of the first image, and the second position is the position of the image acquisition device under the image acquisition view angle of the second image; generating an excessive image based on the excessive pose change data and the first image; obtaining target pose change data of the image acquisition equipment from the first position to the second position based on the excessive image and the second image; and determining a relative position between the first position and the second position based on the target pose change data to complete positioning of the image acquisition device at the second position.
10. A computer readable storage medium storing program instructions which, when executed by a processor, implement the method of any one of claims 1 to 8.
CN202210981960.5A 2022-08-16 2022-08-16 Positioning method of image acquisition equipment, projection system and storage medium Pending CN117635702A (en)

Priority Applications (1)

Application Number: CN202210981960.5A | Priority Date: 2022-08-16 | Filing Date: 2022-08-16 | Title: Positioning method of image acquisition equipment, projection system and storage medium (CN117635702A)

Applications Claiming Priority (1)

Application Number: CN202210981960.5A | Priority Date: 2022-08-16 | Filing Date: 2022-08-16 | Title: Positioning method of image acquisition equipment, projection system and storage medium (CN117635702A)

Publications (1)

Publication Number: CN117635702A | Publication Date: 2024-03-01

Family

ID=90029167

Family Applications (1)

Application Number: CN202210981960.5A | Status: Pending | Publication: CN117635702A | Title: Positioning method of image acquisition equipment, projection system and storage medium

Country Status (1)

Country Link
CN (1) CN117635702A (en)

Legal Events

Date Code Title Description
PB01 Publication