CN115482275A - Position parameter acquisition method, device, equipment and medium - Google Patents

Position parameter acquisition method, device, equipment and medium

Info

Publication number
CN115482275A
Authority
CN
China
Prior art keywords: algorithm, image, target, position parameter, error value
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110603476.4A
Other languages
Chinese (zh)
Inventor
孙曦
郭亨凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202110603476.4A
Publication of CN115482275A

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T 7/73: Image analysis; determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 2207/10016: Indexing scheme for image analysis or image enhancement; image acquisition modality (video; image sequence)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the disclosure relate to a method, an apparatus, a device and a medium for acquiring position parameters. The method comprises: calculating candidate position parameters of a target image in continuous multi-frame images according to a preset image algorithm, the candidate position parameters being scale-free position parameters; acquiring the inertial acceleration of the target image, and determining an algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameters; and correcting the candidate position parameters according to the algorithm error value to obtain target position parameters, the target position parameters being position parameters with a real scale. The scheme thus converts the scale-free position parameters calculated by the preset image algorithm into real-world coordinates, provides technical support for scenarios that require real pixel positions, and, because the position parameters are solved with the image algorithm and the inertial acceleration only loosely coupled, offers high application flexibility.

Description

Position parameter acquisition method, device, equipment and medium
Technical Field
The present disclosure relates to the field of communications terminal technologies, and in particular, to a method, an apparatus, a device, and a medium for acquiring a location parameter.
Background
With the popularization of mobile terminals such as smartphones, taking pictures through the camera application of a mobile terminal has become common, and users' demands on the captured pictures have diversified accordingly. For example, in order to know the real positions of the pixel points in a picture, there is a demand for acquiring the real position of the camera that captured it.
In the related art, to meet the demand for acquiring the real position of the camera, an image is usually captured by a mobile terminal equipped with both a depth camera and a color camera; the conversion relationship between pixel coordinates and real-world coordinates is calculated by aligning the depth information with the color pixel information in the image, and the real position of the camera is derived based on that conversion relationship.
However, this way of acquiring the true position of the camera has two drawbacks: on the one hand, the mobile terminal must be equipped with both a depth camera and a color camera, which is highly restrictive; on the other hand, multiple computations such as pixel alignment are required, which limits both accuracy and efficiency.
Disclosure of Invention
In order to solve the above technical problem, or at least partially solve it, the present disclosure provides a position parameter acquisition method, including: calculating candidate position parameters of a target image in continuous multi-frame images according to a preset image algorithm, the candidate position parameters being scale-free position parameters; acquiring the inertial acceleration of the target image, and determining an algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameters; and correcting the candidate position parameters according to the algorithm error value to obtain target position parameters, the target position parameters being position parameters with a scale.
The embodiments of the present disclosure further provide a position parameter acquisition apparatus, including: a calculation module configured to calculate candidate position parameters of a target image in continuous multi-frame images according to a preset image algorithm, the candidate position parameters being scale-free position parameters; an acquisition module configured to acquire the inertial acceleration of the target image and determine an algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameters; and a correction module configured to correct the candidate position parameters according to the algorithm error value to obtain target position parameters, the target position parameters being position parameters with a scale.
An embodiment of the present disclosure further provides an electronic device, which includes: a processor; a memory for storing the processor-executable instructions; the processor is used for reading the executable instruction from the memory and executing the instruction to realize the position parameter obtaining method provided by the embodiment of the disclosure.
The embodiment of the present disclosure further provides a computer-readable storage medium, where the storage medium stores a computer program, and the computer program is used to execute the position parameter obtaining method provided by the embodiment of the present disclosure.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
according to the position parameter acquisition scheme provided by the embodiments of the present disclosure, candidate position parameters of a target image in continuous multi-frame images are calculated according to a preset image algorithm, the candidate position parameters being scale-free position parameters; the inertial acceleration of the target image is then obtained, and an algorithm error value of the preset image algorithm is determined according to the inertial acceleration and the candidate position parameters; finally, the candidate position parameters are corrected according to the algorithm error value to obtain the target position parameters, which are position parameters with a scale. The scheme thus converts the scale-free position parameters calculated by the preset image algorithm into real-world coordinates, provides technical support for scenarios that require real pixel positions, and, because the position parameters are solved with the image algorithm and the inertial acceleration only loosely coupled, offers high application flexibility.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of a position parameter obtaining method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a location parameter acquisition application scenario provided in the embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another position parameter acquiring method according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of another position parameter obtaining method according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of another location parameter acquisition application scenario provided in the embodiment of the present disclosure;
fig. 6 is a schematic diagram of another location parameter acquisition application scenario provided in the embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a position parameter obtaining apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that references to "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand that they should be read as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
In order to solve the problems noted in the background, namely the low accuracy of calculating the real position of the camera and the inflexible application, embodiments of the present disclosure provide a position parameter acquisition method that can efficiently and accurately calculate the real position parameters of the camera. The camera position parameter in the embodiments of the present disclosure may be any parameter indicating the shooting position of the camera in a real-world coordinate system, including but not limited to camera world coordinates and camera pose, or camera rotation parameters and camera translation parameters.
The method is described below with reference to specific examples.
Fig. 1 is a schematic flowchart of a position parameter acquisition method according to an embodiment of the present disclosure. The method may be executed by a position parameter acquisition apparatus, which may be implemented in software and/or hardware and is generally integrated in an electronic device. As shown in fig. 1, the method includes:
Step 101, calculating candidate position parameters of a target image in continuous multi-frame images according to a preset image algorithm; the candidate position parameters are scale-free position parameters.
The preset image algorithm includes, but is not limited to, visual-inertial odometry (VIO), simultaneous localization and mapping (SLAM), and any other algorithm capable of computing candidate position parameters of an image.
In this embodiment, the candidate position parameter of the target image in the continuous multi-frame images is calculated according to a preset image algorithm, wherein the target image may be any one of the continuous multi-frame images.
Naturally, different preset image algorithms compute the candidate position parameters of the target image in the continuous multi-frame images in different ways:
in some possible embodiments, when the predetermined image algorithm is a SLAM algorithm, since the camera position parameter is calculated cumulatively based on the previous frame image, in this embodiment, a plurality of pairs of feature points (for example, 4 pairs of feature points) of the target image matching the reference image are determined, where the reference image is the previous frame image adjacent to the target image in the consecutive multi-frame images, and the target image in this embodiment is not the first image in the consecutive multi-pin images, and then candidate position parameters of the target image frame are calculated according to the coordinates of the plurality of pairs of feature points and the camera internal parameters, that is, according to a homography matrix calculation manner, a plurality of equations are constructed according to the coordinates of the plurality of pairs of feature points to calculate a position mapping relationship between the world coordinate system and the pixel coordinate system of the plurality of pairs of feature points, and the position mapping relationship is the candidate position parameter.
In other possible embodiments, the pixel coordinates of feature points of the target image whose standard world coordinates have been pre-calibrated may be calculated according to an image processing algorithm, and the candidate position parameters obtained from the conversion between those pixel coordinates and the standard world coordinates.
It should be emphasized that the candidate position parameters of the target image calculated by the preset image algorithm are limited by that algorithm: they are relative position parameters without a real scale, not position parameters in real-world coordinates.
For example, suppose the preset image algorithm is the SLAM algorithm and the position parameter is the camera pose, i.e. the rotation parameter R and the translation (offset) parameter t, and the scale-free camera pose of the target image is calculated as R = 10 and t = 2. For t, the real scale represented by the value 2 is unknown; that is, the value of t in the world coordinate system cannot be obtained from it. The camera pose of the target image calculated by the SLAM algorithm is therefore only a relative pose within the algorithm, not the real pose in the world coordinate system, and the candidate position parameters need further processing.
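To make the scale ambiguity concrete, the following is a minimal Python/OpenCV sketch of the homography-based candidate pose computation described above; it is an illustration, not the patent's implementation. The intrinsic matrix K and the four matched feature-point pairs are invented placeholders that would in practice come from camera calibration and a feature matcher.

```python
import numpy as np
import cv2

# Assumed camera intrinsics (fx, fy, cx, cy are placeholder values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Four illustrative feature-point pairs: pixel coordinates in the reference
# (previous) frame and in the target frame.
pts_ref = np.array([[100, 120], [400, 110], [390, 300], [110, 310]], dtype=np.float64)
pts_tgt = np.array([[112, 125], [412, 118], [398, 309], [119, 322]], dtype=np.float64)

# Homography between the two frames, then decomposition into candidate R, t.
H, _ = cv2.findHomography(pts_ref, pts_tgt)
n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)

# Each recovered translation is defined only up to an unknown scale factor:
# it is a "candidate position parameter" in the sense above, not a metric offset.
for R, t in zip(rotations, translations):
    print("R =\n", R, "\nt (scale-free) =", t.ravel())
```

Every translation the decomposition returns is scale-free, which is exactly why the correction of steps 102 and 103 below is needed.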
Step 102, acquiring the inertial acceleration of the target image, and determining an algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameter.
The inertial acceleration may be one or more of the three-dimensional accelerations of speed (X dimension), direction (Y dimension) and gravity (Z dimension) measured, at the moment the target image is captured, by three sensors of the mobile terminal: the accelerometer, the gyroscope and the magnetometer. It should be emphasized that even when the inertial acceleration in this embodiment is only one of these three dimensions, the real-world coordinates of the pixel points can still be obtained, and the algorithm efficiency is high.
It should be understood that the inertial acceleration is real acceleration data measured by a sensor in the real world, and therefore, if the inertial acceleration can be aligned with a preset image algorithm, it is obvious that a real position parameter corresponding to the candidate position parameter can be derived.
Therefore, in the embodiment of the present disclosure, an algorithm error value of the preset image algorithm is obtained according to the inertial acceleration and the candidate position parameter, and the algorithm error value may be understood as a difference value that the preset image algorithm needs to compensate when being aligned to the inertial acceleration.
Of course, in an embodiment of the present disclosure, to further ensure the accuracy of the obtained algorithm error, the algorithm error value may also be calculated from multiple target images in the continuous multi-frame images.
In this embodiment, the mean of all the algorithm error values corresponding to the multiple target images is calculated, and this error mean is then used as the algorithm error value for correcting the first candidate world coordinate, so that the first candidate world coordinate can be further corrected according to the error mean.
Step 103, correcting the candidate position parameters according to the algorithm error value to obtain target position parameters, wherein the target position parameters are position parameters with a scale.
In this embodiment, since the inertial acceleration carries a real scale, the algorithm error value captures the corresponding scale difference; the candidate position parameters are therefore corrected according to the algorithm error value to obtain the target position parameters, i.e. the candidate position parameters are aligned from the image coordinates of the preset image algorithm to the world coordinate system.
For example, when the correction is a product correction of the algorithm error value and the candidate position parameter: with t = 2 without scale, as above, and an algorithm error value of 10, the scaled value of t in the obtained target position parameter is 2 × 10 = 20.
Specifically, when the algorithm error value is an alignment ratio of the candidate position parameter from image coordinates to the world coordinate system, the correction computes the product of the algorithm error value and the candidate position parameter; when the algorithm error value is an alignment difference from image coordinates to the world coordinate system, the correction computes the sum of the algorithm error value and the candidate position parameter.
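As a trivial sketch of the two correction forms just described (the function names are illustrative, not from the patent):

```python
def correct_by_product(candidate: float, error_value: float) -> float:
    # Error value understood as an alignment ratio (image-to-world scale).
    return error_value * candidate

def correct_by_sum(candidate: float, error_value: float) -> float:
    # Error value understood as an alignment offset (image-to-world difference).
    return candidate + error_value

# Running example from above: t = 2 without scale, ratio-type error value 10.
print(correct_by_product(2.0, 10.0))  # 20.0, the scaled value of t
```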
It should be emphasized that the alignment and the image-algorithm computation in this embodiment are separated, i.e. loosely coupled. A loosely coupled algorithm is convenient to apply and, unlike a tightly coupled scheme, does not depend on the robustness of the image algorithm. For example, as shown in fig. 2, when the preset image algorithm is a VIO algorithm, the output of the VIO algorithm system can be connected directly to the position parameter acquisition system of this embodiment (the world coordinates of pixel points can then be computed from the target position parameters). Even if the VIO algorithm's results diverge, the position parameter acquisition system of this embodiment does not depend on the algorithm's internal parameters and can still accurately obtain the world coordinates of pixel points in the world coordinate system.
In summary, the position parameter acquisition method of the embodiments of the disclosure calculates the candidate position parameters of a target image in continuous multi-frame images according to a preset image algorithm, the candidate position parameters being scale-free; obtains the inertial acceleration of the target image and determines the algorithm error value of the preset image algorithm from the inertial acceleration and the candidate position parameters; and finally corrects the candidate position parameters according to the algorithm error value to obtain the target position parameters, which carry a real scale. This converts the scale-free position parameters calculated by the preset image algorithm into real-world coordinates, provides technical support for scenarios requiring real pixel positions, and, by solving for the position parameters with the image algorithm and the inertial acceleration only loosely coupled, offers high application flexibility.
It should be noted that the manner of obtaining the algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameters differs across application scenarios, as the following examples show:
example one:
In the present example, as shown in fig. 3, obtaining the algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameter includes:
and 301, acquiring a corresponding visual speed according to the pose coordinate parameters in the candidate position parameters.
It can be understood that the pose coordinate parameter is the relative displacement of the target image frame with respect to the first frame image; for example, it may be the parameter t in the above embodiment. The corresponding visual velocity can therefore be obtained according to the pose coordinate parameter among the candidate position parameters.
For example, the pose coordinate parameter may be differentiated and the resulting velocity taken as the visual velocity; for another example, the pose coordinate parameters may be input into a preset deep learning model and the visual velocity taken from the model's output.
Step 302, acquiring an algorithm error value of the preset image algorithm according to the visual velocity and the inertial acceleration.
In this embodiment, the inertial acceleration data and the visual velocity may be aligned in a velocity dimension, so that an algorithm error of a preset image algorithm may be obtained.
In some possible embodiments, the inertial acceleration data and the visual velocity are aligned in the velocity dimension. In this case the inertial acceleration is integrated; since the integral of acceleration is velocity, this yields the inertial navigation velocity of the target image, and the ratio of the inertial navigation velocity to the visual velocity gives the algorithm error value.
Further, in the present example, a product of the algorithm error value and the candidate position parameter is calculated to obtain the target position parameter.
Taking the SLAM algorithm as the preset algorithm, with the inertial acceleration data being the X-dimension acceleration: run the SLAM algorithm to obtain the scale-free camera pose of the target image, and differentiate the pose coordinate parameter of that pose to obtain the visual velocity v_img of the target image; then integrate the inertial acceleration to obtain the inertial navigation velocity v_imu. Aligning the two velocities gives v_imu = s × v_img, with s serving as the algorithm error value, so the target position parameter is obtained by multiplying s with the candidate position parameter.
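A minimal numpy sketch of this velocity-dimension alignment; the frame interval, accelerometer samples and SLAM positions below are synthetic stand-ins, and the least-squares ratio over all frames is one way to realize the error-averaging idea mentioned above.

```python
import numpy as np

dt = 0.033                                    # assumed frame interval in seconds
a_imu = np.array([0.5, 0.52, 0.49, 0.51])     # X-dimension inertial accelerations
t_slam = np.array([0.0, 0.010, 0.021, 0.033, 0.046])  # scale-free SLAM positions

# Integrate acceleration to the inertial navigation velocity v_imu (assuming
# zero initial velocity), and differentiate the scale-free positions to the
# visual velocity v_img.
v_imu = np.cumsum(a_imu) * dt
v_img = np.diff(t_slam) / dt

# Solve v_imu = s * v_img for s in the least-squares sense over all frames.
s = float(np.dot(v_img, v_imu) / np.dot(v_img, v_img))

# Correct the candidate position parameters: scale-free -> real scale.
t_scaled = s * t_slam
```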
In other possible embodiments, the inertial acceleration data and the visual velocity are aligned in the acceleration dimension. Since the derivative of velocity is acceleration, the visual velocity is differentiated to obtain the corresponding visual acceleration, and the ratio of the inertial acceleration to the visual acceleration gives the algorithm error value.
Further, in this example, the product of the algorithm error value and the candidate position parameter is calculated to obtain the target position parameter.
Taking the SLAM algorithm as the preset algorithm, with the inertial acceleration data being the X-dimension acceleration a_imu: run the SLAM algorithm to obtain the scale-free camera pose of the target image, obtain the visual velocity v_img of the target image from the pose coordinate parameter of that pose, and differentiate v_img to obtain the visual acceleration a_img. Aligning the two accelerations gives a_imu = s × a_img, with s serving as the algorithm error value, so the target position parameter is obtained by multiplying s with the candidate position parameter.
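The acceleration-dimension variant, again as a hedged sketch with synthetic values:

```python
import numpy as np

dt = 0.033
v_img = np.array([0.30, 0.33, 0.35, 0.38])    # scale-free visual velocities
a_imu = np.array([9.1, 6.1, 9.1])             # measured inertial accelerations

# Differentiate the visual velocity to the visual acceleration a_img, then
# average the per-frame ratios a_imu / a_img as the error value s.
a_img = np.diff(v_img) / dt
s = float(np.mean(a_imu / a_img))             # a_imu = s * a_img
```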
Example two:
in this embodiment, a depth model is obtained in advance by learning according to a large amount of sample data, the input of the depth model is the inertial acceleration and the candidate position parameter, and the output is the algorithm error value, so the inertial acceleration and the candidate position parameter can be input into the depth model to obtain the algorithm error value output by the depth model.
Here the algorithm error value can be understood as the alignment difference of the candidate position parameter from image coordinates to the world coordinate system, and the first candidate world coordinate is then corrected by adding the algorithm error value to it.
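The patent does not specify the depth model's architecture. Purely to illustrate the interface (inertial acceleration plus candidate parameter in, scalar error value out), a tiny PyTorch regressor might look as follows; the layer sizes are assumptions, and the model would still have to be trained on the sample data described above.

```python
import torch
import torch.nn as nn

# Illustrative regressor: 3-D inertial acceleration + 1 candidate parameter
# in, scalar algorithm error value out. Untrained here.
error_model = nn.Sequential(
    nn.Linear(4, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

accel = torch.tensor([0.5, 0.1, 9.8])     # example inertial acceleration
candidate_t = torch.tensor([2.0])         # example scale-free position parameter
error_value = error_model(torch.cat([accel, candidate_t]))  # untrained output
```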
In summary, the position parameter acquisition method of the embodiments of the disclosure obtains the algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameters; by loosely coupling the calculation of the algorithm error value with the calculation of the preset image algorithm, it improves the flexibility of application when obtaining the position parameters.
Based on the above embodiments, the position parameters of the camera at the real scale of the world coordinate system can be obtained, so the real coordinates of pixel points can be determined from these real-scale position parameters. For example, when the target position parameter is the camera pose, the world coordinate of a pixel point can be obtained by multiplying the pixel coordinate with the camera intrinsics and the camera pose; since the camera pose in this embodiment carries a scale in the real-world coordinate system, the real-scale world coordinates of pixel points can be computed directly from it. This, in turn, can serve scene applications such as size measurement and three-dimensional modeling.
In one embodiment of the present disclosure, the method may be applied to size measurement. As shown in fig. 4, the method further comprises:
step 401, obtaining a first image coordinate of a first pixel point in a target image, and obtaining a first world coordinate according to a target position parameter and the first image coordinate.
Since the target position parameter indicates the real shooting position of the camera (it corresponds, directly or indirectly, to the camera pose), the first world coordinate of the first pixel point in the real world can be obtained according to the target position parameter and the first image coordinate.
For example, when the target position parameters are the camera rotation parameter and the camera translation parameter, i.e. the camera extrinsics, the product of the first pixel point's coordinate with the camera extrinsics and the camera intrinsics is computed to obtain the corresponding first world coordinate.
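A numpy sketch of this pixel-to-world conversion under one common convention, x_pix ~ K(R·X_world + t), anticipating steps 402 and 403 below. Note that a single pixel fixes only a viewing ray, so the sketch assumes the depth d along that ray is known (for example, from the scaled reconstruction); K, R, t and the pixel values are placeholders.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],    # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # illustrative scaled extrinsics
t = np.array([0.0, 0.0, 20.0])

def pixel_to_world(u, v, d):
    # Back-project the pixel to a camera-frame point at depth d, then
    # invert the extrinsics to obtain the world-frame point.
    p_cam = d * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    return np.linalg.inv(R) @ (p_cam - t)

a2 = pixel_to_world(300.0, 200.0, 20.0)     # first world coordinate
b2 = pixel_to_world(348.0, 200.0, 20.0)     # a second world coordinate, cf. step 402
real_dist = float(np.linalg.norm(a2 - b2))  # real distance, cf. step 403
```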
Step 402, obtaining a second image coordinate of at least one second pixel point in the target image, and obtaining a second world coordinate according to the target position parameter and the second image coordinate.
A second pixel point may be any pixel point other than the first pixel point and may be selected by a user trigger.
Similarly, a second image coordinate of at least one second pixel point in the target image is obtained, and a second world coordinate is obtained according to the target position parameter and the second image coordinate.
Step 403, when there is at least one second pixel, determining a real distance between the first pixel and the second pixel according to the first world coordinate and the second world coordinate.
When there is one second pixel point, after its real-scale second world coordinate is obtained, the distance between the first and second pixel points in the world coordinate system is determined according to the first world coordinate and the second world coordinate, so that the size between any two pixel points on the target image can be measured.
For example, as shown in fig. 5, suppose the target image contains a ruler, the first pixel point at the edge of a scale mark is A, and the second pixel point is B; the preset image algorithm gives A the first image coordinate a1 and the first world coordinate a2, and B the second image coordinate b1 and the second world coordinate b2. A distance of 0.03 computed directly from the image coordinates a1 and b1 is clearly not the actual distance between the two points; in this embodiment, the real distance between the first and second pixel points, 2 cm, is obtained by computing the distance between the world coordinates a2 and b2.
In another embodiment of the present disclosure, the method may be applied to three-dimensional modeling. As shown in fig. 4, the method further comprises:
and 404, when at least one second pixel point is multiple, constructing a three-dimensional model corresponding to the first pixel point and the multiple second pixel points according to the first world coordinate and the multiple second world coordinates.
The plurality of second pixel points may be pixel points, other than the first pixel point, belonging to the same object in the target image; their selection may be triggered by the user, or the contour of the object may be recognized and its corner points taken as the corresponding second pixel points.
Similarly, the world coordinates of the second pixel points calculated directly by the preset image algorithm are relative coordinates, and therefore need to be corrected according to the algorithm error value to obtain the second world coordinates of the second pixel points.
In this embodiment, after the real-scale second world coordinates of the plurality of second pixel points are obtained, the pose relationship of the first pixel point and the second pixel points in the world coordinate system is determined according to the first world coordinate and the second world coordinates, so that the three-dimensional model can be constructed. Of course, if the distances between the first pixel point and the second pixel points are large, the three-dimensional model can also be constructed with proportional scaling.
For example, as shown in fig. 6, when the target image contains a cube, the acquired first pixel point and the plurality of second pixel points correspond to the vertices of the cube (the black points in the figure); after the world coordinates corresponding to these pixel points are acquired, the three-dimensional model can be constructed from them.
In summary, the position parameter obtaining method of the embodiment of the disclosure can obtain real world coordinates of a plurality of pixel points in an image, and provides technical support for scene applications such as real size measurement and three-dimensional modeling between the pixel points.
Fig. 7 is a schematic structural diagram of a position parameter acquisition apparatus provided in an embodiment of the present disclosure; the apparatus may be implemented in software and/or hardware and is generally integrated in an electronic device. As shown in fig. 7, the apparatus includes: a calculation module 710, an acquisition module 720, and a correction module 730, wherein,
the calculation module 710 is configured to calculate candidate position parameters of a target image in continuous multi-frame images according to a preset image algorithm; the candidate position parameters are scale-free position parameters;
an obtaining module 720, configured to obtain an inertial acceleration of the target image, and determine an algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameter;
a correction module 730, configured to correct the candidate position parameter according to the algorithm error value to obtain a target position parameter; the target position parameter is a position parameter with a scale.
The position parameter acquiring device provided by the embodiment of the disclosure can execute the position parameter acquiring method provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the executing method.
In order to implement the above embodiments, the present disclosure also proposes a computer program product comprising a computer program/instructions which, when executed by a processor, implements the position parameter obtaining method in the above embodiments.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Referring now specifically to fig. 8, a schematic diagram of a structure suitable for implementing an electronic device 800 in embodiments of the present disclosure is shown. The electronic device 800 in the disclosed embodiment may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, the electronic device 800 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage apparatus 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the electronic device 800. The processing apparatus 801, the ROM 802 and the RAM 803 are connected to each other through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a liquid crystal display (LCD), speakers, vibrators, and the like; storage devices 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication device 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 illustrates an electronic device 800 having various devices, it should be understood that not all illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 809, or installed from the storage device 808, or installed from the ROM 802. When executed by the processing device 801, the computer program performs the above-described functions defined in the position parameter acquisition method of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
calculate candidate position parameters of a target image in continuous multi-frame images according to a preset image algorithm, the candidate position parameters being scale-free position parameters; acquire the inertial acceleration of the target image, and determine an algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameters; and finally correct the candidate position parameters according to the algorithm error value to obtain target position parameters, the target position parameters being position parameters with a scale. This converts the scale-free position parameters calculated by the preset image algorithm into real-world coordinates, provides technical support for scenarios that require real pixel positions, and, because the position parameters are solved with the image algorithm and the inertial acceleration only loosely coupled, offers high application flexibility.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a position parameter acquisition method including:
calculating candidate position parameters of a target image in continuous multi-frame images according to a preset image algorithm; the candidate position parameters are scale-free position parameters;
obtaining the inertial acceleration of the target image, and determining an algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameter;
correcting the candidate position parameters according to the algorithm error value to obtain target position parameters; the target position parameter is a position parameter with a scale.
According to one or more embodiments of the present disclosure, in the position parameter obtaining method provided by the present disclosure, the calculating a candidate position parameter of a target image in consecutive multi-frame images includes:
determining a plurality of pairs of feature points matched with the target image and a reference image, wherein the reference image is a previous frame image adjacent to the target image in the continuous multi-frame images;
and calculating candidate position parameters of the target image frame according to the coordinates of the multiple pairs of feature points and the internal reference of the camera.
According to one or more embodiments of the present disclosure, in a location parameter obtaining method provided by the present disclosure,
the determining an algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameter includes:
acquiring a corresponding visual speed according to a pose coordinate parameter in the candidate position parameters;
and determining an algorithm error value of the preset image algorithm according to the visual velocity and the inertial acceleration.
According to one or more embodiments of the present disclosure, in the position parameter obtaining method provided by the present disclosure, the determining an algorithm error value of the preset image algorithm according to the visual velocity and the inertial acceleration includes:
determining the inertial navigation velocity of the target image according to the inertial acceleration;
calculating a ratio of the inertial navigation velocity and the visual velocity to determine the algorithm error value.
According to one or more embodiments of the present disclosure, in the position parameter obtaining method provided by the present disclosure, the determining an algorithm error value of the preset image algorithm according to the visual velocity and the inertial acceleration includes:
determining corresponding visual acceleration according to the visual speed;
calculating a ratio of the inertial acceleration and the visual acceleration to determine the algorithm error value.
According to one or more embodiments of the present disclosure, in the position parameter acquisition method provided by the present disclosure, the correcting the candidate position parameter according to the algorithm error value to obtain the target position parameter includes:
calculating a product of the algorithm error value and the candidate location parameter to determine the target location parameter.
According to one or more embodiments of the present disclosure, in the position parameter obtaining method provided by the present disclosure, when there are a plurality of target images, the correcting the candidate position parameter according to the algorithm error value includes:
calculating the error mean value of all algorithm error values corresponding to the target images;
and correcting the first candidate world coordinate according to the error mean value.
According to one or more embodiments of the present disclosure, the method for acquiring location parameters provided by the present disclosure further includes:
acquiring a first image coordinate of a first pixel point in the target image, and acquiring a first world coordinate according to the target position parameter and the first image coordinate;
acquiring a second image coordinate of at least one second pixel point in the target image, and acquiring a second world coordinate according to the target position parameter and the second image coordinate;
and when the number of the at least one second pixel point is one, determining the real distance between the first pixel point and the second pixel point according to the first world coordinate and the second world coordinate.
According to one or more embodiments of the present disclosure, the method for obtaining location parameters further includes:
and when the at least one second pixel point is multiple, constructing a three-dimensional model corresponding to the first pixel point and the multiple second pixel points according to the first world coordinate and the multiple second world coordinates.
According to one or more embodiments of the present disclosure, in the position parameter obtaining method provided by the present disclosure, the target position parameter includes:
camera world coordinates and camera pose; and/or,
camera rotation parameters and camera translation parameters.
According to one or more embodiments of the present disclosure, there is provided a position parameter acquisition apparatus including:
the calculation module is used for calculating candidate position parameters of a target image in continuous multi-frame images according to a preset image algorithm; the candidate position parameters are scale-free position parameters;
the acquisition module is used for acquiring the inertial acceleration of the target image and determining an algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameter;
the correction module is used for correcting the candidate position parameters according to the algorithm error value so as to obtain target position parameters; the target position parameter is a position parameter with a scale.
According to one or more embodiments of the present disclosure, in the position parameter obtaining apparatus provided in the present disclosure, the calculating module is specifically configured to:
determining a plurality of pairs of feature points matched with the target image and a reference image, wherein the reference image is a previous frame image adjacent to the target image in the continuous multi-frame images;
and calculating candidate position parameters of the target image frame according to the coordinates of the multiple pairs of feature points and the internal reference of the camera.
According to one or more embodiments of the present disclosure, in the position parameter obtaining apparatus provided by the present disclosure, the obtaining module includes:
the acquisition unit is used for acquiring corresponding visual speed according to the pose coordinate parameter in the candidate position parameters;
and the determining unit is used for determining an algorithm error value of the preset image algorithm according to the visual speed and the inertial acceleration.
According to one or more embodiments of the present disclosure, in the position parameter acquiring apparatus provided by the present disclosure, the determining unit is specifically configured to:
determining the inertial navigation speed of the target image according to the inertial acceleration;
calculating a ratio of the inertial navigation velocity and the visual velocity to determine the algorithm error value.
According to one or more embodiments of the present disclosure, in the position parameter obtaining apparatus provided by the present disclosure, the determining unit is specifically configured to: determining corresponding visual acceleration according to the visual speed;
calculating a ratio of the inertial acceleration and the visual acceleration to determine the algorithm error value.
According to one or more embodiments of the present disclosure, in the position parameter obtaining apparatus provided by the present disclosure, the correction module is specifically configured to:
calculating a product of the algorithm error value and the candidate location parameter to determine the target location parameter.
According to one or more embodiments of the present disclosure, in the position parameter acquiring apparatus provided by the present disclosure, when there are a plurality of target images, the correcting module is specifically configured to:
calculating the error mean value of all algorithm error values corresponding to the target images;
and correcting the first candidate world coordinate according to the error mean value.
According to one or more embodiments of the present disclosure, in the position parameter obtaining apparatus provided by the present disclosure,
the acquisition module is further configured to: acquire a first image coordinate of a first pixel point in the target image, and obtain a first world coordinate according to the target position parameters and the first image coordinate;
acquire a second image coordinate of at least one second pixel point in the target image, and obtain a second world coordinate according to the target position parameters and the second image coordinate;
and the distance measuring module is configured to, when there is exactly one second pixel point, determine the real distance between the first pixel point and the second pixel point according to the first world coordinate and the second world coordinate (see the sketch below).
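A minimal sketch of this ranging step follows, assuming a pinhole camera model with world-to-camera pose (R, t) at metric scale, and assuming a depth value per pixel is available (e.g. from triangulation), which this passage does not specify.

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R, t):
    """Back-project pixel (u, v) at the given depth to a world coordinate,
    inverting x_cam = R @ X_world + t."""
    x_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    return R.T @ (x_cam - t)

def real_distance(p1_uvd, p2_uvd, K, R, t):
    """p1_uvd, p2_uvd: (u, v, depth) tuples for the first and second pixel."""
    w1 = pixel_to_world(*p1_uvd, K, R, t)
    w2 = pixel_to_world(*p2_uvd, K, R, t)
    return np.linalg.norm(w1 - w2)  # real (metric) distance
```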
According to one or more embodiments of the present disclosure, the position parameter obtaining apparatus further includes:
a construction module, configured to, when there are a plurality of second pixel points, construct a three-dimensional model corresponding to the first pixel point and the plurality of second pixel points according to the first world coordinate and the plurality of second world coordinates.
According to one or more embodiments of the present disclosure, in the position parameter obtaining apparatus provided by the present disclosure, the target position parameters include:
camera world coordinates and camera pose; and/or,
camera rotation parameters and camera translation parameters.
In accordance with one or more embodiments of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement any of the position parameter acquisition methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing any one of the position parameter acquisition methods provided by the present disclosure.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features with similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (13)

1. A position parameter acquisition method is characterized by comprising the following steps:
calculating candidate position parameters of a target image in consecutive multi-frame images according to a preset image algorithm, wherein the candidate position parameters are scale-free position parameters;
acquiring an inertial acceleration of the target image, and determining an algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameters;
correcting the candidate position parameters according to the algorithm error value to obtain target position parameters, wherein the target position parameters are scaled position parameters.
2. The method of claim 1, wherein the calculating candidate position parameters of a target image in consecutive multi-frame images comprises:
determining a plurality of pairs of feature points matched between the target image and a reference image, wherein the reference image is the previous frame adjacent to the target image in the consecutive multi-frame images;
and calculating the candidate position parameters of the target image according to the coordinates of the plurality of pairs of feature points and the camera intrinsic parameters.
3. The method of claim 1, wherein the determining an algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameters comprises:
acquiring a corresponding visual velocity according to the pose coordinate parameters in the candidate position parameters;
and determining the algorithm error value of the preset image algorithm according to the visual velocity and the inertial acceleration.
4. The method of claim 3, wherein the determining the algorithm error value of the preset image algorithm according to the visual velocity and the inertial acceleration comprises:
determining an inertial navigation velocity of the target image according to the inertial acceleration;
and calculating the ratio of the inertial navigation velocity to the visual velocity to determine the algorithm error value.
5. The method of claim 3, wherein the determining the algorithm error value of the preset image algorithm according to the visual velocity and the inertial acceleration comprises:
determining a corresponding visual acceleration according to the visual velocity;
and calculating the ratio of the inertial acceleration to the visual acceleration to determine the algorithm error value.
6. The method of claim 4 or 5, wherein the correcting the candidate position parameters according to the algorithm error value to obtain the target position parameters comprises:
calculating the product of the algorithm error value and the candidate position parameters to determine the target position parameters.
7. The method of any one of claims 1-5, wherein, when there are a plurality of target images, the correcting the candidate position parameters according to the algorithm error value comprises:
calculating the mean of the algorithm error values corresponding to all of the target images;
and correcting the candidate position parameters according to the error mean.
8. The method of claim 1, further comprising:
acquiring a first image coordinate of a first pixel point in the target image, and obtaining a first world coordinate according to the target position parameters and the first image coordinate;
acquiring a second image coordinate of at least one second pixel point in the target image, and obtaining a second world coordinate according to the target position parameters and the second image coordinate;
and when there is exactly one second pixel point, determining the real distance between the first pixel point and the second pixel point according to the first world coordinate and the second world coordinate.
9. The method of claim 8, further comprising:
and when there are a plurality of second pixel points, constructing a three-dimensional model corresponding to the first pixel point and the plurality of second pixel points according to the first world coordinate and the plurality of second world coordinates.
10. The method of claim 1, wherein the target position parameters comprise:
camera world coordinates and camera pose; and/or,
camera rotation parameters and camera translation parameters.
11. A position parameter acquisition apparatus, characterized by comprising:
the calculation module is configured to calculate candidate position parameters of a target image in consecutive multi-frame images according to a preset image algorithm, wherein the candidate position parameters are scale-free position parameters;
the acquisition module is configured to acquire an inertial acceleration of the target image and determine an algorithm error value of the preset image algorithm according to the inertial acceleration and the candidate position parameters;
and the correction module is configured to correct the candidate position parameters according to the algorithm error value to obtain target position parameters, wherein the target position parameters are scaled position parameters.
12. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the position parameter acquisition method of any one of claims 1 to 10.
13. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the position parameter acquisition method of any one of claims 1 to 10.
CN202110603476.4A 2021-05-31 2021-05-31 Position parameter acquisition method, device, equipment and medium Pending CN115482275A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110603476.4A CN115482275A (en) 2021-05-31 2021-05-31 Position parameter acquisition method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110603476.4A CN115482275A (en) 2021-05-31 2021-05-31 Position parameter acquisition method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN115482275A 2022-12-16

Family

ID=84419653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110603476.4A Pending CN115482275A (en) 2021-05-31 2021-05-31 Position parameter acquisition method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115482275A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination