CN113034582A - Pose optimization device and method, electronic device and computer readable storage medium - Google Patents


Info

Publication number
CN113034582A
CN113034582A (application CN202110320659.5A)
Authority
CN
China
Prior art keywords
pose information
camera
current frame
pose
constraint
Prior art date
Legal status
Pending
Application number
CN202110320659.5A
Other languages
Chinese (zh)
Inventor
黄凯
王楠
章国锋
Current Assignee
Zhejiang Shangtang Technology Development Co Ltd (Zhejiang SenseTime Technology Development Co Ltd)
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd filed Critical Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202110320659.5A priority Critical patent/CN113034582A/en
Publication of CN113034582A publication Critical patent/CN113034582A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30244 - Camera pose


Abstract

The application discloses a pose optimization apparatus and method, an electronic device, and a computer-readable storage medium. The pose optimization method comprises the following steps: acquiring current-frame pose information of a camera and pose information of at least one previous frame, and obtaining a prior value of the camera's current-frame pose information using the at-least-one-previous-frame pose information and/or the current-frame pose information; generating a prior constraint on the current-frame pose information using the prior value and a current estimate of the camera's current-frame pose information; and optimizing the current-frame pose information using the prior constraint. In this way, the computational cost of the pose optimization method can be reduced and its robustness improved.

Description

Pose optimization device and method, electronic device and computer readable storage medium
Technical Field
The present disclosure relates to the field of positioning and navigation technologies, and in particular, to a pose optimization apparatus and method, an electronic device, and a computer-readable storage medium.
Background
Visual odometry is a technique for determining and optimizing the pose of a robot by analyzing a sequence of correlated images. On the one hand, a visual odometry algorithm needs to compute a large amount of data in real time and therefore depends on hardware performance; if the algorithm cannot run in real time, its smooth application in some scenarios is affected. Meanwhile, the quality of the sensor data also affects the algorithm's effect and greatly influences the user experience. However, many existing visual odometry algorithms for intelligent terminals can only run on terminals with strong computing power and good sensor data quality.
Therefore, obtaining a visual odometry algorithm, that is, a pose optimization method, with low computational complexity and strong robustness is an urgent problem to be solved in positioning and navigation technology.
Disclosure of Invention
The application provides a pose optimization apparatus and method, an electronic device, and a computer-readable storage medium, which can reduce the computational cost of pose optimization and improve its robustness.
In order to solve the technical problem, a first aspect of the present application provides a pose optimization method, comprising the following steps: acquiring current-frame pose information of a camera and pose information of at least one previous frame, and obtaining a prior value of the camera's current-frame pose information using the at-least-one-previous-frame pose information and/or the current-frame pose information; generating a prior constraint on the current-frame pose information using the prior value and a current estimate of the camera's current-frame pose information; and optimizing the current-frame pose information using the prior constraint.
Wherein optimizing the current-frame pose information using the prior constraint comprises: iteratively optimizing the current-frame pose information using the prior constraint, where the current estimate in the prior constraint used in each optimization is the current-frame pose information produced by the previous optimization, and the current estimate in the prior constraint used in the first optimization is the prior value itself.
In this way, iterative optimization of the current-frame pose information can be realized until the precision of the iteratively optimized pose information meets the requirement.
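As a rough illustration, this iteration scheme can be sketched as follows; the helper name `refine_step` and the fixed iteration count are assumptions for illustration (the patent iterates until the precision meets a requirement), not part of the disclosure:

```python
def iterative_optimize(prior, refine_step, num_iters=5):
    """Iteratively optimize a pose.

    `refine_step(estimate, prior)` stands in for one optimization pass that
    returns an improved estimate. In the first iteration the current
    estimate is the prior value itself; each later iteration reuses the
    pose produced by the previous one.
    """
    estimate = prior  # first iteration: current estimate = prior value
    for _ in range(num_iters):
        estimate = refine_step(estimate, prior)
    return estimate
```

In practice the loop would terminate on a precision threshold rather than a fixed iteration count.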
The step of obtaining at least the previous-frame pose information and the current-frame pose information of the camera, and obtaining the prior value of the camera's current-frame pose information, comprises: acquiring the pose information of the camera's two preceding frames; acquiring, from the two preceding frames' pose information, a first position coordinate and a second position coordinate of the camera in a preset coordinate system, respectively; and computing a position-coordinate prior value from the first and second position coordinates. The step of generating the prior constraint on the current-frame pose information using the prior value and the current estimate comprises: acquiring the current estimate of the camera's current-frame position coordinates, and computing the difference between that current estimate and the position-coordinate prior value to obtain the motion constraint.
In this way, the position-coordinate prior value is obtained from the first and second position coordinates of the camera's two preceding frames, and the difference between the current estimate of the current-frame position coordinates and the position-coordinate prior value serves as the camera's motion constraint, so the motion constraint is realized with a small amount of computation.
Wherein the step of computing the position-coordinate prior value from the first and second position coordinates comprises: acquiring the coordinate difference between the second position coordinates and the first position coordinates, and acquiring the sum of that coordinate difference and the second position coordinates to obtain the position-coordinate prior value.
By first obtaining the coordinate difference between the second and first position coordinates, and then taking the sum of that difference and the second position coordinates as the position-coordinate prior value, a uniform-motion constraint on the camera can be realized.
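This uniform-motion (constant-velocity) prior can be sketched in a few lines; the helper names `position_prior` and `motion_constraint` are illustrative assumptions, not taken from the patent:

```python
def position_prior(p_prev2, p_prev1):
    """Constant-velocity prior: p_prior = p_(i-1) + (p_(i-1) - p_(i-2))."""
    return tuple(b + (b - a) for a, b in zip(p_prev2, p_prev1))

def motion_constraint(p_est, p_prior):
    """Residual between the current position estimate and the prior value."""
    return tuple(e - q for e, q in zip(p_est, p_prior))
```

The prior simply extrapolates the last inter-frame displacement one frame forward, which matches the assumption that the phone moves uniformly.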
Wherein the pose information comprises the camera height, the prior constraint comprises a height constraint, and the prior value comprises a height prior value. The step of obtaining the prior value of the camera's current-frame pose information comprises: acquiring the camera's previous-frame height from the previous-frame pose information as the height prior value. The step of generating the prior constraint comprises: acquiring the current estimate of the camera's current-frame height, and computing the difference between that current estimate and the height prior value to obtain the height constraint.
Therefore, the camera's previous-frame height is used as the height prior value for the current frame, and the difference between the current estimate of the current-frame height and the height prior value serves as the camera's height constraint, so the height constraint is realized with a small amount of computation.
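The height constraint is a single subtraction; a minimal sketch with an assumed helper name:

```python
def height_constraint(h_est, h_prev):
    """Height residual: the previous-frame height h_prev acts as the prior value."""
    return h_est - h_prev
```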
Wherein the pose information comprises the camera's orientation information, the prior constraint comprises an orientation constraint, and the prior value comprises an orientation prior value. The step of obtaining the prior value of the camera's current-frame pose information comprises: acquiring the camera's orientation information from the pose information as the orientation prior value. The step of generating the prior constraint comprises: acquiring the current estimate of the camera's current-frame orientation information, and computing the logarithm of the product of the conjugate of the orientation prior value and the current estimate of the current-frame orientation information to obtain the orientation constraint.
Therefore, the camera's previous-frame orientation information is used as the orientation prior value for the current-frame orientation information, and the logarithm of the product of the conjugate of the orientation prior value and the current estimate of the current-frame orientation information serves as the camera's orientation constraint, so the orientation constraint is realized with a small amount of computation.
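A sketch of this orientation constraint, under the assumptions that quaternions are stored as (w, x, y, z) tuples and that the log map returns a rotation vector whose norm is the full rotation angle (one common convention); all function names are illustrative:

```python
import math

def quat_conj(q):
    """Conjugate of quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def quat_log(q):
    """Log map of a unit quaternion as a rotation vector (axis * angle)."""
    w, x, y, z = q
    v = math.sqrt(x * x + y * y + z * z)
    if v < 1e-12:          # near-identity rotation: zero residual
        return (0.0, 0.0, 0.0)
    s = 2.0 * math.atan2(v, w) / v
    return (s * x, s * y, s * z)

def orientation_constraint(q_prior, q_est):
    """Residual log(conj(q_prior) * q_est); zero when the orientations agree."""
    return quat_log(quat_mul(quat_conj(q_prior), q_est))
```

When the estimate matches the prior, the relative quaternion is the identity and the residual vanishes.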
Wherein the prior constraint comprises a reprojection error constraint and the prior value comprises a reprojection prior value. The step of obtaining the prior value of the camera's current-frame pose information comprises: acquiring the camera's previous-frame pose information, previous-frame image information, and the current estimate of the current-frame pose information; acquiring first feature points from the previous-frame image information; projecting the first feature points onto the Z plane of the world coordinate system using the camera intrinsics and the previous-frame pose information to obtain second feature points; and projecting the second feature points into the current frame's pixel coordinate system using the camera intrinsics and the current estimate of the current-frame pose information to obtain the reprojection prior value. The step of generating the prior constraint comprises: acquiring the camera's current-frame image information; finding, with an optical-flow algorithm, third feature points in the current-frame image corresponding to the first feature points; and computing the difference between the reprojection prior value and the third feature points to obtain the reprojection error constraint.
Therefore, the first feature points in the camera's previous-frame image are projected onto the Z plane of the world coordinate system to obtain the second feature points; the second feature points are then projected into the current frame's pixel coordinate system to obtain the reprojection prior value; and finally the difference between the reprojection prior value and the corresponding third feature points in the current-frame image serves as the reprojection error constraint, so the reprojection error constraint is realized with a small amount of computation.
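The two projections can be sketched with plain pinhole-camera algebra. As simplifying assumptions (not from the patent), the intrinsics are reduced to (f, cx, cy), orientations are given as 3x3 rotation matrices rather than quaternions, and the Z plane is taken as z = 0:

```python
def mat_vec(R, v):
    """Multiply a 3x3 matrix (nested lists) by a 3-vector."""
    return tuple(sum(R[r][c] * v[c] for c in range(3)) for r in range(3))

def transpose(R):
    return [[R[c][r] for c in range(3)] for r in range(3)]

def backproject_to_z_plane(K, R_prev, p_prev, uv):
    """Intersect the viewing ray of pixel uv (previous frame) with the plane z = 0."""
    f, cx, cy = K
    ray_cam = ((uv[0] - cx) / f, (uv[1] - cy) / f, 1.0)  # K^-1 applied to the pixel
    d = mat_vec(R_prev, ray_cam)                         # ray direction in the world frame
    t = -p_prev[2] / d[2]                                # ray/plane intersection parameter
    return tuple(p_prev[i] + t * d[i] for i in range(3))

def reproject(K, R_cur, p_cur, X):
    """Project world point X into the current frame's pixel coordinates."""
    f, cx, cy = K
    xc = mat_vec(transpose(R_cur), tuple(X[i] - p_cur[i] for i in range(3)))
    return (f * xc[0] / xc[2] + cx, f * xc[1] / xc[2] + cy)

def reprojection_error(uv_reproj, uv_tracked):
    """Residual between the reprojection prior value and the optical-flow match."""
    return (uv_reproj[0] - uv_tracked[0], uv_reproj[1] - uv_tracked[1])
```

`backproject_to_z_plane` yields the second feature point, `reproject` yields the reprojection prior value, and the residual against the optical-flow match is the constraint.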
The prior constraint comprises the motion constraint, the height constraint, the orientation constraint, and the reprojection error constraint, and the step of iteratively optimizing the current-frame pose information using the prior constraint comprises: setting weights for the position-coordinate constraint, the height constraint, the orientation constraint, and the reprojection error constraint, respectively; obtaining a weighted sum of the position-coordinate constraint, height constraint, orientation constraint, and reprojection error constraint based on the weights; performing nonlinear optimization on the weighted sum; and outputting the optimized pose information of the camera.
Therefore, the weighted sum of the camera's current-frame position-coordinate constraint, height constraint, orientation constraint, and reprojection error constraint is obtained first, and nonlinear optimization is then performed on the weighted sum to obtain the final current-frame pose information of the camera.
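As a toy stand-in for the nonlinear optimization step (the patent does not name a solver; Gauss-Newton or Levenberg-Marquardt would be typical in practice), the weighted sum of squared constraint residuals can be minimized with a finite-difference gradient descent:

```python
def weighted_cost(residual_fns, weights, x):
    """Weighted sum of squared residual norms, one term per prior constraint."""
    total = 0.0
    for w, fn in zip(weights, residual_fns):
        r = fn(x)
        total += w * sum(ri * ri for ri in r)
    return total

def optimize_pose(residual_fns, weights, x0, lr=0.1, iters=200, eps=1e-6):
    """Minimize the weighted cost by coordinate-wise finite-difference descent.

    A deliberately simple sketch: each residual function maps the pose
    parameter vector x to a residual vector, mirroring the motion, height,
    orientation, and reprojection constraints above.
    """
    x = list(x0)
    for _ in range(iters):
        for i in range(len(x)):
            xp = list(x)
            xp[i] += eps
            g = (weighted_cost(residual_fns, weights, xp) -
                 weighted_cost(residual_fns, weights, x)) / eps
            x[i] -= lr * g
    return x
```

With two scalar constraints pulling toward different targets, the minimizer settles at the weighted compromise, which is the intended effect of weighting the four constraints.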
In order to solve the above technical problem, a second aspect of the present application provides a pose optimization apparatus, comprising: an acquisition module for acquiring the camera's current-frame pose information and at least one previous frame's pose information, and for obtaining the prior value of the camera's current-frame pose information using the at-least-one-previous-frame pose information and/or the current-frame pose information; a generation module, coupled to the acquisition module, for generating the prior constraint on the current-frame pose information using the prior value and the current estimate of the camera's current-frame pose information; and an optimization module, coupled to the generation module, for optimizing the current-frame pose information using the prior constraint.
To solve the above technical problem, a third aspect of the present application provides an electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program data stored in the memory to implement any of the pose optimization methods described above.
To solve the above technical problem, a fourth aspect of the present application provides a computer-readable storage medium storing program data that can be executed to implement the pose optimization method of any one of the above.
According to the above scheme, the prior value of the camera's current-frame pose information is obtained from the pose information of at least one previous frame and/or of the current frame, and the current-frame pose information is optimized using the prior constraint generated from the prior value and the current estimate of the current-frame pose information, yielding the optimized current-frame pose information. Since the camera's current-frame pose information can be optimized using only the pose information of at least one previous frame and/or the current frame, the amount of computation and the computation time of the pose optimization method can be reduced and its computational efficiency improved. In addition, every frame of the camera's pose information can be optimized in this way, enabling continuous tracking of the camera's pose and improving the robustness of the pose optimization method.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram of an embodiment of a pose optimization method of the present application;
FIG. 2 is a detailed flowchart of step S101 in the pose optimization method according to the embodiment of FIG. 1;
FIG. 3 is a detailed flowchart of step S203 in the embodiment of FIG. 2;
FIG. 4 is a detailed flowchart of step S101 in the pose optimization method according to the embodiment of FIG. 1;
FIG. 5 is a detailed flowchart of step S102 in the pose optimization method in the embodiment of FIG. 1;
FIG. 6 is a detailed flowchart of step S102 in the pose optimization method in the embodiment of FIG. 1;
FIG. 7 is a detailed flowchart of step S102 in the pose optimization method in the embodiment of FIG. 1;
FIG. 8 is a detailed flowchart of step S102 in the pose optimization method in the embodiment of FIG. 1;
FIG. 9 is a detailed flowchart of step S103 in the pose optimization method according to the embodiment of FIG. 1;
FIG. 10 is a schematic structural diagram of an embodiment of the pose optimization apparatus of the present application;
FIG. 11 is a schematic structural diagram of an embodiment of an electronic device of the present application;
FIG. 12 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Existing visual odometry algorithms have high computational complexity and depend heavily on hardware performance when computing large amounts of data in real time; the quality of the sensor data also affects the algorithm's effect, greatly influencing the user experience.
For example, in a web-side application scenario, a web program often takes more than 10 times as long to run as a native application, so a visual odometry algorithm aimed at the web side must put runtime control first. Meanwhile, the images and the accelerometer, gyroscope, gravimeter, and orientation-meter readings acquired on the web side carry no accurate timestamps, so the data sources contain errors. Moreover, a web page is essentially single-threaded, and the frequency of the data is affected by the algorithm and by other parts of the page (such as page loading and rendering), so it is difficult to guarantee.
To solve these problems, the present application provides a pose optimization method, introduced below in terms of a web-side application scenario. Of course, the embodiments of the present application may also be used in application scenarios other than the web side.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a pose optimization method according to the present application, where the pose optimization method according to the present embodiment includes the following steps:
step S101: acquiring current frame pose information and at least previous frame pose information of the camera, and acquiring a prior value of the current frame pose information of the camera by using the at least previous frame pose information and/or the current frame pose information.
Each frame of information of the camera comprises a timestamp, pose information and image information, wherein the pose information comprises position coordinates, height, orientation information and the like of the camera.
The embodiment can acquire pose information of a previous frame or previous N frames of the camera, or pose information and image information of the previous frame of the camera, or pose information and image information of the previous N frames of the camera; wherein N is a natural number greater than or equal to 2.
The position coordinates of the camera's i-th frame in this embodiment comprise the camera's two-dimensional coordinates (p_ix, p_iy) in the world coordinate system; the two-dimensional coordinates (p_ix, p_iy) together with the camera height p_iz form the camera's three-dimensional coordinates p_i = (p_ix, p_iy, p_iz) in the world coordinate system. The orientation information of the camera is its orientation q_i = (q_iw, q_ix, q_iy, q_iz) in the world coordinate system, where q_i is a unit quaternion. The pose information of the camera's i-th frame is {p_i, q_i}.
To simplify the measurement method, a world coordinate system may be set as a camera coordinate system at system initialization.
Here, the two-dimensional coordinates (p_ix, p_iy) of this embodiment are the coordinates of the camera's i-th frame in the Z plane, and the camera height p_iz is the height of the camera's i-th frame above the Z plane.
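The per-frame state {p_i, q_i} described above might be held in a small container; the following sketch is an illustrative assumption about the field layout, not part of the disclosure:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FramePose:
    """Per-frame camera state: world position p_i and unit-quaternion orientation q_i."""
    p: Tuple[float, float, float]         # (p_ix, p_iy, p_iz); (p_ix, p_iy) lie in the Z plane
    q: Tuple[float, float, float, float]  # (q_iw, q_ix, q_iy, q_iz)
```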
The pose information of the camera can be obtained by a positioning technology, and the orientation information of the camera can be obtained by an orientation meter or a gyroscope.
Assuming that the current frame of the camera is the i-th frame, this embodiment obtains at least the previous-frame pose information {p_(i-1), q_(i-1)} of the camera, and uses at least the previous-frame pose information {p_(i-1), q_(i-1)} to obtain the prior value of the camera's current-frame pose information {p_i, q_i}.
The prior value of the present embodiment includes any one or any combination of a position coordinate prior value, a height prior value, an orientation prior value, and a reprojection prior value.
Optionally, the prior value of the present embodiment includes a coordinate position prior value, and the present embodiment may implement step S101 by the method shown in fig. 2. The method of the present embodiment specifically includes steps S201 to S203.
Step S201: and acquiring the first two frames of pose information of the camera.
Assuming that the current frame of the camera is the i-th frame, the two preceding frames of the camera are the (i-2)-th and (i-1)-th frames; acquire the pose information {p_(i-2), q_(i-2)} and {p_(i-1), q_(i-1)} of these two frames.
Step S202: and respectively acquiring a first position coordinate and a second position coordinate of the camera in a preset coordinate system from the previous two frames of pose information.
The preset coordinate system of the present embodiment may be a camera coordinate system; in other embodiments, the predetermined coordinate system may also be a world coordinate system.
From the (i-2)-th frame pose information {p_(i-2), q_(i-2)} of the camera, acquire the camera's first position coordinates (p_(i-2)x, p_(i-2)y); from the (i-1)-th frame pose information {p_(i-1), q_(i-1)} of the camera, acquire the second position coordinates (p_(i-1)x, p_(i-1)y).
Step S203: the first and second position coordinates are calculated to obtain a position coordinate prior value.
Specifically, the present embodiment may implement step S203 by the method as shown in fig. 3. The method of the present embodiment specifically includes step S301 and step S302.
Step S301: and acquiring a coordinate difference value of the second position coordinate and the first position coordinate.
Specifically, obtain the difference between the coordinate value p_(i-1)x of the second position coordinates (p_(i-1)x, p_(i-1)y) and the coordinate value p_(i-2)x of the first position coordinates (p_(i-2)x, p_(i-2)y) as the first coordinate difference, and the difference between the coordinate value p_(i-1)y of the second position coordinates and the coordinate value p_(i-2)y of the first position coordinates as the second coordinate difference.
The first coordinate difference value represents the movement distance of the camera along the coordinate axis X from the (i-2) th frame to the (i-1) th frame; the second coordinate difference value characterizes a moving distance of the camera along the coordinate axis Y from the (i-2) th frame to the (i-1) th frame.
Step S302: and acquiring the sum of the coordinate difference and the second position coordinate to obtain a position coordinate prior value.
Specifically, obtain the sum of the first coordinate difference and the coordinate value p_(i-1)x of the second position coordinates (p_(i-1)x, p_(i-1)y) as the first coordinate-position prior value, and the sum of the second coordinate difference and the coordinate value p_(i-1)y of the second position coordinates as the second coordinate-position prior value.
The first coordinate position prior value is a prior value of the camera on a coordinate axis X so as to restrict the movement of the camera along the coordinate axis X; and the prior value of the second coordinate position is the prior value of the camera on the coordinate axis Y so as to restrict the movement of the camera along the coordinate axis Y.
In a web-side application scenario, a mobile phone in use is usually static or moving slowly relative to the ground, so the motion of the phone's camera can be assumed to be uniform. Therefore, based on steps S301 and S302 above, the coordinate-position prior value (i.e., the initial coordinate position) of the camera's i-th frame can be obtained from the uniform motion across the camera's (i-2)-th, (i-1)-th, and i-th frames as [p_(i-1) + (p_(i-1) - p_(i-2))]_xy.
Optionally, the prior value of this embodiment includes a height prior value, and this embodiment may obtain, from the pose information of the previous frame of the camera, the height of the previous frame of the camera as the height prior value.
The camera height can be obtained from cloud visual positioning, or from indoor Wi-Fi positioning, Bluetooth positioning, and other manners, which are not limited here.
In the present embodiment, the camera coordinate system is set as the world coordinate system during initialization, so that the height of the camera in the world coordinate system is the height in the camera coordinate system, and can be unified with the two-dimensional coordinates of the camera.
Specifically, from the (i-1)-th frame pose information {(p_(i-1)x, p_(i-1)y, p_(i-1)z), q_(i-1)} of the camera, the camera's (i-1)-th frame height p_(i-1)z is obtained as the height prior value.
In a web-side application scenario, when a user uses a mobile phone, the height of the phone above the ground is usually unchanged (or the change is small enough to ignore), so this embodiment may assume the user always holds the phone at a fixed height, ignoring height fluctuations caused by squatting, jumping, and the like. On this basis, the camera's (i-1)-th frame height can be used as the height prior value (i.e., the initial height value) p_(i-1)z of the camera's i-th frame.
Optionally, the prior value of the present embodiment includes an orientation prior value, and the present embodiment may obtain the orientation prior value from an orientation meter provided by the smartphone system.
Read the current reading (q_i_atti_w, q_i_atti_x, q_i_atti_y, q_i_atti_z) of the smartphone system's orientation meter as the orientation prior value, which also serves as the initial current estimate of the orientation information in the optimization described later.
Optionally, the prior value of this embodiment includes a reprojection prior value, and this embodiment may implement step S101 by the method shown in fig. 4. The method of the present embodiment includes steps S401 to S404.
Step S401: and acquiring the position and attitude information of the previous frame, the image information of the previous frame and the current estimated value of the position and attitude information of the current frame of the camera.
Acquire the (i-1)-th frame pose information {p_(i-1), q_(i-1)} and the (i-1)-th frame image information I_(i-1) of the camera, as well as the current estimate {p_i, q_i} of the camera's i-th frame pose information.
The camera's image information can be obtained by the camera capturing images of the scene.
Step S402: and acquiring a first characteristic point of the previous frame of image information.
This embodiment may adopt the ORB algorithm or the like to extract first feature points Y_(i-1),n (1 ≤ n ≤ N) from the (i-1)-th frame image information I_(i-1), where N is the number of feature points extracted from the image.
Step S403: and projecting the first characteristic point to a Z plane of a world coordinate system by using the internal reference of the camera and the pose information of the previous frame to obtain a second characteristic point.
The camera intrinsics are used to convert between points in the camera coordinate system and points in the pixel coordinate system. The intrinsic matrix K may be a 3 × 3 matrix. For a point p_c in the camera coordinate system, its coordinate in the pixel coordinate system is K·p_c; conversely, a point in the pixel coordinate system can be projected back into the camera coordinate system by the inverse of K.
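The two directions of this intrinsic mapping can be sketched as follows. This is a minimal illustration with a hypothetical intrinsic matrix (fx = fy = 500, principal point (320, 240)); the function names are not from the patent.

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(p_c, K):
    """Camera-frame point -> pixel coordinate: (K @ p_c) divided by depth z."""
    uvw = K @ p_c
    return uvw[:2] / uvw[2]

def unproject(uv, K):
    """Pixel -> ray at unit depth in the camera frame, via the inverse of K."""
    return np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])

p_c = np.array([0.2, -0.1, 2.0])  # a point 2 m in front of the camera
uv = project(p_c, K)              # its pixel coordinate
ray = unproject(uv, K)            # same direction as p_c, rescaled to z = 1
```

Note that unprojection only recovers the viewing ray, not the depth; recovering a 3D point requires an extra constraint such as the Z-plane intersection in step S403.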
Using the camera intrinsics and the pose information of the previous frame, the first feature points Y_(i-1,n) are projected into the camera coordinate system, and the intersection points of the rays formed by the camera origin and the projected points with the Z plane of the world coordinate system are obtained; all these intersection points form the second feature points x_(i-1,n) (1 ≤ n ≤ N).
The second characteristic points are in one-to-one correspondence with the first characteristic points, the second characteristic points are located in a world coordinate system, and the first characteristic points are located in a pixel coordinate system of a previous frame of image.
Step S404: project the second feature points to the pixel coordinate system of the current frame by using the camera intrinsics and the current estimate of the current frame pose information of the camera, to obtain the reprojection prior value. Specifically, using the camera intrinsics, the current estimate of the current frame pose information {p_i, q_i}, and the second feature points x_(i-1,n) in the world coordinate system, the second feature points are projected into the pixel coordinate system of the current frame to obtain the reprojection prior value π(q_i*(x_(i-1,n) − p_i)).
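The two projections of steps S403 and S404 can be sketched as follows. This is a minimal illustration, not the patented implementation: for brevity the pose is represented as a world-from-camera rotation matrix R (the quaternion q in the text plays the same role) plus a position p, and the intrinsics, pose values and pixel coordinates are all hypothetical.

```python
import numpy as np

def pixel_to_zplane(uv, K, R, p):
    """Step S403 sketch: back-project pixel uv through pose (R, p) and
    intersect the ray with the world plane z = 0 (x = p + t*d with x_z = 0)."""
    d = R @ (np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0]))  # ray in world frame
    t = -p[2] / d[2]
    return p + t * d

def reproject(x_w, K, R, p):
    """Step S404 sketch: project world point x_w into the pixel frame of a
    camera with pose (R, p), i.e. the reprojection prior pi(R^T (x_w - p))."""
    x_c = R.T @ (x_w - p)
    uvw = K @ x_c
    return uvw[:2] / uvw[2]

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.diag([1.0, -1.0, -1.0])   # camera looking straight down at the ground plane
p = np.array([0.0, 0.0, 2.0])    # camera 2 m above the z = 0 plane

x_w = pixel_to_zplane((400.0, 200.0), K, R, p)  # second feature point on z = 0
uv = reproject(x_w, K, R, p)     # with the same pose, recovers the original pixel
```

Using the same pose for both steps round-trips the pixel exactly; in the algorithm, step S404 instead uses the current estimate of the current frame pose, so the reprojection prior differs from the tracked point and yields a residual.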
Step S102: and generating prior constraint on the pose information of the current frame by using the prior value and the current estimated value of the pose information of the current frame of the camera.
The current estimate of the camera's current frame pose information is the result of the previous optimization of the current frame pose information; the current estimate used for the first optimization is the prior value generated in step S101.
Alternatively, in an embodiment, step S102 may be implemented by a method as shown in fig. 5. The prior value of this embodiment includes a coordinate position prior value, and the method of this embodiment specifically includes step S501 and step S502.
Step S501: a current estimate of the current frame position coordinates of the camera is obtained.
Obtain the current estimate of the i-th frame pose information of the camera {(p_ix, p_iy, p_iz), (q_iw, q_ix, q_iy, q_iz)}, and from it obtain the current estimate of the i-th frame position coordinates (p_ix, p_iy).
Step S502: and calculating the difference value between the current estimation value of the current frame position coordinate and the position coordinate prior value to obtain the motion constraint.
Obtain the difference between the current estimate of the i-th frame position coordinates [p_i]_xy and the position coordinate prior value [2p_(i-1) − p_(i-2)]_xy, i.e. [p_i − 2p_(i-1) + p_(i-2)]_xy, as the motion constraint.
Specifically, the difference between the current estimate p_ix of the i-th frame position coordinate and the prior value [2p_(i-1) − p_(i-2)]_x is used as the motion constraint along coordinate axis X, and the difference between the current estimate p_iy and the prior value [2p_(i-1) − p_(i-2)]_y is used as the motion constraint of the camera along coordinate axis Y.
This embodiment realizes a uniform-motion constraint on the camera; the method requires little computation and can improve computational efficiency. In other embodiments, a variable-speed or static constraint, etc., may be applied to the camera according to its actual application scene.
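The uniform-motion residual of steps S501 and S502 can be sketched in a few lines. The numeric values below are hypothetical; the residual is zero exactly when the camera moved at constant velocity over the last two frames.

```python
def motion_constraint(p_i, p_prev, p_prev2):
    """Uniform-motion residual on the XY plane:
    [p_i - 2*p_(i-1) + p_(i-2)]_xy.
    Zero when the camera moves at constant velocity between frames."""
    return tuple(p_i[k] - 2.0 * p_prev[k] + p_prev2[k] for k in range(2))

# Constant per-frame velocity (0.1, 0.2) on the XY plane -> zero residual.
r = motion_constraint((0.2, 0.4, 1.5), (0.1, 0.2, 1.5), (0.0, 0.0, 1.5))
```

Only the X and Y components enter this residual; the Z component is handled separately by the height constraint.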
Alternatively, in another embodiment, step S102 may be implemented by a method as shown in fig. 6. The prior value of this embodiment includes a height prior value, and the method of this embodiment specifically includes step S601 and step S602.
Step S601: a current estimate of the current frame height of the camera is obtained.
Obtain the current estimate of the i-th frame pose information of the camera {(p_ix, p_iy, p_iz), (q_iw, q_ix, q_iy, q_iz)}, and from it obtain the current estimate of the i-th frame height p_iz.
Step S602: and calculating the height difference value between the current estimated value of the height of the current frame and the height prior value to obtain the height constraint.
Obtain the height difference (p_iz − p_(i-1)z) between the current estimate of the i-th frame height p_iz and the height prior value p_(i-1)z as the height constraint.
The height of the camera may be obtained from cloud visual positioning, or by indoor Wi-Fi positioning, Bluetooth positioning and other manners, which is not limited herein.
According to the embodiment, the height constraint of the camera can be realized, the calculation amount of the method is small, and the calculation efficiency can be improved.
Alternatively, in an embodiment, step S102 may be implemented by a method as shown in fig. 7. The prior value of this embodiment includes an orientation prior value, and the method of this embodiment specifically includes step S701 and step S702.
Step S701: a current estimate of the current frame orientation information of the camera is obtained.
Obtain the current estimate of the i-th frame pose information of the camera {(p_ix, p_iy, p_iz), (q_iw, q_ix, q_iy, q_iz)}, and from it obtain the current estimate of the i-th frame orientation information q_i = (q_iw, q_ix, q_iy, q_iz).
Step S702: the logarithm of the product between the conjugate of the orientation prior value and the current estimate of the current frame orientation information is computed to obtain the orientation constraint.
Specifically, first obtain the conjugate q_i_atti* of the orientation prior value q_i_atti = (q_i_atti_w, q_i_atti_x, q_i_atti_y, q_i_atti_z); then obtain the logarithm log(q_i_atti*·q_i) of the product between the conjugate q_i_atti* and the current estimate of the i-th frame orientation information q_i = (q_iw, q_ix, q_iy, q_iz) as the orientation constraint.
The method and the device can realize orientation constraint of the camera, and the method is small in calculation amount and can improve calculation efficiency.
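The orientation residual log(q_i_atti*·q_i) can be sketched with plain quaternion algebra. This is an illustrative implementation (Hamilton convention, log map returning the half-angle rotation vector); function names and the test values are hypothetical.

```python
import math

def quat_conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_multiply(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_log(q):
    """Log map of a unit quaternion: the rotation-vector (axis * half-angle)
    part; (0, 0, 0) for the identity rotation."""
    w, x, y, z = q
    v = math.sqrt(x*x + y*y + z*z)
    if v < 1e-12:
        return (0.0, 0.0, 0.0)
    theta = math.atan2(v, w)
    return (theta * x / v, theta * y / v, theta * z / v)

def orientation_constraint(q_prior, q_est):
    """log(q_prior^* . q_est): zero iff the estimate matches the prior."""
    return quat_log(quat_multiply(quat_conjugate(q_prior), q_est))
```

Because the residual lives in the 3-dimensional tangent space rather than on the 4-dimensional quaternion, it stays compatible with the nonlinear least-squares weighting used later.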
Alternatively, in an embodiment, step S102 may be implemented by a method as shown in fig. 8. The prior value of this embodiment includes a reprojection prior value, and the method of this embodiment specifically includes steps S801 to S803.
Step S801: and acquiring current frame image information of the camera.
The acquisition method is similar to the acquisition of the previous frame of image information of the camera; because the image acquisition area corresponding to the previous frame of the camera is different from the image acquisition area corresponding to the current frame of the camera, the image information of the previous frame is different from the image information of the current frame.
Step S802: and acquiring a third feature point corresponding to the first feature point in the current frame image by using an optical flow algorithm.
Using an optical flow algorithm, acquire the third feature points X_(i,n) (1 ≤ n ≤ N) in the i-th frame image corresponding to the first feature points Y_(i-1,n).
Step S803: and calculating the difference between the re-projection prior value and the third characteristic point to obtain a re-projection error constraint.
Obtain the difference between the reprojection prior value π(q_i*(x_(i-1,n) − p_i)) and the third feature points X_(i,n). Assuming the number of feature points in the image is N, the reprojection error constraint is
Σ_{n=1}^{N} ||π(q_i*(x_(i-1,n) − p_i)) − X_(i,n)||².
The embodiment can realize the reprojection error constraint of the camera, and the method has small calculation amount and can improve the calculation efficiency.
Step S103: and optimizing the pose information of the current frame by using prior constraint.
Specifically, iterative optimization is performed on the current frame pose information using the prior constraints. The current estimate in the prior constraints used for each optimization pass is the current frame pose information after the previous pass, and the optimization result after each pass serves as the current estimate for the next; the current estimate used for the first optimization is the prior value.
The position and posture information of the previous frame of the camera refers to position and posture information of the previous frame after iterative optimization.
From the analysis, the prior constraints comprise any one or any combination of motion constraints, height constraints, orientation constraints and reprojection error constraints, so that the pose information of the current frame of the camera can be optimized by adopting any one or any combination of the prior constraints according to the actual application scene to output the optimized pose information.
The present embodiment may implement step S103 by the method as shown in fig. 9. The method of this embodiment specifically includes steps S901 to S903.
Step S901: weights are set for the position coordinate constraint, the height constraint, the orientation constraint and the reprojection error constraint, respectively.
Respectively a motion constraint [ pi-2p(i-1)+p(i-2)]xySetting weights
Figure BDA0002992695100000142
To a high degree of constraint (p)iz-p(i-1)z) Setting weights
Figure BDA0002992695100000143
Is constrained log (q) to orientationi_atti *·qi) Setting weights
Figure BDA0002992695100000144
For reprojection error constraints
Figure BDA0002992695100000145
Setting weights
Figure BDA0002992695100000146
Step S902: a weighted sum of a position coordinate constraint, a height constraint, an orientation constraint, and a reprojection error constraint is obtained based on the weights.
Obtaining the above-mentioned constraint weighted sum
Figure BDA0002992695100000151
Step S903: and carrying out nonlinear optimization on the weighted sum, and outputting the optimized pose information of the camera.
The embodiment can adopt a nonlinear least square method to obtain the weighted sum of position coordinate constraint, height constraint, orientation constraint and reprojection error constraint as the minimum value
Figure BDA0002992695100000152
And taking the corresponding pose information of the camera as the optimized pose information of the current frame.
In other embodiments, the Levenberg-Marquardt nonlinear optimization method, or the like, may also be employed.
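The weighted least-squares step can be illustrated with a minimal Gauss-Newton loop. This is a sketch, not the embodiment's solver: it optimizes only the camera position over a toy cost combining a motion prior and a height prior, uses a forward-difference numeric Jacobian, and all weights and prior values are hypothetical. A production system would more likely use Levenberg-Marquardt with analytic Jacobians.

```python
import numpy as np

def gauss_newton(residual_fn, x0, iters=10):
    """Minimal Gauss-Newton loop with a forward-difference Jacobian.
    Minimizes sum(r(x)^2) for a stacked residual vector r(x)."""
    x = np.asarray(x0, dtype=float)
    eps = 1e-7
    for _ in range(iters):
        r = residual_fn(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual_fn(x + dx) - r) / eps
        x = x + np.linalg.lstsq(J, -r, rcond=None)[0]
    return x

# Toy cost: weighted motion prior on (x, y) plus a height prior on z.
w_motion, w_height = 1.0, 10.0          # hypothetical weights
xy_prior = np.array([0.4, 0.6])          # 2*p_(i-1) - p_(i-2), hypothetical
z_prior = 1.5                            # p_(i-1)z, hypothetical

def residuals(p):
    return np.concatenate([
        np.sqrt(w_motion) * (p[:2] - xy_prior),   # motion constraint residual
        [np.sqrt(w_height) * (p[2] - z_prior)],   # height constraint residual
    ])

p_opt = gauss_newton(residuals, np.zeros(3))
```

Because this toy cost is linear in the position, the loop converges essentially in one iteration to the prior values (0.4, 0.6, 1.5); the real cost, with the orientation and reprojection terms, is nonlinear and needs several iterations.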
Different from the prior art, in the embodiment, the prior value of the pose information of the current frame of the camera is obtained through at least the pose information of the previous frame and/or the pose information of the current frame of the camera, and the pose information of the current frame is optimized by using the prior value and the prior constraint generated by the current estimation value of the pose information of the current frame of the camera, so as to obtain the pose information of the current frame after optimization. Therefore, the pose information of the current frame of the camera can be optimized only through the pose information of at least the previous frame and/or the pose information of the current frame of the camera, the calculated amount can be reduced, the calculation time consumption of the pose optimization method can be shortened, and the calculation efficiency can be improved; in addition, each frame of pose information of the camera of the embodiment can be optimized by using at least the pose information of the previous frame and/or the pose information of the current frame, so that each frame of pose information of the camera can be optimized, continuous tracking of the pose information of the camera is realized, and the robustness of the pose optimization method can be improved.
Further, when i is 1, i.e. for the 1st frame, the camera pose is set to {(0, 0, h), q_1}; when i is 2, the initial value (i.e. the prior value) of the 2nd frame pose information of the camera is set to {(0, 0, h), q_1}, and the weight w_1 of the motion constraint is set to zero, i.e. no motion constraint is applied to the 2nd frame pose information; when i ≥ 3, the weight w_1 of the motion constraint, the weight w_2 of the height constraint, the weight w_3 of the orientation constraint and the weight w_4 of the reprojection error constraint are all set to non-zero values.
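This frame-indexed weight schedule can be sketched as follows. The default weight values are hypothetical placeholders; the point is only the structure: frame 1 uses the fixed initial pose, frame 2 disables the motion constraint (it needs two previous frames), and frames i ≥ 3 use all four non-zero weights.

```python
def constraint_weights(i, w1=1.0, w2=1.0, w3=1.0, w4=1.0):
    """Weight schedule by frame index, sketched from the text.
    For i == 1 the pose is fixed to the initial value, so the weights are
    unused; for i == 2 the motion constraint is disabled; for i >= 3 all
    four constraint weights are non-zero. Default values are hypothetical."""
    if i == 2:
        return {"motion": 0.0, "height": w2, "orientation": w3, "reproj": w4}
    return {"motion": w1, "height": w2, "orientation": w3, "reproj": w4}
```

In practice the four weights would be tuned to balance the sensor noise of each constraint rather than left at equal values.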
The application further provides a pose optimization device, as shown in fig. 10, fig. 10 is a schematic structural diagram of an embodiment of the pose optimization device. The pose optimization apparatus 100 of the present embodiment includes: an acquisition module 111, a generation module 112 and an optimization module 113; the obtaining module 111 is configured to obtain pose information of a current frame of the camera and pose information of at least a previous frame of the camera, and obtain a prior value of pose information of the current frame of the camera by using the pose information of at least the previous frame and/or the pose information of the current frame of the camera; the generating module 112 is coupled to the obtaining module 111, and configured to generate a priori constraint on the pose information of the current frame by using the priori value and the current estimated value of the pose information of the current frame of the camera; the optimization module 113 is coupled to the generation module 112 for optimizing the pose information of the current frame by using the a priori constraints.
Different from the prior art, the pose optimization device 100 of the embodiment can optimize the pose information of the current frame of the camera only through at least the pose information of the previous frame and/or the pose information of the current frame of the camera, and can reduce the calculation amount, thereby shortening the calculation time of the pose optimization method and further improving the calculation efficiency; and each frame of pose information of the camera can be optimized by utilizing at least the pose information of the previous frame and/or the pose information of the current frame, so that each frame of pose information of the camera can be optimized, the continuous tracking of the pose information of the camera is realized, and the robustness of the pose optimization method can be improved.
The pose optimization apparatus 100 of this embodiment is also used to implement the pose optimization method, which is not described herein.
The present application further provides an electronic device, as shown in fig. 11, fig. 11 is a schematic structural diagram of an embodiment of the electronic device of the present application. The electronic device 80 of the present embodiment includes a processor 81, a memory 82, an input-output device 83, and a bus 84.
The processor 81, the memory 82 and the input/output device 83 are respectively connected to the bus 84, the memory 82 stores program data, and the processor 81 is configured to execute the program data to implement the pose optimization method according to the above embodiment.
In the present embodiment, the processor 81 may also be referred to as a CPU (Central Processing Unit). The processor 81 may be an integrated circuit chip having signal processing capabilities. Processor 81 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor 81 may be any conventional processor or the like.
The present application further provides a computer-readable storage medium, as shown in fig. 12, fig. 12 is a schematic structural diagram of an embodiment of the computer-readable storage medium of the present application. The computer-readable storage medium 121 has stored thereon program instructions 122, and the program instructions 122, when executed by a processor (not shown), implement the pose optimization method described above.
The computer-readable storage medium 121 of this embodiment may be, but is not limited to, a usb disk, an SD card, a PD optical drive, a removable hard disk, a high-capacity floppy drive, a flash memory, a multimedia memory card, a server, and the like.
Different from the prior art, the pose optimization method in the embodiment of the application comprises the following steps: acquiring current frame pose information and at least previous frame pose information of a camera, and acquiring a prior value of the current frame pose information of the camera by using the at least previous frame pose information and/or the current frame pose information; generating prior constraint on the pose information of the current frame by using the prior value and the current estimated value of the pose information of the current frame of the camera; and optimizing the pose information of the current frame by using prior constraint. The method comprises the steps of obtaining a priori value of pose information of a current frame of the camera through at least previous frame pose information and/or current frame pose information of the camera, and optimizing the pose information of the current frame by using the priori value and priori constraint generated by a current estimation value of the pose information of the current frame of the camera to obtain optimized pose information of the current frame. Therefore, the pose information of the current frame of the camera can be optimized only through the pose information of at least the previous frame and/or the pose information of the current frame of the camera, the calculated amount can be reduced, the calculation time consumption of the pose optimization method can be shortened, and the calculation efficiency of the pose optimization method can be improved; in addition, each frame of pose information of the camera can be optimized by utilizing at least one frame of pose information of the previous frame and/or the current frame of pose information, so that each frame of pose information of the camera can be optimized, continuous tracking of the pose information of the camera is realized, and the robustness of the pose optimization method can be improved.
In addition, if the above functions are implemented in the form of software functions and sold or used as a standalone product, the functions may be stored in a storage medium readable by a mobile terminal, that is, the present application also provides a storage device storing program data, which can be executed to implement the method of the above embodiments, the storage device may be, for example, a usb disk, an optical disk, a server, etc. That is, the present application may be embodied as a software product, which includes several instructions for causing an intelligent terminal to perform all or part of the steps of the methods described in the embodiments.
In the description of the present application, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be viewed as implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device (e.g., a personal computer, server, network device, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions). For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (11)

1. A pose optimization method, characterized by comprising:
acquiring pose information of a current frame of a camera and pose information of at least one previous frame of the camera, and acquiring a prior value of the pose information of the current frame of the camera by using the pose information of at least one previous frame and/or the pose information of the current frame of the camera;
generating a priori constraint on the pose information of the current frame by using the priori value and the current estimated value of the pose information of the current frame of the camera;
and optimizing the pose information of the current frame by using the prior constraint.
2. The pose optimization method according to claim 1, wherein the optimizing the current frame pose information using the a priori constraints comprises:
and performing iterative optimization on the pose information of the current frame by using the prior constraint, wherein the current estimation value in the prior constraint used in the optimization is the pose information of the current frame after the last optimization, and the current estimation value in the prior constraint used in the first optimization is the prior value.
3. A pose optimization method according to claim 1, wherein the pose information comprises position coordinates of the camera, the a priori constraints comprise motion constraints, the a priori values comprise position coordinate a priori values, and the step of obtaining at least previous frame pose information and current frame pose information of the camera and obtaining a priori values of current frame pose information of the camera using at least the previous frame pose information and/or the current frame pose information comprises:
acquiring the first two frames of pose information of the camera;
respectively acquiring a first position coordinate and a second position coordinate of the camera in a preset coordinate system from the first two frames of pose information;
calculating the first position coordinate and the second position coordinate to obtain a prior value of the position coordinate;
the step of generating a prior constraint on the current frame pose information using the prior value and the current estimate of the current frame pose information of the camera comprises:
acquiring a current estimated value of a current frame position coordinate of the camera;
and calculating the difference value between the current estimation value of the current frame position coordinate and the position coordinate prior value to obtain the motion constraint.
4. A pose optimization method according to claim 3, wherein the step of calculating the first position coordinate and the second position coordinate to obtain the position coordinate prior value comprises:
acquiring a coordinate difference value of the second position coordinate and the first position coordinate;
and acquiring the sum of the coordinate difference and the second position coordinate to obtain the position coordinate prior value.
5. A pose optimization method according to claim 1, wherein the pose information comprises a height of the camera, the a priori constraints comprise height constraints, the a priori values comprise height a priori values, and the step of obtaining the a priori values of the pose information of the current frame of the camera using at least the pose information of the previous frame and/or the pose information of the current frame comprises:
acquiring the height of the previous frame of the camera from the pose information of the previous frame as the height prior value;
the step of generating a prior constraint on the current frame pose information using the prior value and the current estimate of the current frame pose information of the camera comprises:
acquiring a current estimated value of the current frame height of the camera;
and calculating a height difference value between the current estimated value of the height of the current frame and the height prior value to obtain the height constraint.
6. A pose optimization method according to claim 1, wherein the pose information comprises orientation information of the camera, the a priori constraints comprise orientation constraints, the a priori values comprise orientation a priori values, and the step of obtaining the a priori values of the pose information of the current frame of the camera using at least the pose information of the previous frame and/or the pose information of the current frame comprises:
acquiring orientation information of the camera from the pose information of the current frame as the orientation prior value;
the step of generating a prior constraint on the current frame pose information using the prior value and the current estimate of the current frame pose information of the camera comprises:
acquiring a current estimated value of orientation information of a current frame of the camera;
calculating a logarithm of a product between a conjugate of the orientation prior value and a current estimate of the current frame orientation information to obtain the orientation constraint.
7. A pose optimization method according to claim 1, wherein the a priori constraints comprise reprojection error constraints, the a priori values comprise reprojection a priori values, the step of obtaining at least a previous frame of pose information and a current frame of pose information for a camera, and obtaining a priori value for a current frame of pose information for the camera using at least the previous frame of pose information and/or the current frame of pose information comprises:
acquiring the position and posture information of the previous frame, the image information of the previous frame and the current estimation value of the position and posture information of the current frame of the camera;
acquiring a first characteristic point of the previous frame of image information;
projecting the first feature point to a Z plane of a world coordinate system by using the internal reference of the camera and the pose information of the previous frame to obtain a second feature point;
projecting the second feature point to a pixel coordinate system of a current frame by using the internal reference of the camera and the current estimated value of the pose information of the current frame of the camera to obtain a re-projection prior value;
the step of obtaining at least current frame pose information of the camera and generating prior constraints on the current frame pose information by using the prior values comprises:
acquiring current frame image information of the camera;
acquiring a third feature point corresponding to the first feature point in the current frame image by using an optical flow algorithm;
calculating a difference between the reprojection prior value and the third feature point to obtain the reprojection error constraint.
8. The pose optimization method according to any one of claims 1 to 7, wherein the prior constraints comprise a motion constraint, a height constraint, an orientation constraint and a reprojection error constraint,
and the step of performing iterative optimization on the current frame pose information by using the prior constraints comprises:
setting a weight for each of the motion constraint, the height constraint, the orientation constraint and the reprojection error constraint;
obtaining a weighted sum of the motion constraint, the height constraint, the orientation constraint and the reprojection error constraint based on the weights;
and performing nonlinear optimization on the weighted sum, and outputting the optimized pose information of the camera.
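The weighted-sum step of claim 8 can be viewed as generic weighted least squares: each constraint contributes a residual, each residual is scaled by the square root of its weight so the squared norm of the stacked vector equals the weighted sum, and a nonlinear solver minimizes it. A hypothetical numpy sketch (Gauss-Newton with a numerical Jacobian stands in for the unspecified solver; the pose parameterization and the toy residual functions are illustrative assumptions, not the patent's actual constraints):

```python
import numpy as np

def weighted_residuals(pose, residual_fns, weights):
    """Stack constraint residuals scaled by sqrt(weight), so that
    ||r||^2 equals the weighted sum of squared constraint errors."""
    return np.concatenate([np.sqrt(w) * np.atleast_1d(f(pose))
                           for f, w in zip(residual_fns, weights)])

def gauss_newton(pose0, residual_fns, weights, iters=20, eps=1e-6):
    """Minimize the weighted sum with Gauss-Newton and a numerical Jacobian."""
    pose = np.asarray(pose0, dtype=float)
    for _ in range(iters):
        r = weighted_residuals(pose, residual_fns, weights)
        J = np.zeros((r.size, pose.size))
        for j in range(pose.size):
            d = np.zeros_like(pose); d[j] = eps
            J[:, j] = (weighted_residuals(pose + d, residual_fns, weights) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pose = pose + step
        if np.linalg.norm(step) < 1e-10:
            break
    return pose

# Illustrative use: pose = [x, y, z, yaw]; the priors are made-up numbers.
residuals = [
    lambda p: p[:2] - np.array([1.0, 2.0]),  # motion constraint (position prior)
    lambda p: p[2] - 0.5,                    # height constraint
    lambda p: p[3] - 0.1,                    # orientation constraint
    lambda p: p[:2] - np.array([1.0, 2.0]),  # stand-in reprojection term
]
optimized = gauss_newton(np.zeros(4), residuals, weights=[1.0, 2.0, 2.0, 1.0])
```

Folding the weights into the residuals keeps the solver generic: raising one weight pulls the optimum toward satisfying that constraint more tightly, which is the tuning knob the claim's per-constraint weights provide.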
9. A pose optimization apparatus, characterized by comprising:
an acquisition module, configured to acquire current frame pose information and at least one piece of previous frame pose information of a camera, and obtain a prior value of the current frame pose information of the camera by using the at least one piece of previous frame pose information and/or the current frame pose information;
a generation module, coupled to the acquisition module and configured to generate a prior constraint on the current frame pose information by using the prior value and a current estimated value of the current frame pose information of the camera;
and an optimization module, coupled to the generation module and configured to optimize the current frame pose information by using the prior constraint.
10. An electronic device, characterized in that the electronic device comprises a memory and a processor coupled to each other, wherein the processor is configured to execute program data stored in the memory to implement the pose optimization method according to any one of claims 1 to 8.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium stores program data which, when executed, implements the pose optimization method according to any one of claims 1 to 8.
CN202110320659.5A 2021-03-25 2021-03-25 Pose optimization device and method, electronic device and computer readable storage medium Pending CN113034582A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110320659.5A CN113034582A (en) 2021-03-25 2021-03-25 Pose optimization device and method, electronic device and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN113034582A true CN113034582A (en) 2021-06-25

Family

ID=76473860


Country Status (1)

Country Link
CN (1) CN113034582A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780576A * 2016-11-23 2017-05-31 Beihang University A camera pose estimation method for RGBD data streams
EP3206163A1 * 2016-02-11 2017-08-16 AR4 GmbH Image processing method, mobile device and method for generating a video image database
CN109307508A * 2018-08-29 2019-02-05 Hefei Institutes of Physical Science, Chinese Academy of Sciences A panoramic inertial-navigation SLAM method based on multiple key frames
CN110246147A * 2019-05-14 2019-09-17 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Visual-inertial odometry method, visual-inertial odometry device and mobile device
CN110322500A * 2019-06-28 2019-10-11 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Optimization method and device for simultaneous localization and mapping, medium and electronic device
CN110335316A * 2019-06-28 2019-10-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Pose determination method, apparatus, medium and electronic device based on depth information
CN110631554A * 2018-06-22 2019-12-31 Beijing Jingdong Shangke Information Technology Co., Ltd. Robot pose determination method and device, robot and readable storage medium
CN110853085A * 2018-08-21 2020-02-28 Shenzhen Horizon Robotics Technology Co., Ltd. Semantic SLAM-based mapping method and device, and electronic device
CN111415387A * 2019-01-04 2020-07-14 Nanjing Institute of Advanced Artificial Intelligence Co., Ltd. Camera pose determination method and device, electronic device and storage medium
CN111445526A * 2020-04-22 2020-07-24 Tsinghua University Estimation method, estimation device and storage medium for pose between image frames
CN112444242A * 2019-08-31 2021-03-05 Beijing Horizon Robotics Technology R&D Co., Ltd. Pose optimization method and device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAOMIN LIU et al.: "Robust Keyframe-based Dense SLAM with an RGB-D Camera", ARXIV, pages 1-12 *
ZHAO LIANGYU: "Stereo visual-inertial SLAM algorithm based on point-line feature fusion", Acta Aeronautica et Astronautica Sinica, pages 1-15 *
HAN JIANYING; WANG HAO; FANG BAOFU: "Visual SLAM algorithm with a minimized-photometric-error prior", Journal of Chinese Computer Systems, no. 10 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113793251A * 2021-08-13 2021-12-14 Beijing Megvii Technology Co., Ltd. Pose determination method and device, electronic device and readable storage medium
WO2023016182A1 * 2021-08-13 2023-02-16 Beijing Megvii Technology Co., Ltd. Pose determination method and apparatus, electronic device, and readable storage medium
CN113607160A * 2021-08-24 2021-11-05 Hunan Goke Microelectronics Co., Ltd. Visual positioning recovery method and device, robot and readable storage medium
CN113607160B * 2021-08-24 2023-10-31 Hunan Goke Microelectronics Co., Ltd. Visual positioning recovery method, device, robot and readable storage medium

Similar Documents

Publication Publication Date Title
CN108805917B (en) Method, medium, apparatus and computing device for spatial localization
CN114399597B (en) Method and device for constructing scene space model and storage medium
EP3326156B1 (en) Consistent tessellation via topology-aware surface tracking
CN113029128B (en) Visual navigation method and related device, mobile terminal and storage medium
CN109754464B (en) Method and apparatus for generating information
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
CN105635588A (en) Image stabilization method and device
CN111161398B (en) Image generation method, device, equipment and storage medium
JP7411114B2 (en) Spatial geometric information estimation model generation method and device
CN113034582A (en) Pose optimization device and method, electronic device and computer readable storage medium
CN114494388B (en) Three-dimensional image reconstruction method, device, equipment and medium in large-view-field environment
CN115294275A (en) Method and device for reconstructing three-dimensional model and computer readable storage medium
US10600202B2 (en) Information processing device and method, and program
CN113643414A (en) Three-dimensional image generation method and device, electronic equipment and storage medium
JP2024507727A (en) Rendering a new image of a scene using a geometric shape recognition neural network conditioned on latent variables
US11494961B2 (en) Sticker generating method and apparatus, and medium and electronic device
CN113516697B (en) Image registration method, device, electronic equipment and computer readable storage medium
US11514645B2 (en) Electronic device for providing visual localization based on outdoor three-dimension map information and operating method thereof
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
CN113610702A (en) Picture construction method and device, electronic equipment and storage medium
US8872832B2 (en) System and method for mesh stabilization of facial motion capture data
CN109816791B (en) Method and apparatus for generating information
CN115294280A (en) Three-dimensional reconstruction method, apparatus, device, storage medium, and program product
CN114049403A (en) Multi-angle three-dimensional face reconstruction method and device and storage medium
CN115937299B (en) Method for placing virtual object in video and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination