CN111507132B - Positioning method, device and equipment - Google Patents


Info

Publication number
CN111507132B
CN111507132B (application CN201910100000.1A)
Authority
CN
China
Prior art keywords
pose
frame
representing
current frame
error
Prior art date
Legal status
Active
Application number
CN201910100000.1A
Other languages
Chinese (zh)
Other versions
CN111507132A (en)
Inventor
龙学雄
Current Assignee
Hangzhou Hikrobot Co Ltd
Original Assignee
Hangzhou Hikrobot Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikrobot Co Ltd filed Critical Hangzhou Hikrobot Co Ltd
Priority to CN201910100000.1A priority Critical patent/CN111507132B/en
Publication of CN111507132A publication Critical patent/CN111507132A/en
Application granted granted Critical
Publication of CN111507132B publication Critical patent/CN111507132B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the invention provides a positioning method, device and equipment, applied to an electronic device. The method includes: respectively acquiring the poses of the electronic device detected by the inertial sensor when the image collector collects the current frame and the reference frame; determining the relative pose that minimizes the sum of the photometric error between the reference frame and the current frame and the relative motion error detected by the inertial sensor; and obtaining the pose of the electronic device at the current frame from the pose corresponding to the reference frame and the relative pose. The optical flow of the current frame can also be obtained; when the optical flow is larger than a preset threshold, the current frame is taken as a new key frame, the pose of each previously stored key frame is optimized by minimizing the sum of the photometric errors and relative motion errors between adjacent key frames, and the new key frame with the optimized pose is taken as the new reference frame. The scheme provided by the embodiment of the invention can improve the robustness of the positioning result in extreme environments.

Description

Positioning method, device and equipment
Technical Field
The present invention relates to the field of positioning navigation technologies, and in particular, to a positioning method, apparatus, and device.
Background
Autonomous positioning is a core component of an autonomous navigation system of a robot, and the robot can realize functions of obstacle avoidance, autonomous navigation and the like on the basis of autonomous positioning.
In the prior art, a robot can perform autonomous positioning using a visual odometer. Specifically: a camera mounted on the robot captures images in real time, and the position and posture of the robot, i.e., its pose, are estimated by a visual odometer (a direct-method odometer or a feature-point-method odometer), thereby completing autonomous positioning.
Although a robot in the prior art can perform autonomous positioning using a visual odometer, the camera is easily affected by factors such as illumination changes, moving objects, camera occlusion and low-texture scenes when capturing images, so the pose estimated by the visual odometer may deviate or even be completely lost. That is, when the robot performs autonomous positioning using the above method, the positioning result is affected by these extreme environmental factors, and the robustness is poor.
Disclosure of Invention
The embodiment of the invention aims to provide a positioning method, a positioning device and positioning equipment so as to improve the robustness of a positioning result. The specific technical scheme is as follows:
In one aspect of the present invention, a positioning method is provided and applied to an electronic device, where the electronic device is provided with an image collector and an inertial sensor for detecting inertial motion information of the electronic device, and the method includes:
acquiring a current frame acquired by the image acquisition device;
acquiring the current frame pose of the electronic equipment detected by the inertial sensor when the image collector collects the current frame;
acquiring the pose of each reference frame of the electronic device, which is detected by the inertial sensor when the image collector collects the reference frames;
determining a relative pose that minimizes the sum of a first photometric error and a relative motion error between the reference frame and the current frame, wherein the first photometric error varies with the relative pose and is the difference in gray values between the current frame and the reference frame, the relative pose represents the change between a first pose and a second pose, the first pose is the pose of the electronic device detected when the image collector collects the current frame, the second pose is the pose detected when the image collector collects the reference frame, and the relative motion error is an error calculated from the current frame pose, the reference frame pose and the relative pose;
and calculating the first pose from the determined relative pose and the second pose, thereby positioning the electronic device.
Optionally, before the step of determining the relative pose that minimizes the sum of the first photometric error and the relative motion error between the reference frame and the current frame, the method further comprises:
obtaining an image quality factor of the current frame, wherein the image quality factor is used for representing gradient changes of gray values of all pixel points in the reference frame;
assigning a weight λ_ef to the relative motion error according to the assignment rule that the larger the value of the image quality factor, the smaller the weight assigned to the relative motion error;
Accordingly, the step of determining a relative pose that minimizes the sum of the photometric error and the relative motion error between the reference frame and the current frame comprises:
determining the relative pose that minimizes L_E + λ_ef · MV_E, where MV_E represents the relative motion error between the current frame and the reference frame and L_E represents the photometric error between the current frame and the reference frame.
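To make the objective above concrete, here is a minimal, hypothetical sketch of minimizing L_E + λ_ef · MV_E over candidate relative poses. It reduces the relative pose to a one-dimensional pixel shift and uses a grid search; all names (`photometric_error`, `estimate_relative_pose`, the toy data) and the 1-D simplification are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def photometric_error(ref, cur, shift):
    # L_E: sample the reference frame at positions displaced by the candidate
    # relative pose (here a 1-D pixel shift) and compare gray values.
    x = np.arange(len(cur))
    warped = np.interp(x - shift, np.arange(len(ref)), ref)
    return float(np.sum((cur - warped) ** 2))

def motion_error(shift, imu_shift):
    # MV_E: difference between the candidate relative pose and the relative
    # motion reported by the inertial sensor.
    return (shift - imu_shift) ** 2

def estimate_relative_pose(ref, cur, imu_shift, lam_ef,
                           search=np.linspace(-5, 5, 201)):
    # Minimize L_E + lambda_ef * MV_E over the candidate relative poses.
    costs = [photometric_error(ref, cur, s) + lam_ef * motion_error(s, imu_shift)
             for s in search]
    return float(search[int(np.argmin(costs))])

# Toy data: the current frame is the reference frame shifted by 2 pixels,
# while the (noisy) inertial sensor reports a shift of 1.8 pixels.
rng = np.random.default_rng(0)
ref = rng.random(64)
cur = np.interp(np.arange(64) - 2.0, np.arange(64), ref)
est = estimate_relative_pose(ref, cur, imu_shift=1.8, lam_ef=0.1)
```

With a small λ_ef the photometric term dominates and the estimate lands on the true 2-pixel shift rather than the slightly biased inertial measurement, which is the intended behavior of the fusion.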
Optionally, the assigning a weight λ_ef to the relative motion error according to the assignment rule that the larger the value of the image quality factor, the smaller the weight assigned to the relative motion error, comprises:
determining the weight λ_ef of the relative motion error according to the motion parameters represented by the current frame pose and the image quality factor.
Optionally, the determining the weight λ_ef of the relative motion error according to the motion parameters represented by the current frame pose and the image quality factor comprises:
calculating a first weight according to the motion parameters of the current frame pose representation;
calculating a second weight according to the image quality factor;
determining the weight λ_ef of the relative motion error according to the first weight and the second weight.
Optionally, the step of calculating the first weight according to the motion parameter represented by the pose of the current frame includes:
acquiring linear acceleration, centripetal acceleration and speed of the electronic equipment when the image collector collects the current frame;
and calculating a first weight value by using the obtained linear acceleration, centripetal acceleration and speed.
Optionally, the calculating the first weight using the obtained linear acceleration, centripetal acceleration, and velocity includes:
the first weight is calculated using the following expression:
λ_e = α · exp(-ω · (β_1 · a_l + β_2 · a_r + β_3 · v))
where λ_e represents the first weight, a_l represents the linear acceleration of the electronic device when the image collector collects the current frame, a_r represents the centripetal acceleration of the electronic device when the image collector collects the current frame, v represents the speed of the electronic device when the image collector collects the current frame, and α, ω, β_1, β_2 and β_3 are preset coefficients.
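The expression above is straightforward to implement. The sketch below follows the formula directly; the coefficient values are placeholders, since the patent only says they are preset.

```python
import math

def first_weight(a_l, a_r, v, alpha=1.0, omega=0.5, betas=(1.0, 1.0, 0.2)):
    # lambda_e = alpha * exp(-omega * (beta1*a_l + beta2*a_r + beta3*v)).
    # The faster or more aggressively the device moves, the smaller the
    # weight, down-weighting the relative motion term during rapid motion.
    b1, b2, b3 = betas
    return alpha * math.exp(-omega * (b1 * a_l + b2 * a_r + b3 * v))
```

At rest (all motion parameters zero) the weight equals α, and it decays monotonically as linear acceleration, centripetal acceleration, or speed grows.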
Optionally, the step of calculating the second weight according to the image quality factor includes:
obtaining a quality factor of the target pixel points selected using a sliding window, wherein the target pixel points are: pixel points in the current frame whose gray-value difference from adjacent pixel points is larger than a preset value;
determining the proportion of the remaining mature points in the sliding window after the target pixel point is selected by using the sliding window, wherein the mature points are as follows: pixel points of known depth information;
a second weight is calculated based on the obtained quality factor and the determined ratio.
Optionally, the calculating the second weight according to the obtained quality factor and the determined proportion includes:
calculating the second weight using the following expression:
λ_c = f_1(q_image_quality)
where λ_c represents the second weight; f_1(·) is an exponential function with a preset base that takes the image quality factor as its argument; q_image_quality represents the image quality factor, computed from the quality factor p_pix_quality and the proportion p_ph_ratio (the original gives this expression only as an image); σ_g represents the standard deviation of the grid gradient thresholds used when selecting the target pixel points with the sliding window, and ḡ their mean; n_ph represents the number of mature points remaining in the sliding window after the target pixel points are selected, and n_desired represents the desired number of mature points in the sliding window after selection.
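Since the exact combination of p_pix_quality and p_ph_ratio appears only as an image in the original, the sketch below makes explicit assumptions: p_pix_quality is taken as the dispersion σ_g/ḡ of the grid gradient thresholds, p_ph_ratio as n_ph/n_desired, q_image_quality as their product, and f_1 as exponentiation with a preset base. Every one of these choices is an assumption for illustration only.

```python
import numpy as np

def second_weight(grid_gradient_thresholds, n_ph, n_desired, base=2.0):
    # p_pix_quality: dispersion of the grid gradient thresholds used when
    # selecting target pixels (assumed here to be std / mean).
    g = np.asarray(grid_gradient_thresholds, dtype=float)
    p_pix_quality = g.std() / g.mean()
    # p_ph_ratio: share of mature points (pixels with known depth) left in
    # the sliding window relative to the desired count.
    p_ph_ratio = n_ph / n_desired
    # q_image_quality combines both factors (assumed: their product), and
    # f1 is an exponential function with a preset base.
    q_image_quality = p_pix_quality * p_ph_ratio
    return base ** q_image_quality
```

With uniform gradient thresholds the dispersion is zero, so the weight collapses to base**0 = 1; more varied thresholds and a fuller window push the weight above 1.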
Optionally, after the step of calculating the first pose according to the determined relative pose and the second pose and implementing positioning of the electronic device, the method further includes:
obtaining an optical flow of the current frame;
under the condition that the optical flow is larger than a preset threshold value, taking the current frame as a new key frame;
taking each of the poses corresponding to a preset number of previously stored key frames, together with the pose corresponding to the new key frame, as an initial pose;
adjusting each initial pose and determining the adjusted initial poses that minimize the sum of a first residual energy, a second residual energy and a third residual energy, wherein the first residual energy represents the sum of the second photometric errors between every two adjacent key frames under the adjusted initial poses, a second photometric error varying with the adjusted initial poses and being the difference in gray values between two adjacent key frames; the second residual energy represents the sum, over the key frames, of the motion information represented by the detected pose of the electronic device at each key frame after conversion into the planar coordinate system; and the third residual energy represents the sum of the relative motion constraints, where for two adjacent key frames the relative motion constraint is a constraint calculated from the poses of the electronic device detected by the inertial sensor when the image collector collected those two key frames;
The new key frame is taken as a new reference frame.
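The key-frame promotion step above can be sketched as follows. The flow measure (mean displacement of tracked points) and all names are illustrative assumptions; the patent does not specify how the optical flow magnitude is computed.

```python
import numpy as np

def mean_optical_flow(prev_pts, cur_pts):
    # Average pixel displacement of tracked points between the reference
    # frame and the current frame, used as the optical-flow magnitude.
    return float(np.mean(np.linalg.norm(cur_pts - prev_pts, axis=1)))

def maybe_promote_keyframe(prev_pts, cur_pts, threshold, keyframes, cur_frame_id):
    # When the flow exceeds the preset threshold, the current frame becomes
    # a new key frame (and, after pose optimization, the new reference frame).
    flow = mean_optical_flow(prev_pts, cur_pts)
    if flow > threshold:
        keyframes.append(cur_frame_id)
        return True
    return False
```

Gating key-frame creation on flow magnitude keeps the sliding window sparse when the device is nearly still and dense when it moves quickly.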
Optionally, the second residual energy is obtained by:
determining the pose of an image collector under a world coordinate system;
calculating an error of the plane motion constraint according to the determined pose;
and calculating second residual energy according to the calculated error of the planar motion constraint.
Optionally, the error of the planar motion constraint and the second residual energy are calculated using the following expressions (the original gives these only as images; the forms below are reconstructed from the symbol definitions):
e_g_i = X^(-1) · (T_ec · T_c2w_i · T_ce)
E_g = Σ_{i=1..n} e_g_i^T · Ω_g · e_g_i
where E_g represents the second residual energy, Ω_g represents a weight matrix, n represents the number of key frames, e_g_i represents the error of the planar motion constraint, X^(-1) represents the observation of the planar motion, T_ec represents the conversion relation between the pose of the image collector and the pose of the inertial sensor, T_c2w_i represents the pose of the image collector in the world coordinate system, and T_ce represents the conversion relation between the pose of the inertial sensor and the pose of the image collector.
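The quadratic accumulation of the planar constraint errors is the standard weighted sum-of-squares form. The sketch below assumes each per-key-frame error e_g_i has already been computed (e.g. as the out-of-plane components such as height, roll and pitch of the key-frame pose); only the accumulation E_g = Σ e_g_i^T Ω_g e_g_i is shown.

```python
import numpy as np

def second_residual_energy(plane_errors, omega_g):
    # E_g = sum_i e_g_i^T * Omega_g * e_g_i over the n key frames, where
    # each e_g_i holds the components of the i-th key-frame pose that
    # violate the planar-motion assumption, and Omega_g weights them.
    return sum(float(e @ omega_g @ e) for e in plane_errors)
```

A key frame whose pose lies exactly in the motion plane contributes zero; the weight matrix Ω_g lets the optimizer penalize, say, height drift more than roll.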
Optionally, the third residual energy is obtained using the following steps:
for two adjacent key frames, calculating the relative motion constraint between them according to the poses of the inertial sensor and the image collector in the world coordinate system at each of the two key frames;
and calculating the third residual energy using the relative motion constraints between every two adjacent key frames.
Optionally, the relative motion constraint between two adjacent key frames and the third residual energy are calculated using the following expressions (the original gives these only as images; the forms below are reconstructed from the symbol definitions):
e_e_th = (T_e2w'_t^(-1) · T_e2w'_h)^(-1) · (T_ec · T_c2w_t · T_ce)^(-1) · (T_ec · T_c2w_h · T_ce)
E_e = Σ λ_th · e_e_th^T · Ω_e · e_e_th
where E_e represents the third residual energy, n represents the number of key frames, λ_th represents the weight, Ω_e represents a weight matrix, e_e_th represents the relative motion error between the t-th and h-th key frames, T_e2w'_t represents the pose of the inertial sensor in the world coordinate system at the t-th key frame, T_e2w'_h represents the pose of the inertial sensor in the world coordinate system at the h-th key frame, T_c2w_t represents the pose of the image collector in the world coordinate system at the t-th key frame, T_c2w_h represents the pose of the image collector in the world coordinate system at the h-th key frame, T_ec represents the conversion relation between the pose of the image collector and the pose of the inertial sensor, and T_ce represents the conversion relation between the pose of the inertial sensor and the pose of the image collector.
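The idea behind the relative motion constraint is to compare the inertial sensor's relative pose between key frames t and h with the image collector's relative pose mapped through the extrinsics T_ec/T_ce. The sketch below uses 4×4 homogeneous transforms; the exact composition is an assumption, since the patent gives the expressions only as images.

```python
import numpy as np

def relative_motion_error(T_imu_t, T_imu_h, T_cam_t, T_cam_h, T_ec, T_ce):
    # Relative pose between key frames t and h as seen by the inertial sensor.
    rel_imu = np.linalg.inv(T_imu_t) @ T_imu_h
    # Relative pose as seen by the image collector, mapped into the inertial
    # frame via the extrinsics (assumed composition).
    rel_cam = np.linalg.inv(T_ec @ T_cam_t @ T_ce) @ (T_ec @ T_cam_h @ T_ce)
    # If the two sensors agree, inv(rel_imu) @ rel_cam is the identity.
    diff = np.linalg.inv(rel_imu) @ rel_cam
    return diff - np.eye(4)

def third_residual_energy(errors, omega_e, weights):
    # E_e = sum over adjacent key-frame pairs of lambda_th * e^T Omega_e e,
    # with each 4x4 matrix error flattened into a 16-vector.
    total = 0.0
    for e, lam in zip(errors, weights):
        v = e.reshape(-1)
        total += lam * float(v @ omega_e @ v)
    return total
```

When both sensors report the same motion the error matrix vanishes and the pair contributes nothing to E_e; disagreement is penalized through Ω_e and the per-pair weight λ_th.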
In still another aspect of the present invention, there is further provided a positioning device applied to an electronic device, where the electronic device is provided with an image collector and an inertial sensor for detecting motion information of the electronic device, the device includes:
The first acquisition module is used for acquiring the current frame acquired by the image acquisition device;
the second acquisition module is used for acquiring the current frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the current frame;
the third acquisition module is used for acquiring the reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the reference frame;
a first determining module, configured to determine a relative pose that minimizes the sum of a first photometric error and a relative motion error between the reference frame and the current frame, wherein the first photometric error varies with the relative pose and is the difference in gray values between the current frame and the reference frame, the relative pose represents the change between a first pose and a second pose, the first pose is the pose of the electronic device detected when the image collector collects the current frame, the second pose is the pose of the electronic device detected when the image collector collects the reference frame, and the relative motion error is an error calculated from the current frame pose, the reference frame pose and the relative pose;
and the calculating module is used for calculating the first pose according to the determined relative pose and the second pose so as to realize the positioning of the electronic equipment.
Optionally, the apparatus further includes:
the first obtaining module is used for obtaining an image quality factor of the current frame, wherein the image quality factor is used for representing gradient changes of gray values of all pixel points in the reference frame;
an allocation module, configured to assign a weight λ_ef to the relative motion error according to the allocation principle that the larger the value of the image quality factor, the smaller the weight assigned to the relative motion error;
correspondingly, the first determining module is configured to determine the relative pose that minimizes L_E + λ_ef · MV_E, where MV_E represents the relative motion error between the current frame and the reference frame and L_E represents the photometric error between the current frame and the reference frame.
Optionally, the allocation module includes:
a determining submodule, configured to determine the weight λ_ef of the relative motion error according to the motion information represented by the current frame pose and the image quality factor.
Optionally, the determining submodule includes:
the first calculating unit is used for calculating a first weight according to the motion parameters of the current frame pose representation;
a second calculating unit, configured to calculate a second weight according to the image quality factor;
a determining unit, configured to determine the weight λ_ef of the relative motion error according to the first weight and the second weight.
Optionally, the first computing unit includes:
a first obtaining subunit, configured to obtain a linear acceleration, a centripetal acceleration, and a speed of the electronic device when the image collector collects the current frame;
and the first calculating subunit calculates a first weight value by using the obtained linear acceleration, centripetal acceleration and speed.
Optionally, the first calculating subunit is specifically configured to calculate the first weight using the following expression:
λ_e = α · exp(-ω · (β_1 · a_l + β_2 · a_r + β_3 · v))
where λ_e represents the first weight, a_l represents the linear acceleration of the electronic device when the image collector collects the current frame, a_r represents the centripetal acceleration of the electronic device when the image collector collects the current frame, v represents the speed of the electronic device when the image collector collects the current frame, and α, ω, β_1, β_2 and β_3 are preset coefficients.
Optionally, the second computing unit includes:
the second obtaining subunit is configured to obtain a quality factor of a target pixel point selected by using the sliding window, where the target pixel point is: the gray difference value between the current frame and the adjacent pixel point is larger than a preset value;
the determining subunit is configured to determine a proportion of remaining mature points in the sliding window after the target pixel point is selected by using the sliding window, where the mature points are: pixel points of known depth information;
And a second calculating subunit for calculating a second weight according to the obtained quality factor and the determined proportion.
The second calculating subunit is specifically configured to calculate the second weight using the following expression:
λ_c = f_1(q_image_quality)
where λ_c represents the second weight; f_1(·) is an exponential function with a preset base that takes the image quality factor as its argument; q_image_quality represents the image quality factor, computed from the quality factor p_pix_quality and the proportion p_ph_ratio (the original gives this expression only as an image); σ_g represents the standard deviation of the grid gradient thresholds used when selecting the target pixel points with the sliding window, and ḡ their mean; n_ph represents the number of mature points remaining in the sliding window after the target pixel points are selected, and n_desired represents the desired number of mature points in the sliding window after selection.
Optionally, the apparatus further includes:
a second obtaining module, configured to obtain an optical flow of the current frame;
a first module, configured to take the current frame as a new key frame when the optical flow is larger than a preset threshold;
a second module, configured to take each of the poses corresponding to a preset number of previously stored key frames, together with the pose corresponding to the new key frame, as an initial pose;
a second determining module, configured to adjust each initial pose and determine the adjusted initial poses that minimize the sum of a first residual energy, a second residual energy and a third residual energy, wherein the first residual energy represents the sum of the second photometric errors between every two adjacent key frames under the adjusted initial poses, a second photometric error varying with the adjusted initial poses and being the difference in gray values between two adjacent key frames; the second residual energy represents the sum, over the key frames, of the motion information of the electronic device represented by the detected pose at each key frame after conversion into the plane coordinate system; and the third residual energy represents the sum of the relative motion constraints, where for two adjacent key frames the relative motion constraint is a constraint calculated from the poses of the electronic device detected by the inertial sensor when the image collector collected those two key frames;
and a third module, configured to take the new key frame as the new reference frame.
Optionally, the second determining module is configured to:
determining the pose of an image collector under a world coordinate system;
calculating an error of the plane motion constraint according to the determined pose;
and calculating second residual energy according to the calculated error of the planar motion constraint.
Optionally, the error of the planar motion constraint and the second residual energy are represented by the following expressions (the original gives these only as images; the forms below are reconstructed from the symbol definitions):
e_g_i = X^(-1) · (T_ec · T_c2w_i · T_ce)
E_g = Σ_{i=1..n} e_g_i^T · Ω_g · e_g_i
where E_g represents the second residual energy, Ω_g represents a weight matrix, n represents the number of key frames, e_g_i represents the error of the planar motion constraint, X^(-1) represents the observation of the planar motion, T_ec represents the conversion relation between the pose of the image collector and the pose of the inertial sensor, T_c2w_i represents the pose of the image collector in the world coordinate system, and T_ce represents the conversion relation between the pose of the inertial sensor and the pose of the image collector.
Optionally, the second determining module is configured to:
for two adjacent key frames, calculate the relative motion constraint between them according to the poses of the inertial sensor and the image collector in the world coordinate system at each of the two key frames;
and calculate the third residual energy using the relative motion constraints between every two adjacent key frames.
Optionally, the relative motion constraint between two adjacent key frames and the third residual energy are represented by the following expressions (the original gives these only as images; the forms below are reconstructed from the symbol definitions):
e_e_th = (T_e2w'_t^(-1) · T_e2w'_h)^(-1) · (T_ec · T_c2w_t · T_ce)^(-1) · (T_ec · T_c2w_h · T_ce)
E_e = Σ λ_th · e_e_th^T · Ω_e · e_e_th
where E_e represents the third residual energy, n represents the number of key frames, λ_th represents the weight, Ω_e represents a weight matrix, e_e_th represents the relative motion error between the t-th and h-th key frames, T_e2w'_t represents the pose of the inertial sensor in the world coordinate system at the t-th key frame, T_e2w'_h represents the pose of the inertial sensor in the world coordinate system at the h-th key frame, T_c2w_t represents the pose of the image collector in the world coordinate system at the t-th key frame, T_c2w_h represents the pose of the image collector in the world coordinate system at the h-th key frame, T_ec represents the conversion relation between the pose of the image collector and the pose of the inertial sensor, and T_ce represents the conversion relation between the pose of the inertial sensor and the pose of the image collector.
In still another aspect of the present invention, an electronic device is provided, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
And the processor is used for realizing any positioning method when executing the program stored in the memory.
In yet another aspect of the present invention, there is also provided a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements any of the positioning methods described above.
Embodiments of the present invention also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the positioning methods described above.
The positioning method, device and equipment provided by the embodiments of the invention can acquire the current frame collected by the image collector; acquire the relative motion information of the electronic device detected by the inertial sensor when the image collector collects the current frame and the reference frame; determine the relative pose that minimizes the sum of the relative motion error and the photometric error; and calculate, from the determined relative pose and the pose at the reference frame, the pose of the electronic device when the image collector collects the current frame, thereby positioning the electronic device.
According to the scheme provided by the embodiments of the invention, the motion information detected by the inertial sensor is taken into account throughout the positioning process: the relative motion information detected by the inertial sensor constrains the relative pose estimated from the images. Even when the images captured by the image collector are degraded by illumination, the motion information detected by the inertial sensor preserves the accuracy of the positioning result, so the robustness of positioning in extreme environments is improved. After the electronic device is positioned, the optical flow of the current frame is obtained; when the optical flow is larger than a preset threshold, the current frame is taken as a new key frame, each of the poses corresponding to a preset number of key frames and the pose corresponding to the new key frame is taken as an initial pose, the poses of the key frames are optimized by adjusting the initial poses so as to minimize the sum of the photometric errors and relative motion constraints between key frames, and the new key frame with the optimized pose is taken as the new reference frame. Furthermore, when fusing the relative motion errors, different weights are assigned to them according to the state of the images and the state of the inertial sensor, which further improves the accuracy and robustness of positioning.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a simple positioning method according to an embodiment of the present invention;
FIG. 2 is a diagram of a relationship between mounting locations according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a pose transformation relationship according to an embodiment of the present invention;
FIG. 4 is a flowchart of a detailed positioning method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an optimization factor relationship provided in an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a positioning device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a simple positioning method provided by the embodiment of the invention is applied to an electronic device on which an image collector and an inertial sensor for detecting motion information of the electronic device are arranged. As shown in fig. 2, which is a diagram of the mounting positions of the image collector and the inertial sensor on the electronic device according to an embodiment of the invention, the inertial sensors can be arranged at two sides of the electronic device, and the image collector can be fixed at the front end of the electronic device in a forward-facing orientation.
Specifically, the positioning method comprises the following steps:
s100, acquiring a current frame acquired by an image acquisition unit.
In the running process of the electronic equipment, an image collector arranged on the electronic equipment can collect images of surrounding environment in real time, and accordingly, the electronic equipment can acquire image frames collected by the image collector.
S110, acquiring the current frame pose of the electronic equipment detected by the inertial sensor when the image collector collects the current frame.
S120, acquiring the reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the reference frame.
Because the inertial sensor detects the pose of the electronic device in real time, that is, while the image collector collects the reference frame the inertial sensor simultaneously detects the reference frame pose of the electronic device, the detected reference frame pose can be used directly when it is needed.
In one implementation, the inertial sensor may be a wheel encoder.
In practical application, the first frame image is generally used as the first reference frame. When a subsequent reference frame is determined, an image whose motion information differs from that of the previous reference frame by more than a threshold value is first determined and used as the latest key frame; the pose of the latest key frame is then optimized, and the optimized key frame is used as the new reference frame.
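The reference-frame update logic described above can be sketched as follows; the (x, y, theta) pose representation, the motion-difference measure and the threshold value are illustrative assumptions, not the patent's concrete implementation.

```python
import math

def motion_diff(pose_a, pose_b):
    """Illustrative motion difference between two poses given as
    (x, y, theta) tuples: planar distance plus absolute heading change."""
    dx = pose_b[0] - pose_a[0]
    dy = pose_b[1] - pose_a[1]
    dtheta = abs(pose_b[2] - pose_a[2])
    return math.hypot(dx, dy) + dtheta

def select_new_keyframe(ref_pose, frame_poses, threshold):
    """Return the index of the first frame whose motion difference from
    the previous reference frame exceeds the threshold, or None if no
    frame qualifies yet."""
    for i, pose in enumerate(frame_poses):
        if motion_diff(ref_pose, pose) > threshold:
            return i
    return None
```

The selected frame would then have its pose optimized before serving as the new reference frame.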
S130, determining a relative pose which enables the sum of a first photometric error and a relative motion error between the reference frame and the current frame to be minimum.
The first photometric error varies with the relative pose and is the difference of gray values between the current frame and the reference frame. The relative pose represents the variation between a first pose and a second pose, where the first pose is the pose of the electronic device detected when the image collector collects the current frame, and the second pose is the pose of the electronic device detected when the image collector collects the reference frame. The relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose. The pose of the electronic device consists of its position in a plane coordinate system and its attitude.
The above-mentioned relative motion error is the difference between the inter-frame relative motion estimated by the image collector and the inter-frame relative motion estimated by the inertial sensor.
In one implementation, before S130, a weight λ_ef may be assigned to the relative motion error according to the image quality factor of the current frame.
The image quality factors are used for representing the gradient change of the gray values of all pixel points in the reference frame, and the richer the gradient change condition of the gray values of all pixel points in the reference frame is, the larger the corresponding image quality factors are. For example, when the reference frame is an image frame obtained by the image collector collecting a white wall, the pixel values of all pixel points in the reference frame are the same, so that the image quality factor corresponding to the reference frame is minimum.
The smaller the image quality factor, the worse the referenceability of the reference frame, and accordingly the worse the accuracy of the motion information detected by the image collector. Therefore, the weight λ_ef can be assigned to the relative motion error according to the principle that the larger the value of the image quality factor, the smaller the weight assigned to the relative motion error. That is, when the motion information detected by the image collector is inaccurate, the weight assigned to the relative motion error may be increased to raise the influence of the motion information detected by the inertial sensor.
In one implementation, after the weight λ_ef is assigned to the relative motion error, λ_ef can be adjusted according to the magnitude of the relative motion error. Specifically, when the relative motion error is large, the difference between the motion information detected by the inertial sensor and that detected by the image collector is large, which may be caused by inaccurate motion information detected by the image collector; λ_ef may therefore be increased to raise the influence of the motion information detected by the inertial sensor. Correspondingly, when the relative motion error is small, the difference between the motion information detected by the inertial sensor and that detected by the image collector is small, indicating that the motion information detected by the image collector is accurate; λ_ef may therefore be decreased to lower the influence of the motion information detected by the inertial sensor.
After the weight λ_ef of the relative motion error is determined, the relative pose that minimizes L_E + λ_ef * MV_E can be determined, and positioning of the electronic device is then realized according to the determined relative pose and the second pose. Here, MV_E represents the relative motion error between the current frame and the reference frame, and L_E represents the photometric error between the current frame and the reference frame.
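The combined objective L_E + λ_ef * MV_E can be illustrated with a brute-force search over candidate relative poses; the one-dimensional pose, the toy error functions in the test below, and the search strategy are all stand-ins for the patent's unspecified optimizer.

```python
def find_relative_pose(photometric_error, motion_error, lam_ef, candidates):
    """Pick, among candidate relative poses, the one minimizing
    L_E + lambda_ef * MV_E. Both error terms are supplied as callables
    of the candidate pose."""
    best, best_cost = None, float("inf")
    for pose in candidates:
        cost = photometric_error(pose) + lam_ef * motion_error(pose)
        if cost < best_cost:
            best, best_cost = pose, cost
    return best, best_cost
```

With λ_ef = 0 the search returns the photometric optimum; with a very large λ_ef it returns the pose favored by the inertial sensor.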
And S140, calculating to obtain a first pose according to the determined relative pose and the second pose, and positioning the electronic equipment.
The determined relative pose is combined on the basis of the second pose of the electronic device when the image collector collects the reference frame, so that the first pose of the electronic device when the image collector collects the current frame is obtained, the position and the pose of the electronic device are obtained, and the electronic device is positioned.
In one implementation manner of the embodiment of the invention, in the process of assigning the weight λ_ef to the relative motion error according to the principle that the larger the value of the image quality factor, the smaller the assigned weight, the weight λ_ef of the relative motion error can be determined according to the motion parameters represented by the current frame pose and the image quality factor.
Specifically, the linear acceleration, the centripetal acceleration and the speed of the electronic device when the image collector collects the current frame can be obtained first, and the first weight is calculated by using the obtained linear acceleration, centripetal acceleration and speed.
For example, the first weight may be calculated using the following expression:
λ_e = α * exp(-ω * (β_1 * a_l + β_2 * a_r + β_3 * v))

where λ_e represents the first weight, a_l represents the linear acceleration of the electronic device when the image collector collects the current frame, a_r represents the centripetal acceleration of the electronic device when the image collector collects the current frame, v represents the speed of the electronic device when the image collector collects the current frame, and α, ω, β_1, β_2 and β_3 are preset coefficients.
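The first weight can be computed directly from the expression above; the coefficient values chosen here are illustrative, since the patent only states that α, ω, β_1, β_2 and β_3 are preset.

```python
import math

def first_weight(a_l, a_r, v, alpha=1.0, omega=0.5,
                 beta1=1.0, beta2=1.0, beta3=0.1):
    """lambda_e = alpha * exp(-omega * (beta1*a_l + beta2*a_r + beta3*v)):
    the faster or more aggressively the device moves, the smaller the
    weight given to the visual estimate relative to the inertial one."""
    return alpha * math.exp(-omega * (beta1 * a_l + beta2 * a_r + beta3 * v))
```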
Then, a quality factor of the target pixel point selected by the sliding window can be obtained, a proportion of the remaining mature points in the sliding window after the target pixel point is selected by the sliding window is determined, and a second weight is calculated according to the obtained quality factor and the determined proportion.
For example, the second weight may be calculated using the following expression:
λ_c = f_1(q_image_quality)

where λ_c represents the second weight, f_1() is an exponential function with a preset base taking the quality factor as its argument, and q_image_quality represents the image quality factor. The formula for q_image_quality (which appears as an image in the original document) is computed from: p_pix_quality, the quality factor of the target pixel points selected by the sliding window, a target pixel point being a pixel point in the current frame whose gray difference from adjacent pixel points is greater than a preset value; p_ph_ratio, the proportion of remaining mature points in the sliding window after the target pixel points are selected; σ_g, the standard deviation of the grid gradient threshold when selecting target pixel points using the sliding window; the average value of the grid gradient thresholds when selecting target pixel points using the sliding window; n_ph, the number of remaining mature points in the sliding window after selecting the target pixel points; and n_desired, the expected number of mature points in the sliding window after selecting the target pixel points.
Each image area with preset size in the sliding window can be called a grid, the difference value of gray values of two adjacent pixel points in the grid is called grid gradient, and the sum value of the median of the grid gradient and the preset value is the grid gradient threshold value.
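The grid gradient threshold described above can be computed as in this sketch; representing a grid as a small 2-D list of gray values and the particular preset offset are illustrative assumptions.

```python
from statistics import median

def grid_gradient_threshold(grid, offset=7.0):
    """Grid gradient threshold = median of the grid's gradients plus a
    preset value. Gradients are the absolute gray differences between
    horizontally and vertically adjacent pixel points in the grid."""
    grads = []
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                grads.append(abs(grid[r][c + 1] - grid[r][c]))
            if r + 1 < rows:
                grads.append(abs(grid[r + 1][c] - grid[r][c]))
    return median(grads) + offset
```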
Finally, the weight λ_ef of the relative motion error is determined according to the first weight and the second weight; for example, the product of the calculated first weight and second weight is used as the weight λ_ef.
As the electronic device moves, the coordinates of the pixel points corresponding to the same spatial point change from one image frame acquired by the image collector to another, while the gray value of those corresponding pixel points remains basically unchanged. Therefore, for two image frames, if the pixel coordinates of one frame are kept unchanged in a common coordinate system, the difference of gray values between pixel points at the same coordinates in the two frames changes as the pixel coordinates of the other frame are changed. When this difference of gray values is minimal, the coordinate change of the pixel points in the adjusted frame characterizes the pose change of the electronic device between the moments at which the image collector acquired the two frames.
For two image frames, in the process of obtaining the pose variation of the electronic equipment when the image collector collects the two image frames by changing the coordinates of the pixel points contained in one image frame to enable the difference of the gray value between the pixel points of the same coordinates contained in the two image frames to be minimum, specifically, the gray difference can be calculated for all the pixel points contained in the whole image frame, and the gray difference can also be calculated for part of the pixel points in a certain specific area in the image frame.
In one implementation, the photometric error l_e can be represented by the following relationship:
L_E = Σ_{p∈P_i} Σ_{j∈obs(p)} w_p * ‖ (I_j[p′] − b_j) − (t_j * e^{a_j}) / (t_i * e^{a_i}) * (I_i[p] − b_i) ‖

where p represents the coordinate of one pixel point in P_i, P_i represents the set of coordinates of all pixel points in the i-th frame, obs(p) represents the set of all observations of the pixel point p, w_p represents a weight, I_i and I_j represent the i-th frame and the j-th frame respectively, p′ represents the coordinate of the point p in the i-th frame re-projected onto the j-th frame, a_i and b_i represent the photometric parameters of the i-th frame, a_j and b_j represent the photometric parameters of the j-th frame, and t_i and t_j represent the exposure times of the i-th frame and the j-th frame respectively.

The coordinate p′ of the point p in the i-th frame re-projected onto the j-th frame is:

p′ = π_c( R * π_c^{-1}(p, d_p) + t )

where π_c represents a preset imaging model, d_p represents the inverse depth of the point p, R represents the rotation change of the electronic device when the image collector collects the current frame relative to the reference frame, and t represents the displacement change of the electronic device when the image collector collects the current frame relative to the reference frame; R and t constitute the relative pose between the pose corresponding to the current frame and the pose corresponding to the reference frame as detected by the image collector.
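A minimal numerical sketch of the affine-brightness-corrected gray difference underlying L_E follows; the images, the warp standing in for the reprojection p′, and the photometric parameters are toy values, and the robust weight w_p is omitted.

```python
import math

def photometric_error(img_i, img_j, points, warp,
                      t_i, t_j, a_i, b_i, a_j, b_j):
    """Sum over selected pixel points of the squared affine-brightness-
    corrected gray difference between frame i and frame j. Images are
    2-D lists of gray values; warp maps (row, col) in frame i to the
    corresponding (row, col) in frame j."""
    scale = (t_j * math.exp(a_j)) / (t_i * math.exp(a_i))
    err = 0.0
    for (r, c) in points:
        r2, c2 = warp(r, c)  # p': p re-projected into frame j
        residual = (img_j[r2][c2] - b_j) - scale * (img_i[r][c] - b_i)
        err += residual * residual
    return err
```

Note how a uniform brightness shift in frame j is cancelled by its photometric parameter b_j, so the error stays zero for the same scene content.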
The relative motion error relates the current frame pose and the reference frame pose detected by the inertial sensor to the relative pose of the electronic device obtained by using the acquired current frame and reference frame.
In one implementation, the relative motion error mv_e can be represented by the following relationship:
MV_E = e_f^T * Ω_f * e_f

where Ω_f represents a weight matrix and e_f represents the error.
The error e_f (whose expression appears as an image in the original document) is calculated from: T_e2w′_r, the pose of the reference frame; T_e2w′_c, the pose of the current frame; T_ec, the conversion relationship between the pose of the electronic device detected by the inertial sensor and the pose of the electronic device obtained by using the image frames acquired by the image collector; and T_r2c, the relative pose.
The positions of the image collector and the inertial sensor arranged on the electronic equipment are fixed, and accordingly, a fixed conversion relation exists between the pose of the electronic equipment detected by the inertial sensor and the pose of the electronic equipment obtained by utilizing the image frames acquired by the image collector.
As shown in fig. 3, a schematic diagram of a conversion relationship between a pose of an electronic device detected by an inertial sensor and a pose of the electronic device obtained by using an image frame acquired by an image acquisition device according to an embodiment of the present invention is shown;
In the figure, W′ represents the coordinate system whose origin is O_e, the origin of the inertial sensor coordinate system, and W represents the coordinate system whose origin is O_c; the transformation T_ec relates the two. The pose T_w2c_t obtained from the image frames acquired by the image collector and the pose T_w′2e_t detected by the inertial sensor satisfy the following conversion relationship:

T_w′2e_t = T_ec * T_w2c_t * T_ce

The relative pose T_c_t2h obtained from the image frames acquired by the image collector and the relative pose T_e_t2h detected by the inertial sensor satisfy the following conversion relationship:

T_e_t2h = T_ec * T_c_t2h * T_ce
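As a numerical check of these conversion relations, the following sketch composes planar (SE(2)) homogeneous transforms; treating the poses as planar and taking T_ce as the inverse of T_ec are illustrative assumptions.

```python
import math

def se2(x, y, theta):
    """3x3 homogeneous transform for a planar pose (x, y, theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def matmul(a, b):
    """Product of two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inverse(t):
    """Inverse of an SE(2) homogeneous transform: transpose the rotation
    and rotate the negated translation."""
    c, s = t[0][0], t[1][0]
    x, y = t[0][2], t[1][2]
    return [[c, s, -(c * x + s * y)], [-s, c, s * x - c * y], [0.0, 0.0, 1.0]]
```

Composing T_ec with a camera pose and T_ce, as in T_w′2e_t = T_ec * T_w2c_t * T_ce, is then a simple chain of `matmul` calls, and applying T_ce and T_ec on the opposite sides recovers the camera pose.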
according to the positioning method provided by the embodiment of the invention, the motion information detected by the inertial sensor for detecting the motion information of the electronic equipment is comprehensively considered, namely, after the image collector is influenced by illumination when shooting the image, the accuracy of a positioning result can be ensured by utilizing the motion information detected by the inertial sensor, so that the robustness of the positioning result can be improved.
Referring to fig. 4, a flowchart of a detailed positioning method according to an embodiment of the present invention is shown, in which S400-S404 are the same as S100-S140 described above and are not repeated here. After the positioning of the electronic device is achieved, the following steps may be performed:
s405, obtaining the optical flow of the current frame;
S406, taking the current frame as a new key frame when the optical flow is greater than a preset threshold value; correspondingly, when the optical flow is not greater than the preset threshold value, returning to execute S400.
The larger the optical flow, the larger the difference between the image information contained in the current frame and the image information contained in the reference frame, that is, the larger the reference value of the image information contained in the current frame. Therefore, the current frame may be regarded as a new key frame in the case where the optical flow of the current frame is greater than a preset threshold.
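The key-frame decision in S405-S406 might be sketched as follows; representing the optical flow as a list of per-pixel displacement vectors and thresholding its mean magnitude is an assumption, since the patent does not fix how the optical flow is aggregated.

```python
def mean_flow_magnitude(flow):
    """Mean magnitude of a sparse optical-flow field given as a list of
    (du, dv) pixel displacements."""
    if not flow:
        return 0.0
    return sum((du * du + dv * dv) ** 0.5 for du, dv in flow) / len(flow)

def is_new_keyframe(flow, threshold):
    """Promote the current frame to a key frame when the optical flow
    exceeds the preset threshold."""
    return mean_flow_magnitude(flow) > threshold
```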
S407, taking each of the poses corresponding to a preset number of pre-stored key frames and the pose corresponding to the new key frame as an initial pose;
s408, adjusting each initial pose, and determining each adjusted initial pose which enables the sum of the first residual energy, the second residual energy and the third residual energy to be minimum.
Wherein the first residual energy represents the sum of the second photometric errors between every two adjacent key frames, the second photometric errors varying with the adjusted initial poses and being the differences of gray values between every two adjacent key frames. The second residual energy represents the sum of the errors of the planar motion constraints, obtained by converting, for each key frame, the pose of the electronic device detected when the image collector collects that key frame into the plane coordinate system. The third residual energy represents the sum of the relative motion constraints, where a relative motion constraint is, for two adjacent key frames, the constraint calculated from the poses of the electronic device obtained for the two key frames and the poses of the electronic device detected by the inertial sensor when the image collector collects the two key frames.
After the current frame is taken as a new key frame, since the pose of the electronic device when the current frame was acquired, namely the first pose, has already been obtained, the first pose can be used as the initial pose for the current frame. The pose of the electronic device when the current frame was acquired is then further optimized by minimizing the sum of the first residual energy, the second residual energy and the third residual energy, thereby optimizing the poses of the key frames and the depths of the mature points in the key frames.
In one implementation, when the image collector is a binocular image collector, since a binocular image collector usually acquires image frames simultaneously with its left and right lenses, the first residual energy can be calculated as follows: for the poses of the electronic device when the key frames are acquired, calculate a first sum of the gray value differences between any two key frames acquired by the lens on one side, calculate a second sum of the gray value differences between the image frames acquired simultaneously by the left and right lenses, and take the sum of the first sum and the second sum as the first residual energy.
In one implementation, the pose of the image collector in the world coordinate system may be determined; the error of the planar motion constraint is calculated according to the determined pose; and the second residual energy is calculated according to the calculated error of the planar motion constraint.
Specifically, the second residual energy may be represented by the following expression:
E_g = Σ_{i=1}^{n} e_g_i^T * Ω_g * e_g_i

where E_g represents the second residual energy, Ω_g represents an information matrix, n represents the number of key frames, and e_g_i represents the error of the planar motion constraint for the i-th key frame. The expression for e_g_i (which appears as an image in the original document) is computed from: X^{-1}, the observation of the planar motion; T_ec, the relative pose relationship between the image collector and the inertial sensor; T_c2w_i, the pose of the image collector in the world coordinate system; and T_ce, the relative pose relationship between the inertial sensor and the image collector.
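The planar motion constraint can be illustrated with a toy error term; representing poses as (x, y, z, roll, pitch, yaw) tuples and using an identity information matrix in place of Ω_g are assumptions for illustration.

```python
def planar_constraint_error(pose6d):
    """Deviation of a 6-DoF pose (x, y, z, roll, pitch, yaw) from planar
    motion: the out-of-plane translation z and the tilt angles roll and
    pitch should all be zero. Returns e^T * e with an identity
    information matrix."""
    x, y, z, roll, pitch, yaw = pose6d
    e = (z, roll, pitch)
    return sum(v * v for v in e)

def second_residual_energy(poses6d):
    """E_g: sum of planar-motion errors over all key-frame poses."""
    return sum(planar_constraint_error(p) for p in poses6d)
```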
In one implementation, for every two adjacent key frames, the relative motion constraint between them is calculated according to the poses of the inertial sensor and of the image collector in the world coordinate system at the two key frames; the third residual energy is then calculated using the relative motion constraints between all pairs of adjacent key frames.
Specifically, the third residual energy may be represented by the following expression:
E_e = Σ λ_th * e_e_th^T * Ω_e * e_e_th

where the sum runs over adjacent key-frame pairs (t, h), E_e represents the third residual energy, n represents the number of key frames, λ_th represents the weight, Ω_e represents a weight matrix, and e_e_th represents the relative motion constraint between the t-th key frame and the h-th key frame. The expression for e_e_th (which appears as an image in the original document) is computed from: T_e2w′_t, the pose of the inertial sensor in the world coordinate system at the t-th key frame; T_e2w′_h, the pose of the inertial sensor in the world coordinate system at the h-th key frame; T_c2w_t, the pose of the image collector in the world coordinate system at the t-th key frame; T_c2w_h, the pose of the image collector in the world coordinate system at the h-th key frame; T_ec, the relative pose relationship between the image collector and the inertial sensor; and T_ce, the relative pose relationship between the inertial sensor and the image collector.
In one implementation, in the process of calculating the third residual energy, a weight may be assigned to each calculated relative motion constraint, and the third residual energy may be obtained through weighted calculation. Specifically, the assigned weight may be the product of the λ_ef values corresponding to the two adjacent key frames.
S409, taking the new key frame as a new reference frame, and returning to execute S400.
The poses corresponding to the key frames are taken as initial pose values, and by adjusting the initial pose values so as to minimize the sum of the first residual energy, the second residual energy and the third residual energy, the poses of the key frames and the depths of the mature points in the key frames are optimized.
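The joint adjustment of the window poses can be caricatured in one dimension; here poses are scalars, the "photometric" terms anchor each pose to a visual estimate, and the relative motion constraints tie consecutive poses to encoder increments, a simplified stand-in for minimizing the residual energies by gradient descent.

```python
def optimize_window(init, visual, odo_increments, lam=1.0,
                    iters=500, step=0.05):
    """Jointly adjust all window poses (scalars) to minimize
    sum_i (p_i - visual_i)^2 + lam * sum_i ((p_{i+1}-p_i) - odo_i)^2
    by plain gradient descent."""
    p = list(init)
    n = len(p)
    for _ in range(iters):
        # gradient of the unary ("photometric") terms
        grad = [2.0 * (p[i] - visual[i]) for i in range(n)]
        # gradient of the pairwise relative-motion terms
        for i in range(n - 1):
            r = (p[i + 1] - p[i]) - odo_increments[i]
            grad[i] -= 2.0 * lam * r
            grad[i + 1] += 2.0 * lam * r
        for i in range(n):
            p[i] -= step * grad[i]
    return p
```

When the visual estimates and the encoder increments agree, the optimum coincides with the visual estimates; when they conflict, λ controls the compromise.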
Referring to FIG. 5, a schematic diagram of the optimization factor relationship provided in an embodiment of the present invention is shown, in which the monocular photometric constraint is E_p, the binocular photometric constraint is E_LR, the error of the planar motion constraint is E_g, and the error of the inertial sensor motion constraint is E_e. Compared with the original direct-method visual odometry, the visual-inertial odometry of the technical scheme of the invention, which fuses the inertial sensor data, adds more constraints and can obtain a more robust positioning result.
Referring to fig. 6, a schematic structural diagram of a positioning device provided by an embodiment of the present invention is applied to an electronic device, where the electronic device is provided with an image collector and an inertial sensor for detecting motion information of the electronic device, and the device includes:
a first obtaining module 600, configured to obtain a current frame collected by the image collector;
a second obtaining module 610, configured to obtain a current frame pose of the electronic device detected by the inertial sensor when the image collector collects the current frame;
a third obtaining module 620, configured to obtain a reference frame pose of the electronic device detected by the inertial sensor when the image collector collects the reference frame;
a first determining module 630, configured to determine a relative pose that minimizes the sum of a first photometric error between the reference frame and the current frame and a relative motion error, wherein the first photometric error varies with the relative pose and is the difference of gray values between the current frame and the reference frame; the relative pose represents the variation between a first pose and a second pose, the first pose being the pose of the electronic device detected when the image collector collects the current frame and the second pose being the pose of the electronic device detected when the image collector collects the reference frame; and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
And the calculating module 640 is configured to calculate the first pose according to the determined relative pose and the second pose, so as to implement positioning of the electronic device.
In an implementation manner of the embodiment of the present invention, the apparatus further includes:
the first obtaining module is used for obtaining an image quality factor of the current frame, wherein the image quality factor is used for representing gradient changes of gray values of all pixel points in the reference frame;
an allocation module, configured to assign a weight λ_ef to the relative motion error according to the allocation principle that the larger the value of the image quality factor, the smaller the weight assigned to the relative motion error.

Accordingly, the first determining module 630 is configured to determine the relative pose that minimizes L_E + λ_ef * MV_E, where MV_E represents the relative motion error between the current frame and the reference frame and L_E represents the photometric error between the current frame and the reference frame.
In one implementation manner of the embodiment of the present invention, the allocation module includes:
a determining submodule, configured to determine the weight λ_ef of the relative motion error according to the motion parameters represented by the current frame pose and the image quality factor.
In an implementation manner of the embodiment of the present invention, the determining submodule includes:
The first calculating unit is used for calculating a first weight according to the motion parameters of the current frame pose representation;
a second calculating unit, configured to calculate a second weight according to the image quality factor;
a determining unit, configured to determine the weight λ_ef of the relative motion error according to the first weight and the second weight.
In one implementation manner of the embodiment of the present invention, the first computing unit includes:
a first obtaining subunit, configured to obtain a linear acceleration, a centripetal acceleration, and a speed of the electronic device when the image collector collects the current frame;
a first calculating subunit, configured to calculate the first weight by using the obtained linear acceleration, centripetal acceleration and speed.
In one implementation manner of the embodiment of the present invention, the first computing subunit is specifically configured to,
the first weight is calculated using the following expression:
λ_e = α * exp(-ω * (β_1 * a_l + β_2 * a_r + β_3 * v))

where λ_e represents the first weight, a_l represents the linear acceleration of the electronic device when the image collector collects the current frame, a_r represents the centripetal acceleration of the electronic device when the image collector collects the current frame, v represents the speed of the electronic device when the image collector collects the current frame, and α, ω, β_1, β_2 and β_3 are preset coefficients.
In an implementation manner of the embodiment of the present invention, the second calculating unit includes:
a second obtaining subunit, configured to obtain the quality factor of the target pixel points selected by using the sliding window, where a target pixel point is: a pixel point in the current frame whose gray difference from adjacent pixel points is greater than a preset value;
the determining subunit is configured to determine a proportion of remaining mature points in the sliding window after the target pixel point is selected by using the sliding window, where the mature points are: pixel points of known depth information;
and a second calculating subunit for calculating a second weight according to the obtained quality factor and the determined proportion.
In one implementation of the embodiment of the present invention, the second computing subunit is specifically configured to,
the second weight is calculated using the following expression:
λ_c = f_1(q_image_quality)

where λ_c represents the second weight, f_1() is an exponential function with a preset base taking the quality factor as its argument, and q_image_quality represents the image quality factor. The formula for q_image_quality (which appears as an image in the original document) is computed from: p_pix_quality, the quality factor of the target pixel points selected by using the sliding window, a target pixel point being a pixel point in the reference frame whose gray difference from adjacent pixel points is greater than the preset value; p_ph_ratio, the proportion of remaining mature points in the sliding window after the target pixel points are selected; σ_g, the standard deviation of the grid gradient threshold when selecting the target pixel points using the sliding window; the average value of the grid gradient thresholds when selecting the target pixel points using the sliding window; n_ph, the number of remaining mature points in the sliding window after selecting the target pixel points; and n_desired, the expected number of mature points in the sliding window after selecting the target pixel points.
In an implementation manner of the embodiment of the present invention, the apparatus further includes:
a second obtaining module, configured to obtain an optical flow of the current frame;
the first module is used for taking the current frame as a new key frame under the condition that the optical flow is larger than a preset threshold value;
the second module is configured to take each of the poses corresponding to a preset number of previously stored key frames, and the pose corresponding to the new key frame, respectively as an initial pose;
the second determining module is configured to adjust each initial pose so as to minimize the sum of a first residual energy, a second residual energy and a third residual energy, wherein the first residual energy represents: the sum of the second photometric errors between every two adjacent key frames corresponding to the adjusted initial poses, each second photometric error varying with the adjusted initial poses and being the difference in gray values between two adjacent key frames; the second residual energy represents: the sum, over all key frames, of the errors of the planar motion constraint on the poses of the electronic device detected when the image collector collects each key frame, converted into a planar coordinate system; and the third residual energy represents: the sum of the relative motion constraints, each relative motion constraint representing: for two adjacent key frames, a constraint calculated from the poses of the electronic device detected by the inertial sensor when the image collector collects those two key frames and the poses of the electronic device corresponding to those two key frames;
and the third module is configured to take the new key frame as a new reference frame.
In an implementation manner of the embodiment of the present invention, the second determining module is configured to:
determining the pose of an image collector under a world coordinate system;
calculating an error of the plane motion constraint according to the determined pose;
and calculating second residual energy according to the calculated error of the planar motion constraint.
In one implementation, the error and the second residual energy of the planar motion constraint are represented separately using the following expressions:
e_g_i = X^{-1} · (T_ec · T_c2w_i · T_ce)

E_g = Σ_{i=1}^{n} e_g_i^T · Ω_g · e_g_i

wherein E_g represents the second residual energy, Ω_g represents a weight matrix, n represents the number of key frames, e_g_i represents the error of the planar motion constraint for the i-th key frame, X^{-1} represents the observation of the planar motion, T_ec represents the conversion relation from the pose of the image collector to the pose of the inertial sensor, T_c2w_i represents the pose of the image collector under the world coordinate system for the i-th key frame, and T_ce represents the conversion relation from the pose of the inertial sensor to the pose of the image collector.
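A minimal numeric sketch of such a planar check with 4×4 homogeneous transforms follows; the composition order and the [z, roll, pitch] error parameterisation are assumptions made for illustration:

```python
import numpy as np

def plane_motion_error(X_inv, T_ec, T_c2w_i, T_ce):
    # Map the camera pose into the frame where planar motion is observed
    T = X_inv @ T_ec @ T_c2w_i @ T_ce
    z = T[2, 3]                                     # out-of-plane translation
    roll = np.arctan2(T[2, 1], T[2, 2])             # out-of-plane rotations
    pitch = -np.arcsin(np.clip(T[2, 0], -1.0, 1.0))
    return np.array([z, roll, pitch])

def second_residual_energy(errors, Omega_g):
    # E_g: weighted sum of squared planar-constraint errors over key frames
    return float(sum(e @ Omega_g @ e for e in errors))
```

A pose that already satisfies the planar constraint yields a zero error vector and contributes nothing to E_g.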
In an implementation manner of the embodiment of the present invention, optionally, the second determining module is configured to:
for two adjacent key frames, calculating the relative motion constraint between them according to the poses of the inertial sensor and of the image collector under the world coordinate system at the time of each of the two key frames;
and calculating the third residual energy by using the relative motion constraints between every two adjacent key frames.
In one implementation, the relative motion constraint between two neighboring frames of key frames and the third residual energy are represented by the following expressions, respectively:
e_e_th = (T_e2w'_t^{-1} · T_e2w'_h)^{-1} · (T_ec · T_c2w_t^{-1} · T_c2w_h · T_ce)

E_e = Σ λ_th · e_e_th^T · Ω_e · e_e_th (summed over every pair of adjacent key frames, h = t + 1)

wherein E_e represents the third residual energy, n represents the number of key frames, λ_th represents the weight, Ω_e represents a weight matrix, e_e_th represents the relative motion error between the t-th key frame and the h-th key frame, T_e2w'_t represents the pose of the inertial sensor under the world coordinate system at the t-th key frame, T_e2w'_h represents the pose of the inertial sensor under the world coordinate system at the h-th key frame, T_c2w_t represents the pose of the image collector under the world coordinate system at the t-th key frame, T_c2w_h represents the pose of the image collector under the world coordinate system at the h-th key frame, T_ec represents the conversion relation from the pose of the image collector to the pose of the inertial sensor, and T_ce represents the conversion relation from the pose of the inertial sensor to the pose of the image collector.
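The same idea can be sketched numerically: the constraint is the mismatch between the IMU-measured relative transform and the camera-derived relative transform mapped through the extrinsics. The composition order and the use of the translational part as the error vector are assumptions made for illustration:

```python
import numpy as np

def relative_motion_error(T_e2w_t, T_e2w_h, T_c2w_t, T_c2w_h, T_ec, T_ce):
    # Relative motion between key frames t and h as measured by the IMU
    delta_imu = np.linalg.inv(T_e2w_t) @ T_e2w_h
    # Relative motion estimated from the camera poses, mapped into the IMU frame
    delta_cam = T_ec @ np.linalg.inv(T_c2w_t) @ T_c2w_h @ T_ce
    # Discrepancy between the two (identity when they agree)
    return np.linalg.inv(delta_imu) @ delta_cam

def third_residual_energy(error_mats, Omega_e, lambdas):
    # E_e: weighted sum over adjacent key-frame pairs, using the
    # translational part of each 4x4 discrepancy as a 3-vector
    E = 0.0
    for lam, M in zip(lambdas, error_mats):
        v = M[:3, 3]
        E += lam * float(v @ Omega_e @ v)
    return E
```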
The embodiment of the invention also provides an electronic device, as shown in fig. 7, which comprises a processor 001, a communication interface 002, a memory 003 and a communication bus 004, wherein the processor 001, the communication interface 002 and the memory 003 communicate with each other through the communication bus 004;
A memory 003 for storing a computer program;
the processor 001 is configured to implement the positioning method provided by the embodiment of the present invention when executing the program stored in the memory 003.
Specifically, the positioning method is applied to an electronic device, wherein an image collector and an inertial sensor for detecting motion information of the electronic device are arranged on the electronic device, and the method comprises the following steps:
acquiring a current frame acquired by the image acquisition device;
acquiring the current frame pose of the electronic equipment detected by the inertial sensor when the image collector collects the current frame;
acquiring a reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the reference frame;
determining a relative pose which minimizes the sum of a first photometric error between the reference frame and the current frame and a relative motion error, wherein the first photometric error varies with the relative pose and is the difference in gray values between the current frame and the reference frame, the relative pose represents the amount of change between a first pose and a second pose, the first pose is the pose of the electronic device detected when the image collector collects the current frame, the second pose is the pose of the electronic device detected when the image collector collects the reference frame, and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
And according to the determined relative pose and the second pose, calculating to obtain the first pose, and positioning the electronic equipment.
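For a device constrained to planar motion, the final composition step above — obtaining the first pose from the second pose and the determined relative pose — can be sketched with 3×3 homogeneous transforms; the numeric poses are placeholders:

```python
import numpy as np

def se2(x, y, theta):
    # Homogeneous 3x3 transform for a planar pose
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# second pose: device pose at the reference frame (placeholder values)
T_second = se2(1.0, 2.0, 0.0)
# relative pose produced by the error minimisation (placeholder values)
T_rel = se2(0.5, 0.0, np.pi / 2)
# first pose: device pose at the current frame
T_first = T_second @ T_rel
```

Here the device ends up at (1.5, 2.0) with a 90° heading: the relative pose is applied in the reference frame's coordinates.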
It should be noted that other embodiments of the positioning method implemented by the processor 001 executing the program stored in the memory 003 are the same as those provided in the foregoing method embodiment and will not be repeated here.
In each scheme provided by the embodiments of the present invention, the motion information detected by the inertial sensor of the electronic device is taken into account during positioning; that is, even when the images captured by the image collector are affected by illumination, the motion information detected by the inertial sensor can be used to ensure the accuracy of the positioning result, thereby improving the robustness of positioning.
The communication bus mentioned for the electronic device may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, etc. The communication bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processing, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
In yet another aspect of the present invention, there is also provided a computer readable storage medium having instructions stored therein, which when executed on a computer, cause the computer to perform the positioning method provided by the embodiment of the present invention.
Specifically, the positioning method is applied to an electronic device, wherein an image collector and an inertial sensor for detecting motion information of the electronic device are arranged on the electronic device, and the method comprises the following steps:
Acquiring a current frame acquired by the image acquisition device;
acquiring the current frame pose of the electronic equipment detected by the inertial sensor when the image collector collects the current frame;
acquiring a reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the reference frame;
determining a relative pose which minimizes the sum of a first photometric error between the reference frame and the current frame and a relative motion error, wherein the first photometric error varies with the relative pose and is the difference in gray values between the current frame and the reference frame, the relative pose represents the amount of change between a first pose and a second pose, the first pose is the pose of the electronic device detected when the image collector collects the current frame, the second pose is the pose of the electronic device detected when the image collector collects the reference frame, and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
and according to the determined relative pose and the second pose, calculating to obtain the first pose, and positioning the electronic equipment.
It should be noted that, other embodiments of the positioning method implemented by the computer readable storage medium are the same as the embodiments provided in the foregoing method embodiment, and are not repeated here.
In each scheme provided by the embodiments of the present invention, the motion information detected by the inertial sensor of the electronic device is taken into account during positioning; that is, even when the images captured by the image collector are affected by illumination, the motion information detected by the inertial sensor can be used to ensure the accuracy of the positioning result, thereby improving the robustness of positioning.
In yet another aspect of the present invention, embodiments of the present invention also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform the positioning method provided by the embodiments of the present invention.
Specifically, the positioning method is applied to an electronic device, wherein an image collector and an inertial sensor for detecting motion information of the electronic device are arranged on the electronic device, and the method comprises the following steps:
acquiring a current frame acquired by the image acquisition device;
acquiring the current frame pose of the electronic equipment detected by the inertial sensor when the image collector collects the current frame;
Acquiring a reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the reference frame;
determining a relative pose which minimizes the sum of a first photometric error between the reference frame and the current frame and a relative motion error, wherein the first photometric error varies with the relative pose and is the difference in gray values between the current frame and the reference frame, the relative pose represents the amount of change between a first pose and a second pose, the first pose is the pose of the electronic device detected when the image collector collects the current frame, the second pose is the pose of the electronic device detected when the image collector collects the reference frame, and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
and according to the determined relative pose and the second pose, calculating to obtain the first pose, and positioning the electronic equipment.
It should be noted that, other embodiments of the positioning method implemented by the computer program product are the same as the embodiments provided by the foregoing method embodiment section, and will not be repeated here.
In each scheme provided by the embodiments of the present invention, the motion information detected by the inertial sensor of the electronic device is taken into account during positioning; that is, even when the images captured by the image collector are affected by illumination, the motion information detected by the inertial sensor can be used to ensure the accuracy of the positioning result, thereby improving the robustness of positioning and making the method suitable for various extreme environments.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a related manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the apparatus, electronic device, computer readable storage medium, and computer program product embodiments, the description is relatively simple, as relevant to the method embodiments being referred to in the section of the description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (18)

1. A positioning method, characterized in that it is applied to an electronic device, where the electronic device is provided with an image collector and an inertial sensor for detecting inertial motion information of the electronic device, the method comprising:
acquiring a current frame acquired by the image acquisition device;
acquiring the current frame pose of the electronic equipment detected by the inertial sensor when the image collector collects the current frame;
Acquiring a reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects a reference frame;
determining a relative pose which minimizes the sum of a first photometric error between the reference frame and the current frame and a relative motion error, wherein the first photometric error varies with the relative pose and is the difference in gray values between the current frame and the reference frame, the relative pose represents the amount of change between a first pose and a second pose, the first pose is the pose of the electronic device detected when the image collector collects the current frame, the second pose is the pose of the electronic device detected when the image collector collects the reference frame, and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
according to the determined relative pose and the second pose, calculating to obtain the first pose, and positioning the electronic equipment;
the relative motion error is calculated according to the current frame pose, the reference frame pose and the relative pose by using a target formula, wherein the target formula is as follows:
MV_E = e_f^T · Ω_f · e_f

e_f = (T_e2w'_r^{-1} · T_e2w'_c)^{-1} · (T_ec · T_r2c · T_ce)

wherein MV_E represents the relative motion error, Ω_f represents a weight matrix, e_f represents the error term, T_e2w'_r represents the reference frame pose, T_e2w'_c represents the current frame pose, T_ec represents the conversion relation between the pose of the electronic device detected by the inertial sensor and the pose of the electronic device obtained by using the image frames acquired by the image collector, T_r2c represents the relative pose, and T_ce represents the conversion relation between the pose of the electronic device obtained by using the image frames acquired by the image collector and the pose of the electronic device detected by the inertial sensor.
2. The method of claim 1, further comprising, prior to the step of determining a relative pose that minimizes a sum of a first photometric error and a relative motion error between the reference frame and a current frame:
obtaining an image quality factor of the current frame, wherein the image quality factor is used for representing gradient changes of gray values of all pixel points in the reference frame;
assigning a weight λ_ef to the relative motion error according to an assignment rule that the larger the value of the image quality factor, the smaller the weight assigned to the relative motion error;
Accordingly, the step of determining a relative pose that minimizes the sum of the photometric error and the relative motion error between the reference frame and the current frame comprises:
determining the relative pose that minimizes L_E + λ_ef · MV_E, wherein MV_E represents the relative motion error between the current frame and the reference frame and L_E represents the first photometric error between the current frame and the reference frame.
3. The method according to claim 2, wherein the step of assigning a weight λ_ef to the relative motion error according to an assignment rule that the larger the value of the image quality factor, the smaller the weight assigned to the relative motion error comprises:
determining the weight λ_ef of the relative motion error according to the motion parameters characterized by the current frame pose and the image quality factor.
4. The method according to claim 3, wherein the step of determining the weight λ_ef of the relative motion error according to the motion parameters characterized by the current frame pose and the image quality factor comprises:
calculating a first weight according to the motion parameters characterized by the current frame pose;
calculating a second weight according to the image quality factor;
and determining the weight λ_ef of the relative motion error according to the first weight and the second weight.
5. The method of claim 4, wherein the step of calculating a first weight based on the motion parameters characterized by the current frame pose comprises:
Acquiring linear acceleration, centripetal acceleration and speed of the electronic equipment when the image collector collects the current frame;
and calculating a first weight value by using the obtained linear acceleration, centripetal acceleration and speed.
6. The method of claim 5, wherein calculating the first weight using the obtained linear acceleration, centripetal acceleration, and velocity comprises:
the first weight is calculated using the following expression:

λ_e = α · exp(−ω · (β_1·a_l + β_2·a_r + β_3·ν))

wherein λ_e represents the first weight, a_l represents the linear acceleration of the electronic device when the image collector collects the current frame, a_r represents the centripetal acceleration of the electronic device when the image collector collects the current frame, ν represents the speed of the electronic device when the image collector collects the current frame, and α, ω, β_1, β_2 and β_3 are preset coefficients.
7. The method of claim 4, wherein the step of calculating a second weight from the image quality factor comprises:
obtaining a quality factor of target pixel points selected using a sliding window, wherein a target pixel point is: a pixel point in the current frame whose gray-value difference from its adjacent pixel points is larger than a preset value;
determining the proportion of mature points remaining in the sliding window after the target pixel points are selected using the sliding window, wherein a mature point is: a pixel point with known depth information;
a second weight is calculated based on the obtained quality factor and the determined ratio.
8. The method of claim 7, wherein calculating the second weight based on the obtained quality factor and the determined ratio comprises:
the second weight is calculated using the following expressions:

λ_c = f_1(q_image_quality)

q_image_quality = p_pix_quality · p_ph_ratio, with p_pix_quality = σ_g / ḡ and p_ph_ratio = n_ph / n_desired

wherein λ_c represents the second weight, f_1() represents an exponential function, with a preset value as its base, taking the image quality factor as its argument, q_image_quality represents the image quality factor, p_pix_quality represents the quality factor, p_ph_ratio represents the proportion, σ_g represents the standard deviation of the grid gradient thresholds used when selecting the target pixel points with the sliding window, ḡ represents the average of those grid gradient thresholds, n_ph represents the number of mature points remaining in the sliding window after the target pixel points are selected using the sliding window, and n_desired represents the desired number of mature points in the sliding window after the target pixel points are selected using the sliding window.
9. The method of claim 1, wherein the step of calculating the first pose based on the determined relative pose and the second pose, after implementing the step of positioning the electronic device, further comprises:
Obtaining an optical flow of the current frame;
under the condition that the optical flow is larger than a preset threshold value, taking the current frame as a new key frame;
taking each of the poses corresponding to a preset number of previously stored key frames, and the pose corresponding to the new key frame, respectively as an initial pose;
adjusting each initial pose, and determining the adjusted initial poses that minimize the sum of a first residual energy, a second residual energy and a third residual energy, wherein the first residual energy represents: the sum of the second photometric errors between every two adjacent key frames corresponding to the adjusted initial poses, each second photometric error varying with the adjusted initial poses and being the difference in gray values between two adjacent key frames; the second residual energy represents: the sum, over all key frames, of the errors of the planar motion constraint on the poses of the electronic device detected when the image collector collects each key frame, converted into a planar coordinate system; and the third residual energy represents: the sum of the relative motion constraints, each relative motion constraint representing: for two adjacent key frames, a constraint calculated from the poses of the electronic device detected by the inertial sensor when the image collector collects those two key frames and the poses of the electronic device corresponding to those two key frames;
The new key frame is taken as a new reference frame.
10. The method of claim 9, wherein the second residual energy is obtained using the steps of:
determining the pose of an image collector under a world coordinate system;
calculating an error of the plane motion constraint according to the determined pose;
and calculating second residual energy according to the calculated error of the planar motion constraint.
11. The method of claim 10, wherein the error of the planar motion constraint and the second residual energy are represented by the following expressions, respectively:
e_g_i = X^{-1} · (T_ec · T_c2w_i · T_ce)

E_g = Σ_{i=1}^{n} e_g_i^T · Ω_g · e_g_i

wherein E_g represents the second residual energy, Ω_g represents a weight matrix, n represents the number of key frames, e_g_i represents the error of the planar motion constraint, X^{-1} represents the observation of the planar motion, T_c2w_i represents the pose of the image collector under the world coordinate system for the i-th key frame, T_ec represents the conversion relation from the pose of the image collector to the pose of the inertial sensor, and T_ce represents the conversion relation from the pose of the inertial sensor to the pose of the image collector.
12. The method of claim 9, wherein the third residual energy is obtained using the following steps:
for two adjacent key frames, calculating the relative motion constraint between them according to the poses of the inertial sensor and of the image collector under the world coordinate system at the time of each of the two key frames;
and calculating the third residual energy by using the relative motion constraints between every two adjacent key frames.
13. The method of claim 12, wherein the relative motion constraint between two adjacent key frames and the third residual energy are represented by the following expressions, respectively:

e_e_th = (T_e2w'_t^{-1} · T_e2w'_h)^{-1} · (T_ec · T_c2w_t^{-1} · T_c2w_h · T_ce)

E_e = Σ λ_th · e_e_th^T · Ω_e · e_e_th (summed over every pair of adjacent key frames, h = t + 1)

wherein E_e represents the third residual energy, n represents the number of key frames, λ_th represents the weight, Ω_e represents a weight matrix, e_e_th represents the relative motion constraint between the t-th key frame and the h-th key frame, T_e2w'_t represents the pose of the inertial sensor under the world coordinate system at the t-th key frame, T_e2w'_h represents the pose of the inertial sensor under the world coordinate system at the h-th key frame, T_c2w_t represents the pose of the image collector under the world coordinate system at the t-th key frame, and T_c2w_h represents the pose of the image collector under the world coordinate system at the h-th key frame.
14. A positioning device, characterized in that it is applied to an electronic apparatus, wherein an image collector and an inertial sensor for detecting motion information of the electronic apparatus are disposed on the electronic apparatus, the device comprising:
the first acquisition module is used for acquiring the current frame acquired by the image acquisition device;
the second acquisition module is used for acquiring the current frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the current frame;
The third acquisition module is used for acquiring the reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the reference frame;
a first determining module, configured to determine a relative pose that minimizes the sum of a first photometric error between the reference frame and the current frame and a relative motion error, wherein the first photometric error varies with the relative pose and is the difference in gray values between the current frame and the reference frame, the relative pose represents the amount of change between a first pose and a second pose, the first pose is the pose of the electronic device detected when the image collector collects the current frame, the second pose is the pose of the electronic device detected when the image collector collects the reference frame, and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
the computing module is used for computing the first pose according to the determined relative pose and the second pose so as to realize the positioning of the electronic equipment;
the relative motion error is calculated according to the current frame pose, the reference frame pose and the relative pose by using a target formula, wherein the target formula is as follows:
MV_E = e_f^T · Ω_f · e_f

e_f = (T_e2w'_r^{-1} · T_e2w'_c)^{-1} · (T_ec · T_r2c · T_ce)

wherein MV_E represents the relative motion error, Ω_f represents a weight matrix, e_f represents the error term, T_e2w'_r represents the reference frame pose, T_e2w'_c represents the current frame pose, T_ec represents the conversion relation between the pose of the electronic device detected by the inertial sensor and the pose of the electronic device obtained by using the image frames acquired by the image collector, T_r2c represents the relative pose, and T_ce represents the conversion relation between the pose of the electronic device obtained by using the image frames acquired by the image collector and the pose of the electronic device detected by the inertial sensor.
15. The apparatus of claim 14, wherein the apparatus further comprises:
the first obtaining module is used for obtaining an image quality factor of the current frame, wherein the image quality factor is used for representing gradient changes of gray values of all pixel points in the reference frame;
an allocation module, configured to allocate a weight λ_ef to the relative motion error according to an allocation principle that the larger the value of the image quality factor, the smaller the weight allocated to the relative motion error;
correspondingly, the first determining module is configured to determine the relative pose that minimizes L_E + λ_ef · MV_E, wherein MV_E represents the relative motion error between the current frame and the reference frame and L_E represents the first photometric error between the current frame and the reference frame.
16. The apparatus of claim 14, wherein the apparatus further comprises:
a second obtaining module, configured to obtain an optical flow of the current frame;
a first module, configured to take the current frame as a new key frame when the optical flow is larger than a preset threshold;
a second module, configured to take, as initial poses, the poses corresponding to a preset number of pre-stored key frames and the pose corresponding to the new key frame;
a second determining module, configured to adjust each initial pose to determine the adjusted initial poses that minimize the sum of a first residual energy, a second residual energy and a third residual energy, wherein the first residual energy represents the sum of the second photometric errors between every two adjacent key frames corresponding to the adjusted initial poses, the second photometric error varying with the adjusted initial poses and being the difference of gray values between two adjacent key frames; the second residual energy represents the sum of the motion information of the electronic equipment represented by the poses, converted into the plane coordinate system, detected when the image collector collects each key frame; and the third residual energy represents the sum of the relative motion constraints, wherein a relative motion constraint is, for two adjacent key frames, a constraint calculated from the poses of the electronic equipment corresponding to the two adjacent key frames and the poses of the electronic equipment detected by the inertial sensor when the image collector collects the two adjacent key frames;
a third module, configured to take the new key frame as a new reference frame.
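[Editor's illustration] The key-frame bookkeeping and the windowed objective of claim 16 can be sketched as below. The class and function names are hypothetical, the poses are opaque objects, and the three residual lists stand in for the per-pair photometric errors, per-frame odometric residuals, and per-pair relative motion constraints named in the claim:

```python
from collections import deque

class KeyframeWindow:
    """Sliding-window key-frame manager: a frame is promoted to key frame
    when its optical flow exceeds a preset threshold; the window keeps a
    preset number of key-frame poses as initial poses for the optimization,
    and the newest key frame becomes the new reference frame."""
    def __init__(self, size, flow_threshold):
        self.poses = deque(maxlen=size)   # initial poses for the window
        self.flow_threshold = flow_threshold
        self.reference_pose = None

    def maybe_add_keyframe(self, pose, optical_flow):
        if optical_flow <= self.flow_threshold:
            return False                  # not enough motion: keep reference
        self.poses.append(pose)
        self.reference_pose = pose        # new key frame is the new reference
        return True

def total_residual_energy(photometric, odometric, relative_motion):
    """Objective of the window optimization: sum of the first, second and
    third residual energies (each argument is a list of residual terms)."""
    return sum(photometric) + sum(odometric) + sum(relative_motion)
```

In practice the pose adjustment itself would be carried out by a nonlinear least-squares solver over the window, with `total_residual_energy` as the quantity being minimized.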
17. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to carry out the method steps of any one of claims 1-13 when executing the program stored in the memory.
18. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1-13.
CN201910100000.1A 2019-01-31 2019-01-31 Positioning method, device and equipment Active CN111507132B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910100000.1A CN111507132B (en) 2019-01-31 2019-01-31 Positioning method, device and equipment


Publications (2)

Publication Number Publication Date
CN111507132A CN111507132A (en) 2020-08-07
CN111507132B true CN111507132B (en) 2023-07-07

Family

ID=71873978


Country Status (1)

Country Link
CN (1) CN111507132B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112179355B (en) * 2020-09-02 2023-05-26 西安交通大学 Attitude estimation method aiming at typical characteristics of luminosity curve
CN113409391B (en) * 2021-06-25 2023-03-03 浙江商汤科技开发有限公司 Visual positioning method and related device, equipment and storage medium
CN113701760B (en) * 2021-09-01 2024-02-27 火种源码(中山)科技有限公司 Robot anti-interference positioning method and device based on sliding-window pose graph optimization
CN113847907A (en) * 2021-09-29 2021-12-28 深圳市慧鲤科技有限公司 Positioning method and device, equipment and storage medium
CN113899364B (en) * 2021-09-29 2022-12-27 深圳市慧鲤科技有限公司 Positioning method and device, equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN107869989A (en) * 2017-11-06 2018-04-03 东北大学 A positioning method and system based on visual-inertial navigation information fusion
CN108492316A (en) * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 A positioning method and device for a terminal
CN108827315A (en) * 2018-08-17 2018-11-16 华南理工大学 Visual-inertial odometry pose estimation method and device based on manifold pre-integration
CN109211241A (en) * 2018-09-08 2019-01-15 天津大学 Autonomous unmanned aerial vehicle positioning method based on visual SLAM

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20180075609A1 (en) * 2016-09-12 2018-03-15 DunAn Precision, Inc. Method of Estimating Relative Motion Using a Visual-Inertial Sensor


Non-Patent Citations (3)

Title
Jin-Chun Piao et al.; "Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices"; Sensors; 2017-11-07; full text *
Yao Erliang et al.; "Vision-IMU based simultaneous localization and mapping algorithm for robots"; Chinese Journal of Scientific Instrument (仪器仪表学报); 2018-04-15 (No. 04); full text *
Xu Xiaosu et al.; "Graph optimization-based visual-inertial SLAM method in indoor environments"; Journal of Chinese Inertial Technology (中国惯性技术学报); 2017-06-30; Vol. 25, No. 3; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310052 5 / F, building 1, building 2, no.700 Dongliu Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.

GR01 Patent grant