CN111507132A - Positioning method, device and equipment

Positioning method, device and equipment

Info

Publication number
CN111507132A
Authority
CN
China
Prior art keywords
pose
frame
representing
current frame
image
Legal status
Granted
Application number
CN201910100000.1A
Other languages
Chinese (zh)
Other versions
CN111507132B (en)
Inventor
龙学雄
Current Assignee
Hangzhou Hikrobot Technology Co Ltd
Original Assignee
Hangzhou Hikrobot Technology Co Ltd
Application filed by Hangzhou Hikrobot Technology Co Ltd
Priority to CN201910100000.1A
Publication of CN111507132A
Application granted
Publication of CN111507132B
Active legal status
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present invention provide a positioning method, apparatus and device, applied to an electronic device. The method includes: obtaining the poses of the electronic device detected by an inertial sensor when an image collector collects a current frame and when it collects a reference frame; determining the relative pose that minimizes the sum of a photometric error between the reference frame and the current frame and a relative motion error detected by the inertial sensor; and obtaining the pose of the electronic device at the time of the current frame using the pose corresponding to the reference frame and the relative pose. Further, the optical flow of the current frame can be obtained; when the optical flow is greater than a preset threshold, the current frame is taken as a new key frame and, using the previously stored key frames, the pose of each key frame is optimized by minimizing the sum of the photometric errors and relative motion errors between adjacent key frames; the new key frame after pose optimization is then taken as the new reference frame. Applying the solution provided by the embodiments of the present invention improves the robustness of the positioning result in extreme environments.

Description

Positioning method, device and equipment
Technical Field
The present invention relates to the field of positioning and navigation technologies, and in particular, to a positioning method, apparatus, and device.
Background
Autonomous positioning is a core component of a robot's autonomous navigation system; on the basis of autonomous positioning, the robot can realize functions such as obstacle avoidance and autonomous navigation.
In the prior art, a robot can perform autonomous positioning using visual odometry. Specifically, a camera mounted on the robot captures images in real time, and visual odometry (including direct-method odometry and feature-point-method odometry) estimates the robot's own pose, i.e., obtains its own position and orientation, thereby completing autonomous positioning.
Although a prior-art robot can perform autonomous positioning using visual odometry, the camera is easily affected by factors such as illumination changes, moving objects, camera occlusion and low-texture scenes when capturing images, so the robot's pose estimate from the visual odometry drifts or is even lost completely. That is, when the robot performs autonomous positioning by the above method, the positioning result is affected by the various extreme factors in the environment, and the robustness is poor.
Disclosure of Invention
The embodiment of the invention aims to provide a positioning method, a positioning device and positioning equipment so as to improve the robustness of a positioning result. The specific technical scheme is as follows:
in one aspect of the present invention, a positioning method is provided, which is applied to an electronic device, where the electronic device is provided with an image collector and an inertial sensor for detecting inertial motion information of the electronic device, and the method includes:
acquiring the current frame collected by the image collector;
acquiring the current frame pose of the electronic equipment detected by the inertial sensor when the image collector collects the current frame;
acquiring the reference frame pose of the electronic device detected by the inertial sensor when the image collector collects the reference frame;
determining a relative pose that minimizes the sum of a first photometric error and a relative motion error between the reference frame and the current frame, where the first photometric error is a function of the relative pose and is the difference in gray values between the current frame and the reference frame; the relative pose represents the change between a first pose and a second pose; the first pose is the pose of the electronic device detected when the image collector collects the current frame; the second pose is the pose of the electronic device detected when the image collector collects the reference frame; and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
and calculating the first pose according to the determined relative pose and the second pose, so as to realize the positioning of the electronic device.
Optionally, before the step of determining the relative pose that minimizes the sum of the first photometric error and the relative motion error between the reference frame and the current frame, the method further includes:
obtaining an image quality factor of the current frame, wherein the image quality factor is used for representing gradient change of gray values of all pixel points in the reference frame;
assigning a weight λ_ef to the relative motion error according to the assignment rule that the larger the value of the image quality factor, the smaller the weight assigned to the relative motion error.
Accordingly, the step of determining the relative pose that minimizes the sum of the photometric error and the relative motion error between the reference frame and the current frame comprises:
determining the relative pose that minimizes L_E + λ_ef · MV_E, where MV_E represents the relative motion error between the current frame and the reference frame, and L_E represents the photometric error between the current frame and the reference frame.
Optionally, the assigning of the weight λ_ef to the relative motion error according to the assignment rule that the larger the value of the image quality factor, the smaller the weight assigned to the relative motion error, includes:
determining the weight λ_ef of the relative motion error according to the motion parameter represented by the pose of the current frame and the image quality factor.
Optionally, the determining of the weight λ_ef of the relative motion error according to the motion parameter represented by the pose of the current frame and the image quality factor includes:
calculating a first weight according to the motion parameter represented by the pose of the current frame;
calculating a second weight according to the image quality factor; and
determining the weight λ_ef of the relative motion error according to the first weight and the second weight.
Optionally, the step of calculating a first weight according to the motion parameter represented by the pose of the current frame includes:
acquiring the linear acceleration, centripetal acceleration and speed of the electronic equipment when the image collector collects the current frame;
and calculating a first weight value by using the obtained linear acceleration, centripetal acceleration and speed.
Optionally, the calculating of the first weight using the obtained linear acceleration, centripetal acceleration and speed includes:
calculating the first weight using the following expression:
λ_e = α · exp(−ω · (β₁·a_l + β₂·a_r + β₃·v))
where λ_e represents the first weight, a_l represents the linear acceleration of the electronic device when the image collector collects the current frame, a_r represents the centripetal acceleration of the electronic device when the image collector collects the current frame, v represents the speed of the electronic device when the image collector collects the current frame, and α, ω, β₁, β₂ and β₃ are preset coefficients.
Optionally, the step of calculating a second weight according to the image quality factor includes:
obtaining a quality factor of target pixel points selected by using a sliding window, where the target pixel points are pixel points in the current frame whose gray-value difference from adjacent pixel points is greater than a preset value;
determining the ratio of mature points remaining in the sliding window after the target pixel points are selected, where a mature point is a pixel point whose depth information is known; and
calculating the second weight according to the obtained quality factor and the determined ratio.
Optionally, the calculating of the second weight according to the obtained quality factor and the determined ratio includes:
calculating the second weight using the following expression:
λ_c = f1(q_image_quality)
[equation image: q_image_quality expressed in terms of p_pix_quality and p_ph_ratio]
where λ_c represents the second weight; f1() represents an exponential function that takes the quality factor as its argument and a preset value as its base; q_image_quality represents the image quality factor; p_pix_quality represents the quality factor; p_ph_ratio represents the ratio; σ_g represents the standard deviation of the grid gradient threshold when the target pixel points are selected using the sliding window; μ_g represents the mean of the grid gradient threshold when the target pixel points are selected using the sliding window; n_ph represents the number of mature points remaining in the sliding window after the target pixel points are selected; and n_desired represents the desired number of mature points in the sliding window after the target pixel points are selected.
Optionally, after the step of calculating the first pose according to the determined relative pose and the second pose to position the electronic device, the method further includes:
obtaining an optical flow of the current frame;
taking the current frame as a new key frame when the optical flow is larger than a preset threshold value;
taking each pose corresponding to a preset number of previously stored key frames and the pose corresponding to the new key frame as an initial pose, respectively;
adjusting each initial pose, and determining the adjusted initial poses that minimize the sum of a first residual energy, a second residual energy and a third residual energy, where the first residual energy represents the sum of second photometric errors between every two adjacent key frames corresponding to the adjusted initial poses, a second photometric error being a function of the adjusted initial poses and being the difference in gray values between two adjacent key frames; the second residual energy represents the sum of the errors of the plane motion constraint obtained after the poses of the electronic device detected when the image collector collects each key frame are converted into the plane coordinate system; and the third residual energy represents the sum of relative motion constraints, where a relative motion constraint is, for two adjacent key frames, a constraint calculated according to the poses of the electronic device when the image collector collects the two adjacent key frames and the poses of the electronic device detected by the inertial sensor for the two adjacent key frames;
and taking the new key frame as a new reference frame.
Optionally, the second residual energy is obtained by the following steps:
determining the pose of the image collector in the world coordinate system;
calculating the error of the plane motion constraint according to the determined pose; and
calculating the second residual energy according to the calculated error of the plane motion constraint.
Optionally, the error of the plane motion constraint and the second residual energy are respectively calculated using the following expressions:
e_g_i = X⁻¹ · (T_ec · T_c2w_i · T_ce)
E_g = Σ_{i=1}^{n} e_g_iᵀ · Ω_g · e_g_i
where E_g represents the second residual energy, Ω_g represents a weight matrix, n represents the number of key frames, e_g_i represents the error of the plane motion constraint for the i-th key frame, X⁻¹ represents the observation of planar motion, T_ec represents the conversion relationship between the pose of the image collector and the pose of the inertial sensor, T_c2w_i represents the pose of the image collector in the world coordinate system at the i-th key frame, and T_ce represents the conversion relationship between the pose of the inertial sensor and the pose of the image collector.
Optionally, the third residual energy is obtained by the following steps:
for each pair of adjacent key frames, calculating the relative motion constraint between the two adjacent key frames according to the poses of the inertial sensor and of the image collector in the world coordinate system at the times of the two adjacent key frames; and
calculating the third residual energy using the relative motion constraints between each pair of adjacent key frames.
Optionally, the relative motion constraint between two adjacent key frames and the third residual energy are respectively calculated using the following expressions:
e_e_th = (T_e2w′_t⁻¹ · T_e2w′_h)⁻¹ · (T_ec · T_c2w_t⁻¹ · T_c2w_h · T_ce)
E_e = Σ_{(t,h)} λ_th · e_e_thᵀ · Ω_e · e_e_th
where E_e represents the third residual energy, n represents the number of key frames (the sum runs over the adjacent key-frame pairs (t, h)), λ_th represents a weight, Ω_e represents a weight matrix, e_e_th represents the relative motion error between the t-th key frame and the h-th key frame, T_e2w′_t represents the pose of the inertial sensor in the world coordinate system at the t-th key frame, T_e2w′_h represents the pose of the inertial sensor in the world coordinate system at the h-th key frame, T_c2w_t represents the pose of the image collector in the world coordinate system at the t-th key frame, T_c2w_h represents the pose of the image collector in the world coordinate system at the h-th key frame, T_ec represents the conversion relationship between the pose of the image collector and the pose of the inertial sensor, and T_ce represents the conversion relationship between the pose of the inertial sensor and the pose of the image collector.
In another aspect of the present invention, a positioning apparatus is further provided, which is applied to an electronic device, wherein an image collector and an inertial sensor for detecting motion information of the electronic device are disposed on the electronic device, and the apparatus includes:
the first acquisition module is used for acquiring the current frame acquired by the image acquisition device;
the second acquisition module is used for acquiring the current frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the current frame;
the third acquisition module is used for acquiring the reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the reference frame;
a first determining module, configured to determine a relative pose that minimizes the sum of a first photometric error and a relative motion error between the reference frame and the current frame, where the first photometric error is a function of the relative pose and is the difference in gray values between the current frame and the reference frame; the relative pose represents the change between a first pose and a second pose; the first pose is the pose of the electronic device detected when the image collector collects the current frame; the second pose is the pose of the electronic device detected when the image collector collects the reference frame; and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
and the calculation module is used for calculating the first pose according to the determined relative pose and the second pose so as to realize the positioning of the electronic equipment.
Optionally, the apparatus further comprises:
a first obtaining module, configured to obtain an image quality factor of the current frame, where the image quality factor is used to represent a gradient change of a gray value of each pixel in the reference frame;
an assigning module, configured to assign a weight λ_ef to the relative motion error according to the assignment rule that the larger the value of the image quality factor, the smaller the weight assigned to the relative motion error;
correspondingly, the first determining module is configured to determine the relative pose that minimizes L_E + λ_ef · MV_E, where MV_E represents the relative motion error between the current frame and the reference frame, and L_E represents the photometric error between the current frame and the reference frame.
Optionally, the assigning module includes:
a determining submodule, configured to determine the weight λ_ef of the relative motion error according to the motion information represented by the pose of the current frame and the image quality factor.
Optionally, the determining submodule includes:
a first calculating unit, configured to calculate a first weight according to the motion parameter represented by the pose of the current frame;
a second calculating unit, configured to calculate a second weight according to the image quality factor; and
a determining unit, configured to determine the weight λ_ef of the relative motion error according to the first weight and the second weight.
Optionally, the first calculating unit includes:
a first obtaining subunit, configured to obtain the linear acceleration, centripetal acceleration and speed of the electronic device when the image collector collects the current frame; and
a first calculating subunit, configured to calculate the first weight using the obtained linear acceleration, centripetal acceleration and speed.
Optionally, the first calculating subunit is specifically configured to calculate the first weight using the following expression:
λ_e = α · exp(−ω · (β₁·a_l + β₂·a_r + β₃·v))
where λ_e represents the first weight, a_l represents the linear acceleration of the electronic device when the image collector collects the current frame, a_r represents the centripetal acceleration of the electronic device when the image collector collects the current frame, v represents the speed of the electronic device when the image collector collects the current frame, and α, ω, β₁, β₂ and β₃ are preset coefficients.
Optionally, the second calculating unit includes:
a second obtaining subunit, configured to obtain a quality factor of target pixel points selected by using a sliding window, where the target pixel points are pixel points in the current frame whose gray-value difference from adjacent pixel points is greater than a preset value;
a determining subunit, configured to determine the ratio of mature points remaining in the sliding window after the target pixel points are selected, where a mature point is a pixel point whose depth information is known; and
a second calculating subunit, configured to calculate the second weight according to the obtained quality factor and the determined ratio.
The second calculating subunit is specifically configured to calculate the second weight using the following expression:
λ_c = f1(q_image_quality)
[equation image: q_image_quality expressed in terms of p_pix_quality and p_ph_ratio]
where λ_c represents the second weight; f1() represents an exponential function that takes the quality factor as its argument and a preset value as its base; q_image_quality represents the image quality factor; p_pix_quality represents the quality factor; p_ph_ratio represents the ratio; σ_g represents the standard deviation of the grid gradient threshold when the target pixel points are selected using the sliding window; μ_g represents the mean of the grid gradient threshold when the target pixel points are selected using the sliding window; n_ph represents the number of mature points remaining in the sliding window after the target pixel points are selected; and n_desired represents the desired number of mature points in the sliding window after the target pixel points are selected.
Optionally, the apparatus further comprises:
a second obtaining module, configured to obtain an optical flow of the current frame;
a first setting module, configured to take the current frame as a new key frame when the optical flow is greater than a preset threshold;
a second setting module, configured to take each pose corresponding to a preset number of previously stored key frames and the pose corresponding to the new key frame as an initial pose, respectively;
a second determining module, configured to adjust each initial pose and determine the adjusted initial poses that minimize the sum of a first residual energy, a second residual energy and a third residual energy, where the first residual energy represents the sum of second photometric errors between every two adjacent key frames corresponding to the adjusted initial poses, a second photometric error being a function of the adjusted initial poses and being the difference in gray values between two adjacent key frames; the second residual energy represents the sum of the errors of the plane motion constraint obtained after the poses of the electronic device detected when the image collector collects each key frame are converted into the plane coordinate system; and the third residual energy represents the sum of relative motion constraints, where a relative motion constraint is, for two adjacent key frames, a constraint calculated according to the poses of the electronic device when the image collector collects the two adjacent key frames and the poses of the electronic device detected by the inertial sensor for the two adjacent key frames; and
a third setting module, configured to take the new key frame as a new reference frame.
Optionally, the second determining module is configured to:
determine the pose of the image collector in the world coordinate system;
calculate the error of the plane motion constraint according to the determined pose; and
calculate the second residual energy according to the calculated error of the plane motion constraint.
Optionally, the error of the plane motion constraint and the second residual energy are respectively represented by the following expressions:
e_g_i = X⁻¹ · (T_ec · T_c2w_i · T_ce)
E_g = Σ_{i=1}^{n} e_g_iᵀ · Ω_g · e_g_i
where E_g represents the second residual energy, Ω_g represents a weight matrix, n represents the number of key frames, e_g_i represents the error of the plane motion constraint for the i-th key frame, X⁻¹ represents the observation of planar motion, T_ec represents the conversion relationship between the pose of the image collector and the pose of the inertial sensor, T_c2w_i represents the pose of the image collector in the world coordinate system at the i-th key frame, and T_ce represents the conversion relationship between the pose of the inertial sensor and the pose of the image collector.
Optionally, the second determining module is configured to:
for each pair of adjacent key frames, calculate the relative motion constraint between the two adjacent key frames according to the poses of the inertial sensor and of the image collector in the world coordinate system at the times of the two adjacent key frames; and
calculate the third residual energy using the relative motion constraints between each pair of adjacent key frames.
Optionally, the relative motion constraint between two adjacent key frames and the third residual energy are respectively represented by the following expressions:
e_e_th = (T_e2w′_t⁻¹ · T_e2w′_h)⁻¹ · (T_ec · T_c2w_t⁻¹ · T_c2w_h · T_ce)
E_e = Σ_{(t,h)} λ_th · e_e_thᵀ · Ω_e · e_e_th
where E_e represents the third residual energy, n represents the number of key frames (the sum runs over the adjacent key-frame pairs (t, h)), λ_th represents a weight, Ω_e represents a weight matrix, e_e_th represents the relative motion error between the t-th key frame and the h-th key frame, T_e2w′_t represents the pose of the inertial sensor in the world coordinate system at the t-th key frame, T_e2w′_h represents the pose of the inertial sensor in the world coordinate system at the h-th key frame, T_c2w_t represents the pose of the image collector in the world coordinate system at the t-th key frame, T_c2w_h represents the pose of the image collector in the world coordinate system at the h-th key frame, T_ec represents the conversion relationship between the pose of the image collector and the pose of the inertial sensor, and T_ce represents the conversion relationship between the pose of the inertial sensor and the pose of the image collector.
In another aspect of the present invention, an electronic device is further provided, which includes a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
and a processor for implementing any of the above positioning methods when executing the program stored in the memory.
In another aspect of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, and the computer program is executed by a processor to implement any of the above positioning methods.
Embodiments of the present invention further provide a computer program product containing instructions which, when run on a computer, cause the computer to execute any of the above positioning methods.
According to the positioning method, apparatus and device provided by the embodiments of the present invention, the current frame collected by the image collector can be obtained; the poses of the electronic device detected by the inertial sensor when the image collector collects the current frame and the reference frame can be obtained; the relative pose that minimizes the sum of the photometric error and the relative motion error can be determined; and the pose of the electronic device when the image collector collects the current frame can be calculated from the determined relative pose and the pose when the image collector collects the reference frame, thereby realizing the positioning of the electronic device.
In the positioning process, the solution provided by the embodiments of the present invention takes into account the motion information detected by the inertial sensor and uses the relative motion information detected by the inertial sensor to constrain the relative pose estimated from the image collector. That is, when the image collector is affected by illumination while capturing images, the motion information detected by the inertial sensor can be used to ensure the accuracy of the positioning result, so the robustness of the positioning result in extreme environments can be improved. After the electronic device is positioned, the optical flow of the current frame is obtained; when the optical flow is greater than a preset threshold, the current frame is taken as a new key frame; each pose corresponding to a preset number of previously stored key frames and the pose corresponding to the new key frame are taken as initial poses; the pose corresponding to each key frame is optimized by adjusting each initial pose so as to minimize the sum of the photometric errors and relative motion constraints between any two key frames; and the new key frame after pose optimization is taken as the new reference frame. In the process of fusing the relative motion errors, different weights are given to the relative motion errors according to the state of the images and the state of the inertial sensor, which further improves the positioning accuracy and robustness.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a simple positioning method according to an embodiment of the present invention;
FIG. 2 is a diagram of an installation position provided by an embodiment of the present invention;
fig. 3 is a schematic diagram of a pose transformation relationship provided in an embodiment of the present invention;
fig. 4 is a schematic flowchart of a detailed positioning method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an optimization factor relationship provided in an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a positioning device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a positioning method provided in an embodiment of the present invention is applied to an electronic device on which an image collector and an inertial sensor for detecting motion information of the electronic device are disposed. Fig. 2 is a diagram of the installation position relationship between the image collector and the inertial sensor on the electronic device provided in an embodiment of the present invention.
Specifically, the positioning method includes:
and S100, acquiring the current frame acquired by the image acquisition device.
During operation of the electronic device, the image collector disposed on it collects images of the surrounding environment in real time; accordingly, the electronic device can obtain the image frames collected by the image collector.
S110: acquiring the current frame pose of the electronic device detected by the inertial sensor when the image collector collects the current frame.
S120: acquiring the reference frame pose of the electronic device detected by the inertial sensor when the image collector collects the reference frame.
The inertial sensor detects the pose of the electronic device in real time; that is, when the image collector collects the reference frame, the inertial sensor has already detected the reference frame pose of the electronic device, so the pose can be used directly whenever the reference frame pose is needed.
In one implementation, the inertial sensor may be a wheel encoder.
In practical applications, the first image frame is usually taken as the first reference frame. When a subsequent reference frame is to be determined, an image whose motion information differs from that of the latest previous reference frame by more than a threshold is determined; the determined image is taken as the latest key frame; the pose of the latest key frame is optimized; and the optimized key frame is taken as the new reference frame.
S130: determining the relative pose that minimizes the sum of the first photometric error and the relative motion error between the reference frame and the current frame.
The first photometric error is a function of the relative pose and is the difference in gray values between the current frame and the reference frame. The relative pose represents the change between the first pose and the second pose; the first pose is the pose of the electronic device detected when the image collector collects the current frame, and the second pose is the pose of the electronic device detected when the image collector collects the reference frame. The relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose. Here, the pose of the electronic device consists of its position in the plane coordinate system and its orientation.
The relative motion error is the difference between the inter-frame relative motion estimated from the image collector and the inter-frame relative motion estimated from the inertial sensor.
In one implementation, before S130, a weight λ_ef may be assigned to the relative motion error according to the image quality factor of the current frame.
The image quality factor is used to represent the gradient change of the gray values of the pixel points in the reference frame: the richer the gradient change of the gray values in the reference frame, the larger the corresponding image quality factor. For example, when the reference frame is an image frame collected by the image collector from a white wall, the image quality factor of the reference frame is minimal because all pixel points in the reference frame have the same pixel value.
The smaller the image quality factor, the worse the reference value of the reference frame, and correspondingly the worse the accuracy of the motion information derived from the image collector. Therefore, the weight λ_ef can be assigned to the relative motion error according to the assignment rule that the larger the value of the image quality factor, the smaller the weight assigned to the relative motion error. That is, when the motion information derived from the image collector is inaccurate, the influence of the motion information detected by the inertial sensor can be increased by increasing the weight assigned to the relative motion error.
In one implementation, after the weight λ_ef is assigned to the relative motion error, λ_ef can be adjusted according to the magnitude of the relative motion error. Specifically, when the relative motion error is large, the motion information detected by the inertial sensor differs greatly from the motion information derived from the image collector; in that case the motion information derived from the image collector may be inaccurate, so λ_ef can be increased to give the motion information detected by the inertial sensor more influence. Correspondingly, when the relative motion error is small, the difference between the motion information detected by the inertial sensor and that derived from the image collector is small; in that case the motion information derived from the image collector is accurate, so λ_ef can be decreased to reduce the influence of the motion information detected by the inertial sensor.
After the weight λ_ef of the relative motion error is determined, the relative pose that minimizes L_E + λ_ef · MV_E can be determined, where MV_E represents the relative motion error between the current frame and the reference frame, and L_E represents the photometric error between the current frame and the reference frame.
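As an illustration of how this weighted objective can be minimized, the following is a minimal sketch in Python. The helpers passed in for L_E and MV_E, the 3-DoF planar pose parameterization (x, y, yaw), and the use of scipy's Nelder-Mead optimizer are all assumptions for illustration, not prescribed by the embodiment.

```python
import numpy as np
from scipy.optimize import minimize

def solve_relative_pose(photometric_error, relative_motion_error, lambda_ef, pose0):
    """Find the relative pose minimizing L_E + lambda_ef * MV_E (S130).

    photometric_error(pose)     -> L_E  (scalar; hypothetical helper)
    relative_motion_error(pose) -> MV_E (scalar; hypothetical helper)
    pose0: initial guess, e.g. the relative pose predicted by the
    inertial sensor, parameterized here as (x, y, yaw).
    """
    objective = lambda pose: photometric_error(pose) + lambda_ef * relative_motion_error(pose)
    return minimize(objective, pose0, method="Nelder-Mead").x

# Toy usage with quadratic stand-ins for the two error terms.
L_E = lambda p: float(np.sum((p - np.array([0.30, 0.0, 0.05])) ** 2))
MV_E = lambda p: float(np.sum((p - np.array([0.28, 0.0, 0.04])) ** 2))
pose = solve_relative_pose(L_E, MV_E, lambda_ef=0.5, pose0=np.zeros(3))
```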
S140: calculating the first pose according to the determined relative pose and the second pose, thereby realizing the positioning of the electronic device.
That is, on the basis of the second pose of the electronic device when the image collector collects the reference frame, the determined relative pose is composed with it to obtain the first pose of the electronic device when the image collector collects the current frame; the pose of the electronic device is thus obtained and the electronic device is positioned.
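A minimal sketch of this composition step for a planar pose, assuming poses are represented as 3x3 homogeneous transforms (an illustrative choice, not mandated by the embodiment):

```python
import numpy as np

def se2_matrix(x, y, yaw):
    """Build a 3x3 homogeneous transform for a planar pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Second pose: pose at the reference frame; relative pose: output of S130.
T_ref = se2_matrix(1.0, 2.0, 0.10)   # example reference-frame pose
T_rel = se2_matrix(0.3, 0.0, 0.05)   # example relative pose
T_cur = T_ref @ T_rel                # first pose = reference pose composed with relative pose
```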
In one implementation of the embodiment of the present invention, in the process of assigning the weight λ_ef to the relative motion error according to the assignment rule that the larger the value of the image quality factor, the smaller the weight assigned to the relative motion error, the weight λ_ef of the relative motion error can be determined according to the motion parameter represented by the pose of the current frame and the image quality factor.
Specifically, the linear acceleration, the centripetal acceleration and the speed of the electronic device when the image acquisition device acquires the current frame can be obtained, and the first weight is calculated by using the obtained linear acceleration, centripetal acceleration and speed.
For example, the first weight may be calculated using the following expression:
λe=α*exp(-ω*(β1*al2*ar3*v))
wherein λ iseRepresents a first weight value, alRepresents the linear acceleration of the electronic device when the image collector collects the current frame, arRepresenting the centripetal acceleration of the electronic device when the image collector collects the current frame, v representing the speed of the electronic device when the image collector collects the current frame, α, w, β1、β2、β3Is a preset coefficient.
Then, the quality factor of the target pixel point selected by using the sliding window can be obtained, the proportion of the remaining mature points in the sliding window after the target pixel point is selected by using the sliding window is determined, and the second weight is calculated according to the obtained quality factor and the determined proportion.
For example, the second weight may be calculated using the following expression:
λc=f1(qimage_quality),
Figure BDA0001965450930000141
wherein λ iscRepresenting said second weight, f1() Representing an exponential function with said quality factor as an argument and a predetermined value as a base qimage_qualityRepresenting the picture quality factor, ppix_qualityRepresenting the quality factor of a target pixel point selected by using a sliding window, wherein the target pixel point is as follows: the pixel point in the current frame, p, whose gray difference value from the adjacent pixel point is greater than the preset valueph_ratioRepresenting the proportion, σ, of remaining mature points in a sliding window after a target pixel point is selected using the sliding windowgIndicating the standard deviation of the grid gradient threshold when the target pixel point is selected using a sliding window,
Figure BDA0001965450930000142
presentation using sliding window selectionMean value of grid gradient threshold, n, when selecting target pixelphRepresenting the number of remaining mature points in the sliding window after the target pixel point is selected by using the sliding window, ndesiredThe number of mature points in the sliding window expected after the target pixel point is selected by the sliding window is represented.
Each image area of a preset size in the sliding window can be called a grid; the difference between the gray values of two adjacent pixel points in a grid is called a grid gradient; and the sum of the median of the grid gradients and a preset value is the grid gradient threshold.
Finally, the weight λ_ef of the relative motion error is determined according to the first weight and the second weight; for example, the product of the calculated first weight and second weight is taken as the weight λ_ef.
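A sketch of this weighting step in Python. Because the exact forms of f1() and of the image quality factor survive only in the patent's equation images, the combination q_image_quality = p_pix_quality * p_ph_ratio and the exponential base below are assumptions; only the final product λ_ef = λ_e · λ_c is stated in the text.

```python
def second_weight(p_pix_quality, n_ph, n_desired, base=0.5):
    """lambda_c = f1(q_image_quality), with f1 an exponential function.

    Assumptions (the exact equations are in the patent's images):
    - p_ph_ratio = n_ph / n_desired (proportion of remaining mature points)
    - q_image_quality = p_pix_quality * p_ph_ratio
    - f1(q) = base ** q; a base below 1 makes the weight shrink as the
      image quality factor grows, matching the assignment rule.
    """
    p_ph_ratio = n_ph / n_desired
    q_image_quality = p_pix_quality * p_ph_ratio
    return base ** q_image_quality

lambda_e = 0.8                                                 # from first_weight(...)
lambda_c = second_weight(p_pix_quality=0.5, n_ph=150, n_desired=200)
lambda_ef = lambda_e * lambda_c                                # product, as in the text
```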
As the electronic device moves, the coordinates of the pixel points corresponding to the same spatial point change across the image frames collected by the image collector, while the gray values of those corresponding pixel points remain essentially unchanged. Based on this, for two image frames, if the coordinates of the pixel points in one frame are held fixed in a common coordinate system while the coordinates of the pixel points in the other frame are varied, the difference in gray values between pixel points at the same coordinates in the two frames changes accordingly. Therefore, by varying the coordinates of the pixel points in one of the frames until the difference in gray values between pixel points at the same coordinates in the two frames is minimized, the coordinate change of the pixel points in the varied frame represents the pose change of the electronic device between the moments the image collector collected the two frames.
Specifically, in the process of obtaining this pose change by varying the coordinates of the pixel points in one of the two image frames so as to minimize the gray-value difference between pixel points at the same coordinates, the gray-value difference can be calculated over all pixel points of the whole image frame, or over some of the pixel points in a specific region of the image frame.
In one implementation, the photometric error L_E can be represented by the following relationship:
L_E = Σ_{p∈P_i} Σ_{j∈obs(p)} w_p · ‖ (I_j[p′] − b_j) − (t_j·e^{a_j}) / (t_i·e^{a_i}) · (I_i[p] − b_i) ‖
where p represents one pixel coordinate in P_i; P_i represents the set of all pixel coordinates in the i-th frame; obs(p) represents the set of all observations of pixel p; w_p represents a weight; I_i and I_j represent the i-th frame and the j-th frame respectively; p′ represents the coordinate of the point p of the i-th frame re-projected onto the j-th frame; a_i and b_i represent the photometric parameters of the i-th frame; a_j and b_j represent the photometric parameters of the j-th frame; and t_i and t_j represent the exposure times of the i-th frame and the j-th frame respectively.
The coordinate p′ of the point p of the i-th frame re-projected onto the j-th frame is:
p′ = π_c(R · π_c⁻¹(p, d_p) + t)
where π_c represents a preset imaging model, d_p represents the depth of the point p, and R and t are the relative pose between the pose corresponding to the current frame and the pose corresponding to the reference frame, as detected by the image collector.
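A minimal sketch of evaluating one photometric residual term under this model. The pinhole intrinsic matrix K stands in for the preset imaging model π_c, and the nearest-neighbour intensity lookup stands in for proper bilinear interpolation; both are illustrative assumptions.

```python
import numpy as np

def photometric_residual(Ii, Ij, p, d_p, R, t, K,
                         a_i=0.0, b_i=0.0, a_j=0.0, b_j=0.0,
                         t_i=1.0, t_j=1.0):
    """One term of L_E for pixel p of frame i observed in frame j.

    Ii, Ij: grayscale images (2D numpy arrays); p: integer (u, v) pixel
    coordinate in frame i; d_p: depth of p; (R, t): relative pose;
    K: 3x3 pinhole intrinsics standing in for pi_c (an assumption).
    """
    # Back-project p to 3D (pi_c^-1), then transform by the relative pose.
    uv1 = np.array([p[0], p[1], 1.0])
    X = d_p * np.linalg.inv(K) @ uv1
    Xj = R @ X + t
    # Re-project into frame j (pi_c).
    uvw = K @ Xj
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    # Nearest-neighbour intensity lookup (bilinear in practice).
    Ij_p = Ij[int(round(v)), int(round(u))]
    Ii_p = Ii[p[1], p[0]]
    # Exposure/photometric correction, as in the L_E expression.
    scale = (t_j * np.exp(a_j)) / (t_i * np.exp(a_i))
    return (Ij_p - b_j) - scale * (Ii_p - b_i)
```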
The relative motion error relates the current frame pose and the reference frame pose detected by the inertial sensor to the relative pose of the electronic device estimated from the obtained current frame and reference frame.
In one implementation, the relative motion error MV_E can be represented by the following relationship:
MV_E = e_fᵀ · Ω_f · e_f
e_f = (T_e2w′_r⁻¹ · T_e2w′_c)⁻¹ · (T_ec · T_r2c · T_ce)
where Ω_f represents a weight, e_f represents the error, T_e2w′_r represents the pose of the reference frame, T_e2w′_c represents the pose of the current frame, T_ec represents the conversion relationship between the pose of the electronic device detected by the inertial sensor and the pose of the electronic device obtained using the image frames collected by the image collector, and T_r2c represents the relative pose.
The positions of an image collector and an inertial sensor which are installed on the electronic equipment are fixed, and accordingly a fixed conversion relation exists between the pose of the electronic equipment detected by the inertial sensor and the pose of the electronic equipment obtained by utilizing the image frames collected by the image collector.
Fig. 3 is a schematic diagram, provided in an embodiment of the present invention, of the conversion relationship between the pose of the electronic device detected by the inertial sensor and the pose of the electronic device obtained using the image frames collected by the image collector.
In the figure, W′ is the coordinate system whose coordinate origin is the inertial sensor origin O_e, and W is the coordinate system whose coordinate origin is the image collector origin O_c; the conversion relationship between the two is T_ec. The pose T_w2c_t obtained from the image frames collected by the image collector and the pose T_w′2e_t detected by the inertial sensor satisfy the following conversion relationship:
T_w′2e_t = T_ec · T_w2c_t · T_ce
The pose T_c_t2h obtained from the image frames collected by the image collector and the pose T_e_t2h detected by the inertial sensor satisfy the following conversion relationship:
T_e_t2h = T_ec · T_c_t2h · T_ce
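A minimal numeric sketch of this conversion, assuming 4x4 homogeneous transforms; the extrinsic values below are illustrative, as the actual T_ec comes from calibration:

```python
import numpy as np

def inv(T):
    """Invert a 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Example extrinsic between image collector and inertial sensor
# (illustrative values; obtained by calibration in practice).
T_ec = np.eye(4)
T_ec[:3, 3] = [0.10, 0.00, 0.05]
T_ce = inv(T_ec)

# Camera pose from the image frames -> equivalent inertial-sensor pose.
T_w2c_t = np.eye(4)                   # example camera pose at frame t
T_wp2e_t = T_ec @ T_w2c_t @ T_ce      # T_w'2e_t = T_ec * T_w2c_t * T_ce
```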
according to the positioning method provided by the embodiment of the invention, the motion information detected by the inertial sensor for detecting the motion information of the electronic equipment is comprehensively considered, namely, the motion information detected by the inertial sensor can be utilized to ensure the accuracy of the positioning result after the image collector is influenced by illumination when shooting the image, so that the robustness of the positioning result can be improved.
Referring to fig. 4, which is a schematic flowchart of a detailed positioning method according to an embodiment of the present invention, S400 to S404 in the figure are the same as S100 to S140 described above and are not repeated here. After the electronic device has been positioned, the following can be performed:
S405: obtaining the optical flow of the current frame;
and S406, taking the current frame as a new key frame when the optical flow is larger than a preset threshold value. Correspondingly, when the optical flow is smaller than the preset threshold value, returning to execute S400;
the larger the optical flow is, the larger the difference between the image information contained in the current frame and the image information contained in the reference frame is, that is, the higher the reference value of the image information contained in the current frame is. Therefore, the current frame can be regarded as a new key frame when the optical flow of the current frame is greater than the preset threshold.
S407: taking each pose corresponding to a preset number of previously stored key frames and the pose corresponding to the new key frame as an initial pose, respectively;
S408: adjusting each initial pose, and determining the adjusted initial poses that minimize the sum of the first residual energy, the second residual energy and the third residual energy.
The first residual energy represents the sum of second photometric errors between every two adjacent key frames corresponding to the adjusted initial poses, where a second photometric error is a function of the adjusted initial poses and is the difference in gray values between two adjacent key frames. The second residual energy represents the sum of the errors of the plane motion constraint obtained after the poses of the electronic device detected when the image collector collects each key frame are converted into the plane coordinate system. The third residual energy represents the sum of the relative motion constraints, where a relative motion constraint is, for two adjacent key frames, a constraint calculated according to the poses of the electronic device when the image collector collects the two adjacent key frames and the poses of the electronic device detected by the inertial sensor for the two adjacent key frames.
After the current frame is taken as a new key frame, since the pose of the electronic device when the current frame was collected (i.e., the first pose) has been obtained, the sum of the first residual energy, the second residual energy and the third residual energy can be minimized to further optimize the pose of each key frame and the depths of the mature points in the key frames.
In one implementation, when the image collector is a binocular image collector, it generally collects image frames with a left lens and a right lens. When calculating the first residual energy, a first sum of the gray-value errors between any two key frames collected by the lens on one side can be calculated, a second sum of the gray-value errors between the image frames collected by the left and right lenses can be calculated, and the sum of the first sum and the second sum is taken as the first residual energy, as in the sketch below.
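A structural sketch of this binocular case, with hypothetical helpers photometric_error_between (temporal, one side) and stereo_photometric_error (left to right); the decomposition into two sums is from the text, while restricting the first sum to adjacent keyframe pairs is an assumption.

```python
def first_residual_energy(left_keyframes, right_keyframes, poses,
                          photometric_error_between, stereo_photometric_error):
    """First residual energy for a binocular image collector:
    temporal photometric errors on one side (first sum) plus
    left-right photometric errors per keyframe (second sum)."""
    # First sum: photometric error between keyframes of the left lens.
    first = sum(photometric_error_between(left_keyframes[k], left_keyframes[k + 1],
                                          poses[k], poses[k + 1])
                for k in range(len(left_keyframes) - 1))
    # Second sum: photometric error between the left and right images.
    second = sum(stereo_photometric_error(l, r)
                 for l, r in zip(left_keyframes, right_keyframes))
    return first + second
```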
In one implementation, the pose of the image collector in the world coordinate system can be determined; the error of the plane motion constraint is calculated according to the determined pose; and the second residual energy is calculated according to the calculated error of the plane motion constraint.
Specifically, the error of the plane motion constraint and the second residual energy can be represented by the following expressions:
e_g_i = X⁻¹ · (T_ec · T_c2w_i · T_ce)
E_g = Σ_{i=1}^{n} e_g_iᵀ · Ω_g · e_g_i
where E_g represents the second residual energy, Ω_g represents the information matrix, n represents the number of key frames, e_g_i represents the error of the plane motion constraint for the i-th key frame, X⁻¹ represents the observation of planar motion, T_ec represents the relative pose relationship between the image collector and the inertial sensor, T_c2w_i represents the pose of the image collector in the world coordinate system at the i-th key frame, and T_ce represents the relative pose relationship between the inertial sensor and the image collector.
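A sketch of one way to score how far a keyframe pose deviates from planar motion. Extracting the out-of-plane components (z translation, roll, pitch) as the residual is an assumption; the patent's exact X⁻¹ observation survives only in its equation images.

```python
import numpy as np

def plane_motion_error(T_ec, T_c2w_i, T_ce):
    """Deviation of the i-th keyframe from planar motion.

    Maps the camera pose into the inertial-sensor frame
    (T_ec @ T_c2w_i @ T_ce) and reads off the out-of-plane terms.
    This choice of residual is an illustrative assumption.
    """
    T = T_ec @ T_c2w_i @ T_ce
    z = T[2, 3]
    roll = np.arctan2(T[2, 1], T[2, 2])
    pitch = -np.arcsin(np.clip(T[2, 0], -1.0, 1.0))
    return np.array([z, roll, pitch])

def second_residual_energy(errors, Omega_g):
    """E_g = sum_i e_g_i^T Omega_g e_g_i over the keyframes."""
    return sum(float(e @ Omega_g @ e) for e in errors)
```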
In one implementation, for each pair of adjacent key frames, the relative motion constraint between the two adjacent key frames is calculated according to the poses of the inertial sensor and of the image collector in the world coordinate system at the times of the two adjacent key frames, and the third residual energy is calculated using the relative motion constraints between each pair of adjacent key frames.
Specifically, the third residual energy can be represented by the following expressions:
e_e_th = (T_e2w′_t⁻¹ · T_e2w′_h)⁻¹ · (T_ec · T_c2w_t⁻¹ · T_c2w_h · T_ce)
E_e = Σ_{(t,h)} λ_th · e_e_thᵀ · Ω_e · e_e_th
where E_e represents the third residual energy, n represents the number of key frames (the sum runs over the adjacent key-frame pairs (t, h)), λ_th represents a weight, Ω_e represents a weight matrix, e_e_th represents the relative motion constraint between the t-th key frame and the h-th key frame, T_e2w′_t represents the pose of the inertial sensor in the world coordinate system at the t-th key frame, T_e2w′_h represents the pose of the inertial sensor in the world coordinate system at the h-th key frame, T_c2w_t represents the pose of the image collector in the world coordinate system at the t-th key frame, T_c2w_h represents the pose of the image collector in the world coordinate system at the h-th key frame, T_ec represents the relative pose relationship between the image collector and the inertial sensor, and T_ce represents the relative pose relationship between the inertial sensor and the image collector.
In one implementation, in the process of calculating the third residual energy, a weight can be assigned to each calculated relative motion constraint, and the third residual energy is obtained by weighted calculation. Specifically, the assigned weight can be the product of the λ_ef values corresponding to the two adjacent key frames.
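A minimal sketch of accumulating this weighted residual over adjacent keyframe pairs, assuming 4x4 homogeneous poses; the small-angle log map and the form of e_e_th reconstructed above are themselves inferences from the symbol definitions.

```python
import numpy as np

def pose_error_vector(T_err):
    """Map a near-identity 4x4 transform to a 6-vector (translation plus
    small-angle rotation), a small-angle approximation of the SE(3) log map."""
    t = T_err[:3, 3]
    R = T_err[:3, :3]
    w = 0.5 * np.array([R[2, 1] - R[1, 2],
                        R[0, 2] - R[2, 0],
                        R[1, 0] - R[0, 1]])
    return np.concatenate([t, w])

def third_residual_energy(T_e2w_list, T_c2w_list, T_ec, T_ce, lambdas, Omega_e):
    """E_e = sum over adjacent keyframe pairs of lambda_th * e^T Omega_e e,
    comparing inertial relative motion with camera-derived relative motion."""
    inv = np.linalg.inv
    E = 0.0
    for k in range(len(T_e2w_list) - 1):
        rel_inertial = inv(T_e2w_list[k]) @ T_e2w_list[k + 1]
        rel_camera = T_ec @ (inv(T_c2w_list[k]) @ T_c2w_list[k + 1]) @ T_ce
        e = pose_error_vector(inv(rel_inertial) @ rel_camera)
        E += lambdas[k] * float(e @ Omega_e @ e)
    return E
```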
S409: taking the new key frame as a new reference frame, and returning to S400.
That is, taking the poses corresponding to the key frames as initial pose values, the pose of each key frame and the depths of the mature points in the key frames are optimized by adjusting the initial pose values so as to minimize the sum of the first residual energy, the second residual energy and the third residual energy.
Fig. 5 is a schematic diagram of the optimization factor relationship provided in the embodiment of the present invention, in which the monocular photometric constraint is E_p, the binocular photometric constraint is E_LR, the plane motion error is E_g, and the inertial sensor motion constraint error is E_e. Compared with the original direct-method visual odometry, the visual-inertial odometry fusing inertial sensor data in the technical solution of the present invention adds more constraints and can obtain a more robust positioning result.
Referring to fig. 6, a schematic structural diagram of a positioning apparatus provided in an embodiment of the present invention is shown. The apparatus is applied to an electronic device on which an image collector and an inertial sensor for detecting motion information of the electronic device are disposed, and the apparatus includes:
a first obtaining module 600, configured to obtain a current frame collected by the image collector;
a second obtaining module 610, configured to obtain a current frame pose of the electronic device detected by the inertial sensor when the image collector collects the current frame;
a third obtaining module 620, configured to obtain a reference frame pose of the electronic device detected by the inertial sensor when the image acquirer acquires the reference frame;
a first determining module 630, configured to determine a relative pose that minimizes a sum of a first photometric error between the reference frame and the current frame and a relative motion error detected by the inertial sensor, where the first photometric error is a function of the relative pose and is a difference between gray values of the current frame and the reference frame, the relative pose represents an amount of change between a first pose and a second pose, the first pose is a pose of the electronic device detected when the image acquirer acquires the current frame, the second pose is a pose of the electronic device detected when the image acquirer acquires the reference frame, and the relative motion error is an error calculated according to the current frame pose, the reference frame pose, and the relative pose;
and the calculating module 640 is configured to calculate the first pose according to the determined relative pose and the second pose, so as to position the electronic device.
In an implementation manner of the embodiment of the present invention, the apparatus further includes:
a first obtaining module, configured to obtain an image quality factor of the current frame, where the image quality factor is used to represent a gradient change of a gray value of each pixel in the reference frame;
a distribution module, configured to assign a weight λ_ef to the relative motion error according to the distribution principle that the larger the value of the image quality factor is, the smaller the weight assigned to the relative motion error is;
Accordingly, the first determining module 630 is configured to determine the relative pose that minimizes L_E + λ_ef · MV_E, where MV_E represents the relative motion error between the current frame and the reference frame, and L_E represents the first photometric error between the current frame and the reference frame.
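A minimal sketch of this weighted tracking objective, assuming a 6-DoF relative pose vector and toy quadratic surrogates for L_E and MV_E (the real photometric and relative motion errors are defined elsewhere in this document):

```python
import numpy as np
from scipy.optimize import minimize

def tracking_cost(xi, L_E, MV_E, lambda_ef):
    """Objective of the first determining module: L_E(xi) + lambda_ef * MV_E(xi)."""
    return L_E(xi) + lambda_ef * MV_E(xi)

xi_photo = np.array([0.010, 0.0, 0.020, 0.10, 0.0, 0.05])  # pose favored by image alignment
xi_imu   = np.array([0.012, 0.0, 0.018, 0.10, 0.0, 0.05])  # pose favored by the inertial sensor
L_E  = lambda xi: float((xi - xi_photo) @ (xi - xi_photo))
MV_E = lambda xi: float((xi - xi_imu) @ (xi - xi_imu))

res = minimize(tracking_cost, x0=np.zeros(6), args=(L_E, MV_E, 0.5))
relative_pose = res.x  # blends both cues; a larger lambda_ef trusts the inertial sensor more
```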
In an implementation manner of the embodiment of the present invention, the allocation module includes:
a determining submodule, configured to determine the weight λ_ef of the relative motion error according to the motion parameter represented by the current frame pose and the image quality factor.
In an implementation manner of the embodiment of the present invention, the determining sub-module includes:
the first calculation unit is used for calculating a first weight according to the motion parameter represented by the pose of the current frame;
the second calculation unit is used for calculating a second weight according to the image quality factor;
a determining unit, configured to determine the weight λ_ef of the relative motion error according to the first weight and the second weight.
In an implementation manner of the embodiment of the present invention, the first calculating unit includes:
the first obtaining subunit is used for obtaining the linear acceleration, the centripetal acceleration and the speed of the electronic equipment when the image collector collects the current frame;
and the first calculating subunit calculates a first weight by using the obtained linear acceleration, centripetal acceleration and speed.
In one implementation manner of the embodiment of the present invention, the first calculating subunit is specifically configured to calculate the first weight using the following expression:

λ_e = α * exp(-ω * (β1*a_l + β2*a_r + β3*v))

wherein λ_e represents the first weight, a_l represents the linear acceleration of the electronic device when the image collector collects the current frame, a_r represents the centripetal acceleration of the electronic device when the image collector collects the current frame, v represents the speed of the electronic device when the image collector collects the current frame, and α, ω, β1, β2, β3 are preset coefficients.
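Transcribed directly, the first weight can be computed as below; the coefficient values are illustrative placeholders, since the document does not give concrete numbers.

```python
import numpy as np

def first_weight(a_l, a_r, v, alpha=1.0, omega=0.1, beta1=1.0, beta2=1.0, beta3=0.5):
    """lambda_e = alpha * exp(-omega * (beta1*a_l + beta2*a_r + beta3*v)):
    the more aggressive the motion, the smaller the weight."""
    return alpha * np.exp(-omega * (beta1 * a_l + beta2 * a_r + beta3 * v))

print(first_weight(a_l=0.2, a_r=0.1, v=0.5))  # gentle motion keeps the weight near alpha
```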
In an implementation manner of the embodiment of the present invention, the second calculating unit includes:
a second obtaining subunit, configured to obtain the quality factor of target pixel points selected using a sliding window, where the target pixel points are: pixel points in the current frame whose gray-value difference from adjacent pixel points is greater than a preset value;
a determining subunit, configured to determine the proportion of the remaining mature points in the sliding window after the target pixel points are selected using the sliding window, where the mature points are: pixel points whose depth information is known;
and the second calculating subunit is used for calculating a second weight according to the obtained quality factor and the determined proportion.
In an implementation manner of the embodiment of the present invention, the second calculating subunit is specifically configured to calculate the second weight using the following expressions:

λ_c = f1(q_image_quality)

q_image_quality is given as a function of p_pix_quality, p_ph_ratio, σ_g and μ_g (formula image in the source)

wherein λ_c represents the second weight, f1() represents an exponential function with the quality factor as the argument and a preset value as the base, q_image_quality represents the image quality factor, p_pix_quality represents the quality factor of the target pixel points selected using a sliding window, where the target pixel points are pixel points in the reference frame whose gray-value difference from adjacent pixel points is greater than a preset value, p_ph_ratio represents the proportion of the remaining mature points in the sliding window after the target pixel points are selected using the sliding window, σ_g represents the standard deviation of the grid gradient threshold when selecting the target pixel points using a sliding window, μ_g represents the mean of the grid gradient threshold when selecting the target pixel points using a sliding window, n_ph represents the number of the remaining mature points after the target pixel points are selected using the sliding window, and n_desired represents the desired number of mature points in the sliding window after the target pixel points are selected using the sliding window.
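Because the expression for q_image_quality survives only as a formula image, the sketch below assumes a simple combination — q_image_quality = p_pix_quality * p_ph_ratio with p_ph_ratio = n_ph / n_desired — and an exponential f1 with a preset base; treat this as one plausible reading, not the patent's formula.

```python
def second_weight(p_pix_quality, n_ph, n_desired, base=2.0):
    """lambda_c = f1(q_image_quality), with f1 an exponential in the quality factor."""
    p_ph_ratio = n_ph / float(n_desired)          # proportion of remaining mature points
    q_image_quality = p_pix_quality * p_ph_ratio  # assumed combination of the two factors
    return base ** q_image_quality

print(second_weight(p_pix_quality=0.8, n_ph=150, n_desired=200))  # ~ 2**0.6
```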
In an implementation manner of the embodiment of the present invention, the apparatus further includes:
a second obtaining module, configured to obtain an optical flow of the current frame;
a first designation module, configured to take the current frame as a new key frame when the optical flow is greater than a preset threshold;
a second designation module, configured to respectively take the poses corresponding to a preset number of previously stored key frames and the pose corresponding to the new key frame as initial poses;
a second determining module, configured to adjust each initial pose to determine the adjusted initial poses that minimize the sum of a first residual energy, a second residual energy and a third residual energy, where the first residual energy represents the sum of the second photometric errors between every two adjacent key frames corresponding to the adjusted initial poses, the second photometric error being the difference of gray values between two adjacent key frames caused by the adjusted initial poses; the second residual energy represents the sum of the motion information represented by the poses of the electronic device, detected when the image collector collects each key frame, after conversion into the plane coordinate system; and the third residual energy represents the sum of relative motion constraints, each relative motion constraint being, for two adjacent key frames, a constraint calculated from the poses of the electronic device when the image collector collects the two adjacent key frames and the poses of the electronic device detected by the inertial sensor corresponding to the two adjacent key frames;
and a third designation module, configured to take the new key frame as a new reference frame.
In an implementation manner of the embodiment of the present invention, the second determining module is configured to:
determining the pose of the image collector under a world coordinate system;
calculating the error of plane motion constraint according to the determined pose;
a second residual energy is calculated from the calculated error of the planar motion constraint.
In one implementation, the error of the planar motion constraint and the second residual energy may be represented by expressions of the following form:

e_g_i = log( X^{-1} · (T_ec · T_c2w_i · T_ce) )

E_g = Σ_{i=1..n} e_g_i^T · Ω_g · e_g_i

wherein E_g represents the second residual energy, Ω_g represents a weight matrix, n represents the number of key frames, e_g_i represents the error of the planar motion constraint for the i-th key frame, X^{-1} represents the observation of the planar motion, T_c2w_i represents the pose of the image collector in the world coordinate system at the i-th key frame, T_ec represents the conversion relationship between the pose of the image collector and the pose of the inertial sensor, and T_ce represents the conversion relationship between the pose of the inertial sensor and the pose of the image collector.
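Under the reconstruction above, a sketch of the second residual energy: each image collector pose is mapped into the inertial sensor frame through the extrinsics and its out-of-plane components (height change, roll, pitch) are penalized. The component extraction is an assumed reading of the planar motion observation, not the patent's exact residual.

```python
import numpy as np

def planar_motion_energy(T_c2w_list, T_ec, T_ce, Omega_g):
    """Sum of e_g_i^T * Omega_g * e_g_i over the key frames in the window."""
    E_g = 0.0
    for T_c2w in T_c2w_list:
        T_e2w = T_ec @ T_c2w @ T_ce              # inertial-sensor pose via extrinsics (assumed direction)
        z = T_e2w[2, 3]                          # out-of-plane translation
        roll = np.arctan2(T_e2w[2, 1], T_e2w[2, 2])
        pitch = -np.arcsin(np.clip(T_e2w[2, 0], -1.0, 1.0))
        e_g = np.array([z, roll, pitch])         # deviation from planar motion
        E_g += e_g @ Omega_g @ e_g
    return E_g
```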
In an implementation manner of the embodiment of the present invention, optionally, the second determining module is configured to:
for each pair of adjacent key frames, calculating the relative motion constraint between the two key frames according to the poses of the inertial sensor and the image collector in the world coordinate system at the time of each of the two key frames;
and calculating the third residual energy using the relative motion constraints between all pairs of adjacent key frames.
In one implementation, the relative motion constraint between two adjacent key frames and the third residual energy are respectively expressed by expressions of the following form:

e_e_th = log( (T_e2w'_h^{-1} · T_e2w'_t)^{-1} · (T_ec · T_c2w_h^{-1} · T_c2w_t · T_ce) )

E_e = Σ_(t,h) λ_th · e_e_th^T · Ω_e · e_e_th

wherein E_e represents the third residual energy, the sum runs over pairs of adjacent key frames among the n key frames, n represents the number of key frames, λ_th represents the weight, Ω_e represents a weight matrix, e_e_th represents the relative motion constraint between the t-th frame key frame and the h-th frame key frame, T_e2w'_t represents the pose of the inertial sensor in the world coordinate system at the t-th frame key frame, T_e2w'_h represents the pose of the inertial sensor in the world coordinate system at the h-th frame key frame, T_c2w_t represents the pose of the image collector in the world coordinate system at the t-th frame key frame, T_c2w_h represents the pose of the image collector in the world coordinate system at the h-th frame key frame, T_ec represents the conversion relationship between the pose of the image collector and the pose of the inertial sensor, and T_ce represents the conversion relationship between the pose of the inertial sensor and the pose of the image collector.
An embodiment of the present invention further provides an electronic device, as shown in fig. 7, including a processor 001, a communication interface 002, a memory 003 and a communication bus 004, where the processor 001, the communication interface 002 and the memory 003 communicate with each other through the communication bus 004.
a memory 003 for storing a computer program;
the processor 001 is configured to implement the positioning method provided in the embodiment of the present invention when executing the program stored in the memory 003.
Specifically, the positioning method is applied to an electronic device, wherein an image collector and an inertial sensor for detecting motion information of the electronic device are arranged on the electronic device, and the method includes:
acquiring a current frame acquired by the image acquisition device;
acquiring the current frame pose of the electronic equipment detected by the inertial sensor when the image collector collects the current frame;
acquiring a reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the reference frame;
determining a relative pose that minimizes the sum of a first photometric error between the reference frame and the current frame and a relative motion error detected by the inertial sensor, wherein the first photometric error is a function of the relative pose and is the difference of gray values between the current frame and the reference frame, the relative pose represents the variation between a first pose and a second pose, the first pose is the pose of the electronic device detected when the image collector collects the current frame, the second pose is the pose of the electronic device detected when the image collector collects the reference frame, and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
and calculating to obtain the first pose according to the determined relative pose and the second pose, so as to realize the positioning of the electronic equipment.
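The final step is plain pose composition. A minimal worked example, assuming 4x4 homogeneous transforms and that the relative pose right-multiplies the reference pose (the composition convention is not fixed by the text):

```python
import numpy as np

T_ref = np.eye(4); T_ref[:3, 3] = [1.0, 2.0, 0.0]  # second pose: device at the reference frame
T_rel = np.eye(4); T_rel[:3, 3] = [0.1, 0.0, 0.0]  # determined relative pose
T_cur = T_ref @ T_rel                              # first pose: device at the current frame
print(T_cur[:3, 3])                                # -> [1.1 2.  0. ]
```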
It should be noted that other embodiments of the positioning method implemented by the processor 001 executing the program stored in the memory 003 are the same as the embodiments provided in the foregoing method embodiments, and are not described again here.
In each scheme provided by the embodiment of the invention, in the positioning process, the motion information detected by the inertial sensor for detecting the motion information of the electronic equipment is comprehensively considered, namely, after the image collector is influenced by illumination when shooting the image, the motion information detected by the inertial sensor can be utilized to ensure the accuracy of the positioning result, so that the robustness of the positioning result can be improved.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
In yet another aspect of the present invention, there is also provided a computer-readable storage medium, which stores instructions that, when executed on a computer, cause the computer to execute the positioning method provided by the embodiment of the present invention.
Specifically, the positioning method is applied to an electronic device, wherein an image collector and an inertial sensor for detecting motion information of the electronic device are arranged on the electronic device, and the method includes:
acquiring a current frame acquired by the image acquisition device;
acquiring the current frame pose of the electronic equipment detected by the inertial sensor when the image collector collects the current frame;
acquiring a reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the reference frame;
determining a relative pose that minimizes the sum of a first photometric error between the reference frame and the current frame and a relative motion error detected by the inertial sensor, wherein the first photometric error is a function of the relative pose and is the difference of gray values between the current frame and the reference frame, the relative pose represents the variation between a first pose and a second pose, the first pose is the pose of the electronic device detected when the image collector collects the current frame, the second pose is the pose of the electronic device detected when the image collector collects the reference frame, and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
and calculating to obtain the first pose according to the determined relative pose and the second pose, so as to realize the positioning of the electronic equipment.
It should be noted that other embodiments of the positioning method implemented by the computer-readable storage medium are the same as the embodiments provided in the foregoing method embodiments, and are not described herein again.
In each scheme provided by the embodiment of the invention, in the positioning process, the motion information detected by the inertial sensor for detecting the motion information of the electronic equipment is comprehensively considered, namely, after the image collector is influenced by illumination when shooting the image, the motion information detected by the inertial sensor can be utilized to ensure the accuracy of the positioning result, so that the robustness of the positioning result can be improved.
In another aspect of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to execute the positioning method provided by the embodiment of the present invention.
Specifically, the positioning method is applied to an electronic device, wherein an image collector and an inertial sensor for detecting motion information of the electronic device are arranged on the electronic device, and the method includes:
acquiring a current frame acquired by the image acquisition device;
acquiring the current frame pose of the electronic equipment detected by the inertial sensor when the image collector collects the current frame;
acquiring a reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the reference frame;
determining a relative pose that minimizes the sum of a first photometric error between the reference frame and the current frame and a relative motion error detected by the inertial sensor, wherein the first photometric error is a function of the relative pose and is the difference of gray values between the current frame and the reference frame, the relative pose represents the variation between a first pose and a second pose, the first pose is the pose of the electronic device detected when the image collector collects the current frame, the second pose is the pose of the electronic device detected when the image collector collects the reference frame, and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
and calculating to obtain the first pose according to the determined relative pose and the second pose, so as to realize the positioning of the electronic equipment.
It should be noted that other embodiments of the positioning method implemented by the computer program product are the same as the embodiments provided in the foregoing method embodiments, and are not described again here.
In each scheme provided by the embodiment of the invention, in the positioning process, the motion information detected by the inertial sensor for detecting the motion information of the electronic equipment is comprehensively considered, namely, after the image collector is influenced by illumination when shooting images, the motion information detected by the inertial sensor can be utilized to ensure the accuracy of the positioning result, so that the robustness of the positioning result can be improved, and the method and the device are suitable for various extreme environments.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, the electronic device, the computer-readable storage medium, and the computer program product embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (18)

1. A positioning method, applied to an electronic device, wherein an image collector and an inertial sensor for detecting motion information of the electronic device are provided on the electronic device, the method comprising:
acquiring a current frame acquired by the image acquisition device;
acquiring the current frame pose of the electronic equipment detected by the inertial sensor when the image collector collects the current frame;
acquiring a reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects a reference frame;
determining a relative pose that minimizes the sum of a first photometric error between the reference frame and the current frame and a relative motion error, wherein the first photometric error is the difference of gray values between the current frame and the reference frame, the relative pose represents the variation between a first pose and a second pose, the first pose is the pose of the electronic device detected when the image collector collects the current frame, the second pose is the pose of the electronic device detected when the image collector collects the reference frame, and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
and calculating to obtain the first pose according to the determined relative pose and the second pose, so as to realize the positioning of the electronic equipment.
2. The method of claim 1, further comprising, prior to the step of determining the relative pose that minimizes the sum of the first photometric error and the relative motion error between the reference frame and the current frame:
obtaining an image quality factor of the current frame, wherein the image quality factor is used for representing gradient change of gray values of all pixel points in the reference frame;
according to the distribution principle that the larger the value of the image quality factor is, the smaller the weight assigned to the relative motion error is, assigning a weight λ_ef to the relative motion error;
correspondingly, the step of determining the relative pose that minimizes the sum of the first photometric error and the relative motion error between the reference frame and the current frame comprises:
determining the relative pose that minimizes L_E + λ_ef · MV_E, where MV_E represents the relative motion error between the current frame and the reference frame, and L_E represents the first photometric error between the current frame and the reference frame.
3. The method of claim 2, wherein the step of assigning the weight λ_ef to the relative motion error according to the distribution principle that the larger the value of the image quality factor is, the smaller the weight assigned to the relative motion error is, comprises:
determining the weight λ_ef of the relative motion error according to the motion parameter represented by the current frame pose and the image quality factor.
4. The method according to claim 3, wherein the step of determining the weight λ_ef of the relative motion error according to the motion parameter represented by the current frame pose and the image quality factor comprises:
calculating a first weight according to the motion parameter represented by the pose of the current frame;
calculating a second weight according to the image quality factor;
determining the weight λ_ef of the relative motion error according to the first weight and the second weight.
5. The method of claim 4, wherein the step of calculating the first weight according to the motion parameter characterized by the pose of the current frame comprises:
acquiring the linear acceleration, centripetal acceleration and speed of the electronic equipment when the image collector collects the current frame;
and calculating a first weight value by using the obtained linear acceleration, centripetal acceleration and speed.
6. The method of claim 5, wherein the calculating the first weight using the obtained linear acceleration, centripetal acceleration, and velocity comprises:
calculating the first weight using the following expression:
λ_e = α * exp(-ω * (β1*a_l + β2*a_r + β3*v))
wherein λ_e represents the first weight, a_l represents the linear acceleration of the electronic device when the image collector collects the current frame, a_r represents the centripetal acceleration of the electronic device when the image collector collects the current frame, v represents the speed of the electronic device when the image collector collects the current frame, and α, ω, β1, β2, β3 are preset coefficients.
7. The method of claim 4, wherein the step of calculating the second weight value according to the image quality factor comprises:
obtaining the quality factor of target pixel points selected using a sliding window, where the target pixel points are: pixel points in the current frame whose gray-value difference from adjacent pixel points is greater than a preset value;
determining the proportion of the remaining mature points in the sliding window after the target pixel points are selected using the sliding window, where the mature points are: pixel points whose depth information is known;
and calculating a second weight according to the obtained quality factor and the determined proportion.
8. The method of claim 7, wherein calculating the second weight based on the obtained quality factor and the determined ratio comprises:
calculating the second weight using the following expressions:
λ_c = f1(q_image_quality)
q_image_quality is given as a function of p_pix_quality, p_ph_ratio, σ_g and μ_g (formula image in the source)
wherein λ_c represents the second weight, f1() represents an exponential function with the quality factor as the argument and a preset value as the base, q_image_quality represents the image quality factor, p_pix_quality represents the quality factor, p_ph_ratio represents the proportion, σ_g represents the standard deviation of the grid gradient threshold when selecting the target pixel points using a sliding window, μ_g represents the mean of the grid gradient threshold when selecting the target pixel points using a sliding window, n_ph represents the number of the remaining mature points in the sliding window after the target pixel points are selected using the sliding window, and n_desired represents the desired number of mature points in the sliding window after the target pixel points are selected using the sliding window.
9. The method of claim 1, wherein, after the step of calculating the first pose according to the determined relative pose and the second pose to position the electronic device, the method further comprises:
obtaining an optical flow of the current frame;
taking the current frame as a new key frame when the optical flow is larger than a preset threshold value;
respectively taking the poses corresponding to a preset number of key frames and the pose corresponding to the new key frame as initial poses;
adjusting each initial pose, and determining the adjusted initial poses that minimize the sum of a first residual energy, a second residual energy and a third residual energy, wherein the first residual energy represents the sum of second photometric errors between every two adjacent key frames corresponding to the adjusted initial poses, the second photometric error being the difference of gray values between two adjacent key frames caused by the adjusted initial poses; the second residual energy represents the sum of the motion information represented by the poses of the electronic device, detected when the image collector collects each key frame, after conversion into the plane coordinate system; and the third residual energy represents the sum of relative motion constraints, each relative motion constraint being, for two adjacent key frames, a constraint calculated from the poses of the electronic device when the image collector collects the two adjacent key frames and the poses of the electronic device detected by the inertial sensor corresponding to the two adjacent key frames;
and taking the new key frame as a new reference frame.
10. The method of claim 9, wherein the second residual energy is obtained using the steps of:
determining the pose of the image collector under a world coordinate system;
calculating the error of plane motion constraint according to the determined pose;
a second residual energy is calculated from the calculated error of the planar motion constraint.
11. The method of claim 10, wherein the error of the planar motion constraint and the second residual energy are represented by the following expressions, respectively:
e_g_i = log( X^{-1} · (T_ec · T_c2w_i · T_ce) )
E_g = Σ_{i=1..n} e_g_i^T · Ω_g · e_g_i
wherein E_g represents the second residual energy, Ω_g represents a weight matrix, n represents the number of key frames, e_g_i represents the error of the planar motion constraint for the i-th key frame, X^{-1} represents the observation of the planar motion, T_c2w_i represents the pose of the image collector in the world coordinate system at the i-th key frame, T_ec represents the conversion relationship between the pose of the image collector and the pose of the inertial sensor, and T_ce represents the conversion relationship between the pose of the inertial sensor and the pose of the image collector.
12. The method of claim 9, wherein the third residual energy is obtained using the following steps:
for each pair of adjacent key frames, calculating the relative motion constraint between the two key frames according to the poses of the inertial sensor and the image collector in the world coordinate system at the time of each of the two key frames;
and calculating the third residual energy using the relative motion constraints between all pairs of adjacent key frames.
13. The method of claim 12, wherein the relative motion constraint between two adjacent key frames and the third residual energy are expressed by the following expressions:
e_e_th = log( (T_e2w'_h^{-1} · T_e2w'_t)^{-1} · (T_ec · T_c2w_h^{-1} · T_c2w_t · T_ce) )
E_e = Σ_(t,h) λ_th · e_e_th^T · Ω_e · e_e_th
wherein E_e represents the third residual energy, the sum runs over pairs of adjacent key frames among the n key frames, n represents the number of key frames, λ_th represents the weight, Ω_e represents a weight matrix, e_e_th represents the relative motion constraint between the t-th frame key frame and the h-th frame key frame, T_e2w'_t represents the pose of the inertial sensor in the world coordinate system at the t-th frame key frame, T_e2w'_h represents the pose of the inertial sensor in the world coordinate system at the h-th frame key frame, T_c2w_t represents the pose of the image collector in the world coordinate system at the t-th frame key frame, T_c2w_h represents the pose of the image collector in the world coordinate system at the h-th frame key frame, T_ec represents the conversion relationship between the pose of the image collector and the pose of the inertial sensor, and T_ce represents the conversion relationship between the pose of the inertial sensor and the pose of the image collector.
14. A positioning apparatus, applied to an electronic device, wherein an image collector and an inertial sensor for detecting motion information of the electronic device are provided on the electronic device, the apparatus comprising:
the first acquisition module is used for acquiring the current frame acquired by the image acquisition device;
the second acquisition module is used for acquiring the current frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the current frame;
the third acquisition module is used for acquiring the reference frame pose of the electronic equipment, which is detected by the inertial sensor when the image collector collects the reference frame;
a first determining module, configured to determine a relative pose that minimizes the sum of a first photometric error between the reference frame and the current frame and a relative motion error detected by the inertial sensor, where the first photometric error is a function of the relative pose and is the difference of gray values between the current frame and the reference frame, the relative pose represents the variation between a first pose and a second pose, the first pose is the pose of the electronic device detected when the image collector collects the current frame, the second pose is the pose of the electronic device detected when the image collector collects the reference frame, and the relative motion error is an error calculated according to the current frame pose, the reference frame pose and the relative pose;
and the calculation module is used for calculating the first pose according to the determined relative pose and the second pose so as to realize the positioning of the electronic equipment.
15. The apparatus of claim 14, wherein the apparatus further comprises:
a first obtaining module, configured to obtain an image quality factor of the current frame, where the image quality factor is used to represent a gradient change of a gray value of each pixel in the reference frame;
an assignment module, configured to assign a weight λ_ef to the relative motion error according to the distribution principle that the larger the value of the image quality factor is, the smaller the weight assigned to the relative motion error is;
correspondingly, the first determining module is configured to determine the relative pose that minimizes L_E + λ_ef · MV_E, where MV_E represents the relative motion error between the current frame and the reference frame, and L_E represents the first photometric error between the current frame and the reference frame.
16. The apparatus of claim 14, wherein the apparatus further comprises:
a second obtaining module, configured to obtain an optical flow of the current frame;
a first designation module, configured to take the current frame as a new key frame when the optical flow is greater than a preset threshold;
a second designation module, configured to respectively take the poses corresponding to a preset number of previously stored key frames and the pose corresponding to the new key frame as initial poses;
a second determining module, configured to adjust each initial pose to determine the adjusted initial poses that minimize the sum of a first residual energy, a second residual energy and a third residual energy, where the first residual energy represents the sum of second photometric errors between every two adjacent key frames corresponding to the adjusted initial poses, the second photometric error being the difference of gray values between two adjacent key frames caused by the adjusted initial poses; the second residual energy represents the sum of the motion information of the electronic device represented by the poses after the poses of the electronic device, detected when the image collector collects each key frame, are converted into the plane coordinate system; and the third residual energy represents the sum of relative motion constraints, each relative motion constraint being, for two adjacent key frames, a constraint calculated from the poses of the electronic device when the image collector collects the two adjacent key frames and the poses of the electronic device detected by the inertial sensor corresponding to the two adjacent key frames;
and a third designation module, configured to take the new key frame as a new reference frame.
17. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1-13 when executing a program stored in the memory.
18. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 13.
CN201910100000.1A 2019-01-31 2019-01-31 Positioning method, device and equipment Active CN111507132B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910100000.1A CN111507132B (en) 2019-01-31 2019-01-31 Positioning method, device and equipment

Publications (2)

Publication Number Publication Date
CN111507132A true CN111507132A (en) 2020-08-07
CN111507132B CN111507132B (en) 2023-07-07

Family

ID=71873978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910100000.1A Active CN111507132B (en) 2019-01-31 2019-01-31 Positioning method, device and equipment

Country Status (1)

Country Link
CN (1) CN111507132B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075609A1 (en) * 2016-09-12 2018-03-15 DunAn Precision, Inc. Method of Estimating Relative Motion Using a Visual-Inertial Sensor
CN107869989A (en) * 2017-11-06 2018-04-03 东北大学 A kind of localization method and system of the fusion of view-based access control model inertial navigation information
CN108492316A (en) * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 A kind of localization method and device of terminal
CN108827315A (en) * 2018-08-17 2018-11-16 华南理工大学 Vision inertia odometer position and orientation estimation method and device based on manifold pre-integration
CN109211241A (en) * 2018-09-08 2019-01-15 天津大学 The unmanned plane autonomic positioning method of view-based access control model SLAM

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIN-CHUN PIAO ET AL.: "Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices", 《SENSORS》 *
YAO ERLIANG ET AL.: "Vision-IMU-based simultaneous localization and mapping algorithm for robots", 《仪器仪表学报》 (Chinese Journal of Scientific Instrument) *
XU XIAOSU ET AL.: "Graph-optimization-based visual-inertial SLAM method in indoor environments", 《中国惯性技术学报》 (Journal of Chinese Inertial Technology) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112179355A (en) * 2020-09-02 2021-01-05 西安交通大学 Attitude estimation method aiming at typical characteristics of photometric curve
CN113409391A (en) * 2021-06-25 2021-09-17 浙江商汤科技开发有限公司 Visual positioning method and related device, equipment and storage medium
CN113409391B (en) * 2021-06-25 2023-03-03 浙江商汤科技开发有限公司 Visual positioning method and related device, equipment and storage medium
CN113701760A (en) * 2021-09-01 2021-11-26 火种源码(中山)科技有限公司 Robot anti-interference positioning method and device based on sliding window pose graph optimization
CN113701760B (en) * 2021-09-01 2024-02-27 火种源码(中山)科技有限公司 Robot anti-interference positioning method and device based on sliding window pose diagram optimization
CN113847907A (en) * 2021-09-29 2021-12-28 深圳市慧鲤科技有限公司 Positioning method and device, equipment and storage medium
WO2023050634A1 (en) * 2021-09-29 2023-04-06 深圳市慧鲤科技有限公司 Positioning method and apparatus, device, storage medium, and computer program product

Also Published As

Publication number Publication date
CN111507132B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN111507132B (en) Positioning method, device and equipment
US11704833B2 (en) Monocular vision tracking method, apparatus and non-transitory computer-readable storage medium
JP6734940B2 (en) Three-dimensional measuring device
CN107223330A (en) A kind of depth information acquisition method, device and image capture device
CN102959586A (en) Motion estimation device, depth estimation device, and motion estimation method
CN113029128B (en) Visual navigation method and related device, mobile terminal and storage medium
CN112927279A (en) Image depth information generation method, device and storage medium
WO2019216005A1 (en) Self-position estimation system, autonomous movement system, and self-position estimation method
CN113052907B (en) Positioning method of mobile robot in dynamic environment
CN109040525B (en) Image processing method, image processing device, computer readable medium and electronic equipment
CN110428461B (en) Monocular SLAM method and device combined with deep learning
CN114022639A (en) Three-dimensional reconstruction model generation method and system, electronic device and storage medium
CN111508025A (en) Three-dimensional position estimation device and program
CN114111776A (en) Positioning method and related device
JP3633469B2 (en) Inter-vehicle distance setting device
CN111742352B (en) Method for modeling three-dimensional object and electronic equipment
JP2008298589A (en) Device and method for detecting positions
JP6602089B2 (en) Image processing apparatus and control method thereof
CN112967228B (en) Determination method and device of target optical flow information, electronic equipment and storage medium
JP7452620B2 (en) Image processing device, image processing method, and program
CN114140659A (en) Social distance monitoring method based on human body detection under view angle of unmanned aerial vehicle
CN115239815B (en) Camera calibration method and device
JP6973570B1 (en) Image processing device, image processing program, and image processing method
JP7258250B2 (en) Position/posture estimation device, position/posture estimation method, and program
CN111586299B (en) Image processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310052 5 / F, building 1, building 2, no.700 Dongliu Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.

GR01 Patent grant