CN113379839B - Ground visual angle monocular vision odometer method based on event camera system

Ground visual angle monocular vision odometer method based on event camera system

Info

Publication number
CN113379839B
CN113379839B (application CN202110569036.1A)
Authority
CN
China
Prior art keywords
event
camera
point
frame
ground
Prior art date
Legal status
Active
Application number
CN202110569036.1A
Other languages
Chinese (zh)
Other versions
CN113379839A (en)
Inventor
余磊
周立凤
杨文
刘凢
Current Assignee
Chengdu Shizhi Technology Co ltd
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202110569036.1A
Publication of CN113379839A
Application granted
Publication of CN113379839B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 22/00 Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/269 Analysis of motion using gradient-based methods

Abstract

The invention provides a ground visual angle monocular vision odometer method based on an event camera system. The invention uses a downward-looking event camera to capture the ground texture, can run stably in high-speed and high-dynamic scenes, and is not disturbed by the occlusion and moving objects that affect a forward-looking view angle. The method comprises the steps of compressing the event points output by the event camera along the time axis to construct event frame images; extracting Harris feature points on the event frames and performing LK optical flow tracking to obtain matched feature point pairs between adjacent event frames; solving the translation amount and the rotation amount of the camera according to the back projection model and the motion model; and taking the translation amount and the rotation amount of the camera as the initial values of the photometric error minimization function, and solving the minimization function by the Gauss-Newton method to obtain the optimized translation amount and rotation amount. The invention combines the feature point method and the direct method, realizes a VO algorithm for the event camera by a semi-direct method, and improves the accuracy and stability of pose estimation.

Description

Ground visual angle monocular vision odometer method based on event camera system
Technical Field
The invention belongs to the field of image processing, and particularly relates to a ground visual angle monocular vision odometer method based on an event camera system.
Background
With the development and popularization of artificial intelligence technology, mobile robots and unmanned systems have received more and more attention and research, and major progress and breakthroughs have been made. A mobile intelligent platform can autonomously execute various complex tasks without human intervention, and has wide applications in military, medical, space, entertainment, household and other fields. In these applications, mobile robots are often required to perform tasks such as autonomous navigation, path planning and object detection in complex and dynamic indoor and outdoor environments. In order to accomplish these tasks efficiently and safely, the robot needs to be able to position itself in its environment.
In the field of mobile robots and automatic driving, visual odometry (VO) is an important technology for positioning a mobile robot: it estimates the motion of the robot by acquiring surrounding environment information with one or more cameras attached to the mobile platform. The precision of a VO algorithm and the agility of the mobile robot are closely related to the visual sensor, and depend on the accuracy and the latency of the sensor. The output frequency of the sensor data and the time consumed to process the data determine the latency; with the current state of the art, the latency of a CMOS-camera-based pipeline is at least 50-250 ms and the sampling rate is 15-40 Hz, which greatly limits the agility of the mobile robot. Meanwhile, because of the fixed exposure time, when the illumination of a scene changes strongly and the camera moves fast, the imaging quality drops rapidly, which makes the VO algorithm unstable and poorly robust.
The event camera (DVS) is a vision sensor that has emerged in recent years. Inspired by bionics, it has the advantages of low latency, high dynamic range, low redundancy and sensitivity to edge information, and is an ideal sensor for a visual odometry system. Firstly, the low latency ensures that the mobile robot can obtain real-time pose information when moving rapidly; secondly, the high dynamic range improves the robustness of the VO system to extreme illumination changes; finally, the low-redundancy output stream reduces the requirement on data transmission bandwidth. However, event cameras have an output format distinct from conventional cameras: each pixel triggers an "event" independently when the brightness at that pixel changes, so an asynchronous event stream with microsecond resolution is output instead of intensity images. Therefore, vision algorithms tailored to the event camera need to be developed to exploit its advantages and improve the accuracy and stability of the visual odometry.
VO algorithms based on event cameras are dedicated to making up for the deficiencies of conventional cameras, improve the applicability and robustness of self-positioning algorithms under extreme conditions, and have achieved good results in recent years. However, most current event-camera VO algorithms require additional sensors (such as an IMU, a conventional camera or an RGB-D camera) to provide extra information, and the data alignment and clock synchronization between sensors bring great uncertainty to the algorithms. In addition, most VO algorithms use a forward-looking camera as the data acquisition sensor and are easily affected by occlusion, moving objects and illumination changes, which causes positioning drift.
Researchers have found that ground scenes such as carpet, asphalt and tiles have abundant texture information, are permanently visible, and are useful for localization. The mobile robot moves on a two-dimensional plane, and a downward-looking vision sensor fixes the observed scene to the ground, which avoids unnecessary computation and the uncertainty of depth estimation; meanwhile, the ground view angle is not disturbed by occlusion and moving objects. Therefore, the ground view of the event camera is used herein to realize a more accurate and stable VO algorithm, thereby realizing the positioning of the mobile robot.
Disclosure of Invention
Aiming at the positioning problem of a mobile robot, the invention provides a ground visual angle monocular vision odometer based on an event camera, which is used for solving the low positioning precision of conventional-camera-based VO algorithms when the dynamic range of the ambient light is large or the camera's own motion is unfavorable; using a ground-view vision sensor simplifies the motion estimation problem and avoids the influence of moving objects and occlusion.
The method integrates the advantages of the feature point method VO and the direct method VO, and uses an event camera with a ground view angle to estimate the motion of the camera. Compared with a robot that can move freely in a 3D scene, a downward-looking camera mounted on the robot keeps a fixed height relative to the ground, so the scene depth is a fixed value and the camera motion is constrained: the camera can only move on a two-dimensional plane, and its motion has only two translational degrees of freedom and one rotational degree of freedom, as sketched below.
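To make the planar motion model concrete, the following is a minimal sketch (not part of the patent text) of how the three-degree-of-freedom frame-to-frame estimates can be chained into an odometry trajectory; the function and variable names are illustrative assumptions.

```python
import numpy as np

def compose(pose, delta):
    """Compose an absolute planar pose (x, y, theta) with a relative increment (dx, dy, dtheta)."""
    x, y, th = pose
    dx, dy, dth = delta
    c, s = np.cos(th), np.sin(th)
    # rotate the body-frame increment into the world frame, then add the heading change
    return (x + c * dx - s * dy, y + s * dx + c * dy, th + dth)

# usage: accumulate the per-frame (translation, yaw) estimates produced by steps 1-4
trajectory = [(0.0, 0.0, 0.0)]
for delta in [(0.010, 0.000, 0.002), (0.012, -0.001, 0.001)]:  # example increments in metres/radians
    trajectory.append(compose(trajectory[-1], delta))
```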
The technical scheme of the invention is a ground visual angle monocular vision odometer method based on an event camera system.
The event camera system includes: an event camera, a computer terminal;
the event camera is arranged at a certain height from the ground, and the visual angle of the event camera faces the ground;
the computer terminal is connected with the event camera;
the event camera is used for collecting event point data and transmitting the event point data to the computer terminal;
the computer terminal is used for obtaining the optimized translation amount and the optimized rotation amount through a ground view monocular vision odometer method of the event camera;
the ground visual angle monocular vision odometer method comprises the following steps:
step 1: acquiring a plurality of event point data through an event camera, and compressing the event point data to any trigger time according to the trigger time in each event point data to construct an event frame image;
step 2: extracting Harris characteristic points from the event frame images through a Harris characteristic detection algorithm, and then obtaining Harris characteristic point pairs matched between the adjacent event frame images through an LK optical flow tracking algorithm;
and step 3: carrying out back projection on the Harris characteristic point pair through a camera back projection model to obtain a 3D space point corresponding to the characteristic point pair; taking the average translation amount of the 3D space points corresponding to the feature points as the translation amount of the camera; the characteristic points on the two frame event frame images meet the homography relation, the homography matrix is solved according to the matched characteristic point pairs, the homography matrix is decomposed to obtain a rotation matrix, and the yaw angle around the z axis, namely the rotation amount of the camera, is obtained according to the conversion formula of the rotation matrix and the Euler angle;
and 4, step 4: taking the translation amount and the rotation amount of the camera as the optimization initial values of the luminosity error minimization function, and solving the minimization function through a Gauss-Newton method to obtain the optimized translation amount and the optimized rotation amount;
preferably, in step 1, the event point data is:
e_k = (X_k, t_k, p_k)
X_k = (x_k, y_k)
k ∈ [1, K], x_k ∈ [1, C], y_k ∈ [1, R], p_k ∈ {0, 1}
where e_k is the k-th event point, K is the number of event points, X_k is the pixel coordinate of the k-th event point, x_k is its X-axis pixel coordinate, y_k is its Y-axis pixel coordinate, t_k is its trigger time, and p_k is its polarity: p_k = 1 indicates that the brightness at the pixel is increasing and p_k = 0 that it is decreasing; C is the number of columns of the event frame image, and R is the number of rows of the event frame image;
step 1, the process of establishing the event frame image comprises the following steps:
I(X) = Σ_{k=1}^{K} δ(X - X_k)
where I is the event frame image, X is a pixel coordinate of the event frame image, and δ is the Dirac function;
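The event-frame construction of step 1 can be illustrated with the short sketch below: it discards the time stamps inside one accumulation window and lights every pixel that fired at least one event. The 346x260 resolution matches the Davis346 used in the described embodiment; the function name and the binary output are assumptions made for illustration.

```python
import numpy as np

def build_event_frame(events, width=346, height=260):
    """Compress a list of event points (x, y, t, p) along the time axis into one frame."""
    frame = np.zeros((height, width), dtype=np.uint8)
    for x, y, t, p in events:
        frame[y, x] = 255   # pixel fired at least once in this window
    return frame

# usage: one accumulation window of event points (the embodiment uses about K = 6000 per frame)
events = [(10, 20, 0.00100, 1), (11, 20, 0.00112, 0), (120, 85, 0.00250, 1)]
frame = build_event_frame(events)
```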
preferably, in step 2, the Harris characteristic points are as follows:
p_i^j, i ∈ [1, N], j ∈ [1, M]
where p_i^j denotes the j-th feature point on the i-th event frame image, N is the number of event frames, and M is the number of feature points.
Step 2, the Harris characteristic point pairs are as follows:
(p_i^j, p_{i+1}^j)
where (p_i^j, p_{i+1}^j) denotes the j-th matched feature point pair on the i-th and (i+1)-th event frame images;
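A hedged OpenCV sketch of step 2 follows: Harris-scored corners are extracted on event frame i and tracked to frame i+1 with pyramidal LK optical flow, yielding matched pairs (p_i^j, p_{i+1}^j). The corner budget of 180 mirrors the embodiment's feature count; the remaining detector and tracker parameters are assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

def track_features(frame_i, frame_i1, max_corners=180):
    """Return matched feature point pairs (pts_i, pts_i1) between two 8-bit event frames."""
    # goodFeaturesToTrack with useHarrisDetector=True scores corners with the Harris response
    pts_i = cv2.goodFeaturesToTrack(frame_i, maxCorners=max_corners, qualityLevel=0.01,
                                    minDistance=7, useHarrisDetector=True, k=0.04)
    if pts_i is None:
        return np.empty((0, 2), np.float32), np.empty((0, 2), np.float32)
    # pyramidal Lucas-Kanade optical flow tracks the corners into the next frame
    pts_i1, status, _err = cv2.calcOpticalFlowPyrLK(frame_i, frame_i1, pts_i, None,
                                                    winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return pts_i[good].reshape(-1, 2), pts_i1[good].reshape(-1, 2)
```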
preferably, the back projection model in step 3 is:
P = d·(K^{-1} p)
where p is a Harris feature point in homogeneous pixel coordinates, K is the camera intrinsic matrix, d is the height of the camera above the ground, and P is the 3D space point corresponding to the feature point;
step 3, the translation amount of the camera is as follows:
t_{i+1,i} = (1/M) Σ_{j=1}^{M} (P_{i+1}^j - P_i^j)
where P_i^j is the 3D space point corresponding to the j-th feature point on the i-th event frame image, P_{i+1}^j is the 3D space point corresponding to the j-th feature point on the (i+1)-th event frame image, M is the number of feature points, and t_{i+1,i} is the translation of the camera from frame i to frame i+1;
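The back-projection model P = d·(K^{-1} p) and the averaged translation estimate can be written compactly as below. This is a sketch under the stated planar assumption (fixed camera height d, 0.25 m in the described embodiment); the intrinsic matrix values and function names are illustrative assumptions.

```python
import numpy as np

def back_project(pts, K, d):
    """Lift pixel coordinates of shape (N, 2) to 3D points on the ground plane at depth d."""
    pix_h = np.hstack([pts, np.ones((pts.shape[0], 1))])   # homogeneous pixel coordinates
    rays = (np.linalg.inv(K) @ pix_h.T).T                   # normalized camera rays (z = 1)
    return d * rays                                         # scale every ray to the ground plane

def mean_translation(pts_i, pts_i1, K, d):
    """t_{i+1,i}: average displacement of the back-projected matched feature points."""
    return (back_project(pts_i1, K, d) - back_project(pts_i, K, d)).mean(axis=0)

# usage with d = 0.25 m and an assumed (illustrative) intrinsic matrix
K = np.array([[320.0, 0.0, 173.0], [0.0, 320.0, 130.0], [0.0, 0.0, 1.0]])
pts_i = np.array([[100.0, 80.0], [200.0, 150.0]])
pts_i1 = np.array([[102.0, 81.0], [202.0, 151.0]])
t = mean_translation(pts_i, pts_i1, K, d=0.25)
```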
the homography in the step 3 is as follows:
p_{i+1}^j = H p_i^j, i ∈ [1, N], j ∈ [1, M]
H = K (R - t n^T / d) K^{-1}
where H is the homography matrix, i is the event frame index, j is the feature point index, N is the number of event frames, and M is the number of feature points; R is a rotation matrix containing the rotation angles about the three axes, t is the translation between the two frames, d is the camera height, and n^T is the normal vector of the ground;
the yaw angle around the z axis in the step 3 is as follows:
θ_z = atan2(r_21, r_11)
where r_11 is the element in row 1, column 1 of the matrix R, and r_21 is the element in row 2, column 1 of the matrix R.
The rotation amount of the camera in the step 3 is as follows:
R_{i+1,i} = [[cos θ_z, -sin θ_z, 0], [sin θ_z, cos θ_z, 0], [0, 0, 1]]
where R_{i+1,i} is the rotation of the camera from frame i to frame i+1.
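The rotation estimate of step 3 can be sketched with OpenCV's homography fitting and decomposition as below; the RANSAC threshold and the candidate-selection rule (keep the decomposition whose plane normal is closest to the ground normal) are assumptions made for this illustration, not details taken from the patent.

```python
import cv2
import numpy as np

def estimate_yaw(pts_i, pts_i1, K):
    """Estimate the yaw angle theta_z and the planar rotation R_{i+1,i} from matched point pairs."""
    H, _mask = cv2.findHomography(pts_i, pts_i1, cv2.RANSAC, 3.0)
    # decomposeHomographyMat returns several candidate (R, t, n) solutions
    _num, Rs, _Ts, Ns = cv2.decomposeHomographyMat(H, K)
    ground_n = np.array([0.0, 0.0, 1.0])
    # keep the candidate whose plane normal is most aligned with the ground normal
    best = max(range(len(Ns)), key=lambda i: abs(float(Ns[i].ravel() @ ground_n)))
    R = Rs[best]
    theta_z = np.arctan2(R[1, 0], R[0, 0])          # theta_z = atan2(r_21, r_11)
    Rz = np.array([[np.cos(theta_z), -np.sin(theta_z), 0.0],
                   [np.sin(theta_z),  np.cos(theta_z), 0.0],
                   [0.0,              0.0,             1.0]])
    return theta_z, Rz
```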
Preferably, the photometric error minimization function in step 4 is:
ξ*_{i+1,i} = argmin_ξ Σ_{j=1}^{M} Σ_{p_j ∈ W} ||e_j||^2
e_j = I_{i+1}(ω(p_j, d, ξ_{i+1,i})) - I_i(p_j)
where ξ_{i+1,i} is the pose of the camera between frame i and frame i+1, e_j is the photometric error of the j-th feature point, W is the image block around the j-th feature point p_j, and ω(p_j, d, ξ_{i+1,i}) warps the j-th feature point p_j on frame i to frame i+1 using d and ξ_{i+1,i}.
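Step 4 refines the initial pose by Gauss-Newton minimization of the patch-wise photometric error. The sketch below uses a planar warp ω built from the back-projection model and a numerical Jacobian; it is an illustrative reading of the formulas above (patch size, perturbation step and iteration count are assumptions), not the patent's exact implementation.

```python
import numpy as np

def warp(pt, pose, K, d):
    """omega(p, d, xi): back-project p to the ground, apply the planar pose, re-project."""
    tx, ty, th = pose
    P = d * (np.linalg.inv(K) @ np.array([pt[0], pt[1], 1.0]))
    Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
    q = K @ (Rz @ P + np.array([tx, ty, 0.0]))
    return q[:2] / q[2]

def bilinear(img, x, y):
    """Bilinear interpolation of img at (x, y); assumes the point stays inside the image."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    a, b = x - x0, y - y0
    return ((1 - a) * (1 - b) * img[y0, x0] + a * (1 - b) * img[y0, x0 + 1] +
            (1 - a) * b * img[y0 + 1, x0] + a * b * img[y0 + 1, x0 + 1])

def residuals(pose, frame_i, frame_i1, pts, K, d, half=2):
    """Photometric errors e_j over a (2*half+1)^2 patch around every feature point."""
    res = []
    for px, py in pts:
        for dv in range(-half, half + 1):
            for du in range(-half, half + 1):
                src = (px + du, py + dv)
                dst = warp(src, pose, K, d)
                res.append(bilinear(frame_i1, dst[0], dst[1]) -
                           float(frame_i[int(src[1]), int(src[0])]))
    return np.array(res)

def gauss_newton(pose0, frame_i, frame_i1, pts, K, d, iters=10, eps=1e-4):
    """Refine (tx, ty, theta_z) starting from the step-3 estimate pose0."""
    pose = np.array(pose0, dtype=float)
    for _ in range(iters):
        r = residuals(pose, frame_i, frame_i1, pts, K, d)
        J = np.zeros((r.size, 3))
        for k in range(3):                       # numerical Jacobian, one column per degree of freedom
            dp = np.zeros(3); dp[k] = eps
            J[:, k] = (residuals(pose + dp, frame_i, frame_i1, pts, K, d) - r) / eps
        step = np.linalg.lstsq(J, -r, rcond=None)[0]   # least-squares solve of the Gauss-Newton step
        pose += step
        if np.linalg.norm(step) < 1e-6:
            break
    return pose
```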
The invention has the advantages that:
and an event camera is used for capturing scene information, so that the VO algorithm can stably run in a high-speed and high-dynamic scene.
The visual angle of the event camera is aligned to the ground scene, so that the interference of shielding and moving objects in the forward-looking visual angle can be avoided, the depth of the scene is fixed, and the uncertainty of unnecessary depth calculation and the depth estimation in the initialization stage of the traditional direct method is avoided.
The direct method VO relies on the strong assumption that the gray scale is unchanged, the VO has no characteristic point method and has high tolerance to illumination, and the optimization is easy to obtain a minimum value due to the non-convexity of an image.
The invention combines the characteristic point method and the direct method, realizes the VO algorithm of the event camera by the semi-direct method, and improves the accuracy and the stability of pose estimation.
Drawings
FIG. 1: the hardware platform.
FIG. 2: a flow chart of the implementation of the method of the invention.
FIG. 3: event points.
FIG. 4: an event frame image.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention relates to a ground visual angle monocular vision odometry method based on an event camera.
Fig. 1 shows a hardware platform of a system according to an embodiment of the present invention, which includes: an event camera, a computer terminal;
the event camera is arranged at a certain height from the ground, and the visual angle of the event camera faces the ground;
the computer terminal is connected with the event camera;
the event camera is used for collecting event point data and transmitting the event point data to the computer terminal;
the computer terminal is used for obtaining the optimized translation amount and the optimized rotation amount through a ground view monocular vision odometer method of the event camera;
the event camera is selected as a Davis346 event camera;
the computer terminal is a dell notebook computer and is configured as i5-8300H cpu 2.3GHz RAM 8GGTX 1050;
the event camera is installed 25cm away from the ground;
the computer terminal is connected with the event camera through USB3.0 to micro USB 3.0;
the overall flow chart of the method is shown in the attached figure 2, and the specific steps are as follows:
step 1: acquiring a plurality of event point data through an event camera, wherein the event points are shown in figure 3, compressing the event point data to any trigger time according to the trigger time in each event point data to construct an event frame image, and the event frame image is shown in figure 4;
step 1 the event point data is:
e_k = (X_k, t_k, p_k)
X_k = (x_k, y_k)
k ∈ [1, K], x_k ∈ [1, C], y_k ∈ [1, R], p_k ∈ {0, 1}
where e_k is the k-th event point, K = 6000 is the number of event points, X_k is the pixel coordinate of the k-th event point, x_k is its X-axis pixel coordinate, y_k is its Y-axis pixel coordinate, t_k is its trigger time, and p_k is its polarity: p_k = 1 indicates that the brightness at the pixel is increasing and p_k = 0 that it is decreasing; C = 346 is the number of columns of the event frame image, and R = 260 is the number of rows of the event frame image;
step 1, the process of establishing the event frame image comprises the following steps:
I(X) = Σ_{k=1}^{K} δ(X - X_k)
where I is the event frame image, X is a pixel coordinate of the event frame image, and δ is the Dirac function;
step 2: extracting Harris feature points from the event frame images in the step 1 through a Harris feature detection algorithm, and then obtaining matched Harris feature point pairs between adjacent event frame images through an LK optical flow tracking algorithm;
step 2, the Harris characteristic points are as follows:
p_i^j, i ∈ [1, N], j ∈ [1, M]
where p_i^j denotes the j-th feature point on the i-th event frame image, N = 10000 is the number of event frames, and M = 180 is the number of feature points.
Step 2, the Harris characteristic point pairs are as follows:
(p_i^j, p_{i+1}^j)
where (p_i^j, p_{i+1}^j) denotes the j-th matched feature point pair on the i-th and (i+1)-th event frame images;
and step 3: carrying out back projection on the Harris characteristic point pairs obtained in the step 2 through a camera back projection model to obtain 3D space points corresponding to the characteristic points; taking the average translation amount of the 3D space points corresponding to the feature points as the translation amount of the camera; the characteristic points on the two frame event frame images meet the homography relation, the homography matrix is solved according to the matched characteristic point pairs, the homography matrix is decomposed to obtain a rotation matrix, and the yaw angle around the z axis, namely the rotation amount of the camera, is obtained according to the conversion formula of the rotation matrix and the Euler angle;
step 3, the back projection model is as follows:
P = d·(K^{-1} p)
where p is a Harris feature point in homogeneous pixel coordinates, K is the camera intrinsic matrix, d = 0.25 m is the height of the camera above the ground, and P is the 3D space point corresponding to the feature point;
step 3, the translation amount of the camera is as follows:
t_{i+1,i} = (1/M) Σ_{j=1}^{M} (P_{i+1}^j - P_i^j)
where P_i^j is the 3D space point corresponding to the j-th feature point on the i-th event frame image, P_{i+1}^j is the 3D space point corresponding to the j-th feature point on the (i+1)-th event frame image, M = 180 is the number of feature points, and t_{i+1,i} is the translation of the camera from frame i to frame i+1;
the homography in the step 3 is as follows:
p_{i+1}^j = H p_i^j, i ∈ [1, N], j ∈ [1, M]
H = K (R - t n^T / d) K^{-1}
where H is the homography matrix, i is the event frame index, j is the feature point index, N = 10000 is the number of event frames, and M = 180 is the number of feature points; R is a rotation matrix containing the rotation angles about the three axes, t is the translation between the two frames, d is the camera height, and n^T is the normal vector of the ground;
the yaw angle around the z axis in the step 3 is as follows:
θ_z = atan2(r_21, r_11)
where r_11 is the element in row 1, column 1 of the matrix R, and r_21 is the element in row 2, column 1 of the matrix R.
The rotation amount of the camera in the step 3 is as follows:
R_{i+1,i} = [[cos θ_z, -sin θ_z, 0], [sin θ_z, cos θ_z, 0], [0, 0, 1]]
where R_{i+1,i} is the rotation of the camera from frame i to frame i+1.
And 4, step 4: taking the translation amount and the rotation amount of the camera obtained in the step (3) as optimized initial values of a luminosity error minimization function, and solving the minimization function through a Gauss-Newton method to obtain an optimized translation amount and an optimized rotation amount;
the photometric error minimization function described in step 4 is:
ξ*_{i+1,i} = argmin_ξ Σ_{j=1}^{M} Σ_{p_j ∈ W} ||e_j||^2
e_j = I_{i+1}(ω(p_j, d, ξ_{i+1,i})) - I_i(p_j)
where ξ_{i+1,i} is the pose of the camera between frame i and frame i+1, e_j is the photometric error of the j-th feature point, W is the image block around the j-th feature point p_j, and ω(p_j, d, ξ_{i+1,i}) warps the j-th feature point p_j on frame i to frame i+1 using d and ξ_{i+1,i}.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (5)

1. A ground visual angle monocular vision odometry method based on an event camera system is characterized in that:
the event camera system includes: an event camera, a computer terminal;
the event camera is arranged at a certain height from the ground, and the visual angle of the event camera faces the ground;
the computer terminal is connected with the event camera;
the event camera is used for collecting event point data and transmitting the event point data to the computer terminal;
the computer terminal is used for obtaining the optimized translation amount and the optimized rotation amount through a ground view monocular vision odometer method of the event camera;
the ground visual angle monocular vision odometer method comprises the following steps:
step 1: acquiring a plurality of event point data through an event camera, and compressing the event point data to any trigger time according to the trigger time in each event point data to construct an event frame image;
step 2: extracting Harris characteristic points from the event frame images through a Harris characteristic detection algorithm, and then obtaining Harris characteristic point pairs matched between the adjacent event frame images through an LK optical flow tracking algorithm;
and step 3: carrying out back projection on the Harris characteristic point pair through a camera back projection model to obtain a 3D space point corresponding to the characteristic point pair; taking the average translation amount of the 3D space points corresponding to the feature points as the translation amount of the camera; the characteristic points on the two frame event frame images meet the homography relation, the homography matrix is solved according to the matched characteristic point pairs, the homography matrix is decomposed to obtain a rotation matrix, and the yaw angle around the z axis, namely the rotation amount of the camera, is obtained according to the conversion formula of the rotation matrix and the Euler angle;
and 4, step 4: and taking the translation amount and the rotation amount of the camera as the optimization initial values of the luminosity error minimization function, and solving the minimization function by a Gauss-Newton method to obtain the optimized translation amount and the optimized rotation amount.
2. The event camera system-based ground perspective monocular visual odometry method of claim 1, wherein the event point data of step 1 is:
e_k = (X_k, t_k, p_k)
X_k = (x_k, y_k)
k ∈ [1, K], x_k ∈ [1, C], y_k ∈ [1, R], p_k ∈ {0, 1}
where e_k is the k-th event point, K is the number of event points, X_k is the pixel coordinate of the k-th event point, x_k is its X-axis pixel coordinate, y_k is its Y-axis pixel coordinate, t_k is its trigger time, and p_k is its polarity: p_k = 1 indicates that the brightness at the pixel is increasing and p_k = 0 that it is decreasing; C is the number of columns of the event frame image, and R is the number of rows of the event frame image;
step 1, the process of establishing the event frame image comprises the following steps:
I(X) = Σ_{k=1}^{K} δ(X - X_k)
where I is the event frame image, X is a pixel coordinate of the event frame image, and δ is the Dirac function.
3. The event camera system-based ground perspective monocular visual odometry method of claim 1, wherein the Harris feature points of step 2 are:
p_i^j, i ∈ [1, N], j ∈ [1, M]
where p_i^j denotes the j-th feature point on the i-th event frame image, N is the number of event frames, and M is the number of feature points;
step 2, the Harris characteristic point pairs are as follows:
(p_i^j, p_{i+1}^j)
where (p_i^j, p_{i+1}^j) denotes the j-th matched feature point pair on the i-th and (i+1)-th event frame images.
4. The event camera system-based ground perspective monocular visual odometry method of claim 1, wherein the back projection model of step 3 is:
P = d·(K^{-1} p)
where p is a Harris feature point in homogeneous pixel coordinates, K is the camera intrinsic matrix, d is the height of the camera above the ground, and P is the 3D space point corresponding to the feature point;
step 3, the translation amount of the camera is as follows:
t_{i+1,i} = (1/M) Σ_{j=1}^{M} (P_{i+1}^j - P_i^j)
where P_i^j is the 3D space point corresponding to the j-th feature point on the i-th event frame image, P_{i+1}^j is the 3D space point corresponding to the j-th feature point on the (i+1)-th event frame image, M is the number of feature points, and t_{i+1,i} is the translation of the camera from frame i to frame i+1;
the homography in the step 3 is as follows:
p_{i+1}^j = H p_i^j, i ∈ [1, N], j ∈ [1, M]
H = K (R - t n^T / d) K^{-1}
where H is the homography matrix, i is the event frame index, j is the feature point index, N is the number of event frames, and M is the number of feature points; R is a rotation matrix containing the rotation angles about the three axes, t is the translation between the two frames, d is the camera height, and n^T is the normal vector of the ground;
the yaw angle around the z axis in the step 3 is as follows:
θ_z = atan2(r_21, r_11)
where r_11 is the element in row 1, column 1 of the matrix R, and r_21 is the element in row 2, column 1 of the matrix R;
the rotation amount of the camera in the step 3 is as follows:
R_{i+1,i} = [[cos θ_z, -sin θ_z, 0], [sin θ_z, cos θ_z, 0], [0, 0, 1]]
where R_{i+1,i} is the rotation of the camera from frame i to frame i+1.
5. The event camera system based ground perspective monocular vision odometry method of claim 1, wherein the photometric error minimization function of step 4 is:
ξ*_{i+1,i} = argmin_ξ Σ_{j=1}^{M} Σ_{p_j ∈ W} ||e_j||^2
e_j = I_{i+1}(ω(p_j, d, ξ_{i+1,i})) - I_i(p_j)
where ξ_{i+1,i} is the pose of the camera between frame i and frame i+1, e_j is the photometric error of the j-th feature point, W is the image block around the j-th feature point p_j, ω(p_j, d, ξ_{i+1,i}) warps the j-th feature point p_j on frame i to frame i+1 using d and ξ_{i+1,i}, and N is the number of event frames.
CN202110569036.1A 2021-05-25 2021-05-25 Ground visual angle monocular vision odometer method based on event camera system Active CN113379839B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110569036.1A CN113379839B (en) 2021-05-25 2021-05-25 Ground visual angle monocular vision odometer method based on event camera system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110569036.1A CN113379839B (en) 2021-05-25 2021-05-25 Ground visual angle monocular vision odometer method based on event camera system

Publications (2)

Publication Number Publication Date
CN113379839A CN113379839A (en) 2021-09-10
CN113379839B true CN113379839B (en) 2022-04-29

Family

ID=77571937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110569036.1A Active CN113379839B (en) 2021-05-25 2021-05-25 Ground visual angle monocular vision odometer method based on event camera system

Country Status (1)

Country Link
CN (1) CN113379839B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140365B (en) * 2022-01-27 2022-07-22 荣耀终端有限公司 Event frame-based feature point matching method and electronic equipment
CN116188536B (en) * 2023-04-23 2023-07-18 深圳时识科技有限公司 Visual inertial odometer method and device and electronic equipment
CN116188533B (en) * 2023-04-23 2023-08-08 深圳时识科技有限公司 Feature point tracking method and device and electronic equipment
CN117808847A (en) * 2024-02-29 2024-04-02 中国科学院光电技术研究所 Space non-cooperative target feature tracking method integrating bionic dynamic vision


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9955056B2 (en) * 2015-03-16 2018-04-24 Qualcomm Incorporated Real time calibration for multi-camera wireless device
CN109544636B (en) * 2018-10-10 2022-03-15 广州大学 Rapid monocular vision odometer navigation positioning method integrating feature point method and direct method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110335337A (en) * 2019-04-28 2019-10-15 厦门大学 A method of based on the end-to-end semi-supervised visual odometry for generating confrontation network
CN112541946A (en) * 2020-12-08 2021-03-23 深圳龙岗智能视听研究院 Real-time pose detection method of mechanical arm based on perspective multi-point projection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Monocular visual odometry algorithm for mobile robots based on ground features; Wang Ke et al.; Acta Optica Sinica; 2015-05-10 (No. 05); full text *
Semi-direct RGB-D SLAM algorithm for indoor dynamic environments; Gao Chengqiang et al.; Robot; 2018-12-12 (No. 03); full text *

Also Published As

Publication number Publication date
CN113379839A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN113379839B (en) Ground visual angle monocular vision odometer method based on event camera system
US11381741B2 (en) CMOS-assisted inside-out dynamic vision sensor tracking for low power mobile platforms
CN108154550B (en) RGBD camera-based real-time three-dimensional face reconstruction method
Bodor et al. Optimal camera placement for automated surveillance tasks
CN108363946B (en) Face tracking system and method based on unmanned aerial vehicle
CN109102525B (en) Mobile robot following control method based on self-adaptive posture estimation
Zhang et al. Pirvs: An advanced visual-inertial slam system with flexible sensor fusion and hardware co-design
US20220327792A1 (en) 3-d reconstruction using augmented reality frameworks
CN110139031B (en) Video anti-shake system based on inertial sensing and working method thereof
CN108319918B (en) Embedded tracker and target tracking method applied to same
Chen et al. Esvio: Event-based stereo visual inertial odometry
CN114494150A (en) Design method of monocular vision odometer based on semi-direct method
Fang et al. Self-supervised camera self-calibration from video
CN116619358A (en) Self-adaptive positioning optimization and mapping method for autonomous mining robot
Fernandez et al. Visual odometry for an outdoor mobile robot
Zhu et al. PairCon-SLAM: Distributed, online, and real-time RGBD-SLAM in large scenarios
Pandey et al. Efficient 6-dof tracking of handheld objects from an egocentric viewpoint
CN112945233B (en) Global drift-free autonomous robot simultaneous positioning and map construction method
CN113345032A (en) Wide-angle camera large-distortion image based initial image construction method and system
CN114419259B (en) Visual positioning method and system based on physical model imaging simulation
US20230306640A1 (en) Method of 3d reconstruction of dynamic objects by mobile cameras
Liu et al. Semi-dense visual-inertial odometry and mapping for computationally constrained platforms
CN115278049A (en) Shooting method and device thereof
Ren et al. Self-calibration method of gyroscope and camera in video stabilization
Pandey et al. Egocentric 6-DoF tracking of small handheld objects

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230902

Address after: No. 1999, Middle Section of Yizhou Avenue, Chengdu High tech Zone, China (Sichuan) Pilot Free Trade Zone, Chengdu, Sichuan, 610095, China

Patentee after: Chengdu Shizhi Technology Co.,Ltd.

Address before: Luojiashan, Wuhan University, Wuchang District, Wuhan, Hubei Province, 430072

Patentee before: WUHAN University

TR01 Transfer of patent right