CN110414558A - Characteristic point matching method based on event camera - Google Patents
Abstract
The invention discloses a method for generating descriptors for detected feature points and matching the feature points using the generated descriptors. The object of the invention is to solve the problem that traditional feature point descriptor algorithms may not be stably applicable to event cameras. By generating feature point descriptors from the timestamp information of the event camera, the invention better exploits the advantages of the event camera, makes the descriptor information richer, and makes the matching results more accurate.
Description
Technical field
The invention belongs to the field of image processing, and relates to generating feature point descriptors based on an event camera and performing feature point matching.
Background art
Machine vision has long depended on frame-based cameras. These traditional cameras capture entire frames at a fixed exposure time and frame rate, and store and process image information in matrix form. This simple storage format can be unsatisfactory for image processing and feature extraction, largely because grayscale-based images contain a great deal of redundant information. Pixel intensity information is very useful for human recognition and retrieval, yet it increases the difficulty of machine-based image processing. Meanwhile, sequential image readout can burden image processing hardware, because large amounts of unnecessary data must be processed before the required features are obtained. Ordinary cameras are also sensitive to illumination changes: in scenes with high-dynamic-range illumination they easily produce underexposed or overexposed images, and under high-speed motion ordinary optical imaging suffers from motion blur or poor image quality. All of these seriously degrade image quality. Ordinary optical camera imaging under high-speed motion is shown in Fig. 1.
Event cameras have received increasing attention in the field of machine vision. An event camera is a new type of visual sensor that mimics the human retina: pixels in the array are triggered by changes in light intensity, and the camera outputs the position, time, and intensity change of each triggered pixel. Its output is therefore not the video frame sequence of a standard camera but a stream of asynchronous events. More specifically, when at time t_j the brightness change at pixel location u_j = (x_j, y_j) reaches a threshold ±c (c > 0), an event e_j = (x_j, y_j, t_j, p_j) is triggered, where the polarity p_j ∈ {+1, -1}: a positive sign indicates that the brightness increased and a negative sign that it decreased. The output of the event camera is thus an asynchronous event stream; since only incremental changes are recorded, the absolute brightness of the scene is no longer directly observable. Such a camera is not limited by a traditional exposure time or frame rate, and its time resolution can reach the microsecond level. It effectively filters background redundancy and concentrates on capturing camera motion information, which saves data transmission bandwidth and reduces storage pressure. With its high dynamic range, low latency, and low power consumption, an event camera can provide reliable visual information during high-speed motion or in scenes characterized by a high dynamic range. Event camera imaging under high-speed motion is shown in Fig. 2.
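The event tuple e_j = (x_j, y_j, t_j, p_j) described above can be sketched as a minimal data structure; the field names below are illustrative, not taken from any particular event-camera SDK:

```python
# Minimal sketch of an event e_j = (x_j, y_j, t_j, p_j).
# Field names are illustrative, not from a specific event-camera SDK.
from dataclasses import dataclass

@dataclass
class Event:
    x: int      # pixel column of the triggered pixel
    y: int      # pixel row
    t: float    # timestamp (event cameras reach microsecond resolution)
    p: int      # polarity: +1 = brightness increase, -1 = decrease

# The camera output is then a time-ordered, asynchronous stream of such events.
stream = [Event(10, 20, 1e-6, +1), Event(11, 20, 3e-6, -1)]
```

Only increments are recorded, so the absolute scene brightness cannot be recovered from such a stream alone.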
In certain difficult scenes, under high-speed motion or drastic illumination changes, the motion blur or poor imaging produced by ordinary optical images makes feature point detection, description, and matching very difficult. An event camera, however, can capture the real-time state of a high-speed moving object and has a high dynamic range. Because the output of an event camera consists of isolated points, the points presented in an event frame for the same object may be unstable at different moments and under different motion states, so traditional feature point descriptor algorithms may not be stably applicable. Moreover, compared with ordinary optical images, the event points output by the camera additionally carry timestamp information, which traditional descriptor algorithms do not exploit well. Extracting event feature point descriptors from the perspective of timestamp information can therefore better exploit the advantages of event cameras.
Summary of the invention
The invention generates feature point descriptors based on an event camera and matches feature points according to the descriptors. Inspired by the shape context algorithm, a feature point descriptor generation method tailored to the imaging characteristics of event images is proposed, and feature points are matched according to the generated descriptors. The descriptor generation method differs slightly from conventional methods: since an event camera provides the position, time, and polarity of each event point, the invention uses the timestamp information, which better exploits the advantages of the event camera.
The technical solution provided by the invention is a feature point descriptor matching method based on an event camera. The specific steps are as follows:
Step 1: Taking feature point p_i as the center, with R1 the outermost circle radius and R2 the innermost circle radius, establish N concentric circles in the local region at logarithmically spaced radii, i.e., divide the interval (log10(R1), log10(R2)) into N elements by logarithm (i.e., logspace); then divide this region into M equal parts along the circumferential direction to generate a grid of bins, as shown in Fig. 4.
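The grid of step 1 can be sketched as follows, assuming numpy and reading the logspace division as N ring bins between R2 and R1 (i.e., N + 1 log-spaced boundaries); the helper names are mine, not the patent's:

```python
# Sketch of the log-polar grid of step 1: N log-spaced rings between the
# innermost radius R2 and the outermost radius R1, and M angular sectors.
import numpy as np

N, M, R1, R2 = 8, 12, 12.0, 1.0   # parameter values from the embodiment

ring_edges = np.logspace(np.log10(R2), np.log10(R1), N + 1)  # N ring bins
sector_edges = np.linspace(-np.pi, np.pi, M + 1)             # M sectors

def bin_index(dx, dy):
    """(ring, sector) bin of a point at offset (dx, dy) from the feature
    point, or None if it lies outside the local region."""
    r = np.hypot(dx, dy)
    if not (ring_edges[0] <= r <= ring_edges[-1]):
        return None
    ring = min(int(np.searchsorted(ring_edges, r, side="right")) - 1, N - 1)
    sector = min(int(np.searchsorted(sector_edges, np.arctan2(dy, dx),
                                     side="right")) - 1, M - 1)
    return ring, sector
```

The M*N positions of the descriptor then correspond to all (ring, sector) pairs.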
Step 2: Compare the timestamp t_pi of feature point p_i with the timestamp t_qi of each event point in the local region. If t_pi < t_qi, the point is set to "1"; if t_pi > t_qi, the point is set to "0".
Step 3: Count the number of "1"s in each bin of the local region of feature point p_i, i.e., the statistical distribution histogram h_i(k) of these points over the bins, which is called the descriptor of feature point p_i. The descriptor size is M*N.
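Steps 2 and 3 can be sketched together as below, assuming numpy, local events given as (x, y, t) tuples, and a bin_index-style helper that maps an offset to a (ring, sector) bin; all names are illustrative:

```python
# Sketch of steps 2-3: mark each local event point "1" if its timestamp is
# later than the feature point's (t_pi < t_qi), then histogram the "1"s
# over the N*M bins to form the descriptor h_i(k).
import numpy as np

def descriptor(feat, events, bin_index, N=8, M=12):
    """feat = (x, y, t) of feature point p_i; events = (x, y, t) tuples in
    the local region. Returns the flattened histogram of size M*N."""
    fx, fy, ft = feat
    h = np.zeros((N, M), dtype=int)
    for (x, y, t) in events:
        b = bin_index(x - fx, y - fy)
        if b is not None and ft < t:   # the "1" encoding of step 2
            h[b] += 1                  # step 3: count "1"s per bin
    return h.ravel()
```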
Step 4: Traverse all feature points to obtain the corresponding descriptor of each feature point.
Step 5: According to h_i(k), compute the similarity between every two feature points, i.e., the cost value; then use the Hungarian algorithm to find the point correspondence that minimizes the overall cost value, thereby obtaining the feature point matching relationship. The formula for computing the cost value is:

C(p_i, q_j) = (1/2) * Σ_k [h_i(k) - h_j(k)]^2 / [h_i(k) + h_j(k)]

where k is the k-th position of the descriptor, and h_i(k) and h_j(k) denote the descriptors of feature points p_i and q_j, respectively.
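Step 5 can be sketched as follows. The brute-force assignment below stands in for the Hungarian algorithm and is viable only for small point sets; for real data one would use an actual Hungarian implementation such as scipy.optimize.linear_sum_assignment:

```python
# Sketch of step 5: chi-square cost between descriptors, then the one-to-one
# assignment minimising the total cost. Brute force replaces the Hungarian
# algorithm here, purely for illustration on tiny inputs.
from itertools import permutations

def chi2_cost(hi, hj, eps=1e-12):
    """0.5 * sum_k (h_i(k) - h_j(k))^2 / (h_i(k) + h_j(k));
    eps guards against bins that are empty in both histograms."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(hi, hj))

def match(descs_a, descs_b):
    """Index pairs (i, j) minimising the total chi-square cost."""
    best, best_cost = [], float("inf")
    for perm in permutations(range(len(descs_b)), len(descs_a)):
        c = sum(chi2_cost(descs_a[i], descs_b[j]) for i, j in enumerate(perm))
        if c < best_cost:
            best, best_cost = list(enumerate(perm)), c
    return best
```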
Step 6: Remove mismatches using the vector consistency method to obtain the best matches.
Compared with the prior art, the invention has the following advantage: the event points output by an event camera additionally carry timestamp information, which the descriptor extraction in existing matching algorithms does not use. Generating feature point descriptors from the timestamp information of the event camera better exploits the advantages of the event camera, makes the descriptor information richer, and makes the matching results more accurate.
Brief description of the drawings
Fig. 1 shows ordinary optical camera imaging.
Fig. 2 shows event camera imaging.
Fig. 3 is a flow chart of the descriptor generation and feature point matching method.
Fig. 4 is a schematic diagram of the feature point descriptor grid division.
Fig. 5 shows the feature point matching results.
Specific embodiment
The invention is mainly based on an event camera and processes event stream data. In view of the characteristics of the event camera, a method for extracting feature point descriptors is proposed, and feature points are matched according to the descriptors. To make the objects, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to an example. It should be understood that the specific example described here is used only to explain the invention and is not intended to limit it.
As shown in Fig. 3, an embodiment of the invention provides a feature point descriptor matching method based on an event camera. The specific implementation steps are as follows:
Step 1: Set N = 8, M = 12, R1 = 12, R2 = 1. Taking feature point p_i as the center, with R1 the outermost circle radius and R2 the innermost circle radius, establish N concentric circles in the local region at logarithmically spaced radii, and divide this region into M equal parts along the circumferential direction to generate the grid.
Step 2: Compare the timestamp t_pi of feature point p_i with the timestamp t_qi of each event point in the local region. If t_pi < t_qi, the point is set to "1"; if t_pi > t_qi, the point is set to "0". This encodes the points within the grid.
Step 3: Count the number of "1"s in each bin of the local region of feature point p_i, i.e., the statistical distribution histogram h_i(k) of these points over the bins, which is called the descriptor of feature point p_i.
Step 4: Traverse all feature points to obtain the corresponding descriptor of each feature point.
Step 5: Compute the cost value between every two feature points using the chi-square (χ²) test statistic, which measures the degree of deviation between the observed values of a statistical sample and the theoretical expected values. Then use the Hungarian algorithm to find the point correspondence that minimizes the overall cost value, thereby obtaining the feature point matching relationship.
Step 6: Remove mismatches using the vector consistency method to obtain the best matches.
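The patent does not spell out the vector consistency method of step 6; the sketch below is one plausible interpretation, under my own assumption that a match is kept only if its displacement vector lies close to the median displacement of all matches:

```python
# Hypothetical sketch of step 6: discard matches whose displacement vector
# deviates from the median displacement by more than a tolerance. This is an
# assumed reading of "vector consistency", not the patent's exact method.
import math
from statistics import median

def filter_matches(pts_a, pts_b, matches, tol=10.0):
    """pts_a, pts_b: (x, y) feature point lists; matches: (i, j) index pairs."""
    dx = [pts_b[j][0] - pts_a[i][0] for i, j in matches]
    dy = [pts_b[j][1] - pts_a[i][1] for i, j in matches]
    mx, my = median(dx), median(dy)
    return [m for m, ex, ey in zip(matches, dx, dy)
            if math.hypot(ex - mx, ey - my) <= tol]
```

This reading is consistent with the qualitative observation in Fig. 5 that correct matching lines run largely in the same direction.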
As shown in Fig. 5, the initial matching contains 1642 pairs; after mismatches are removed with the vector consistency method, 1101 matched pairs remain, a ratio of 0.6705. As can be seen from Fig. 5, the method of the invention produces many matched feature points, the matching lines between the two images run largely in the same direction, and the matching results are accurate.
The specific embodiment described here is only an example of the spirit of the invention. Those skilled in the art to which the invention belongs can make various modifications or additions to the described embodiment, or substitute it in a similar manner, without departing from the spirit of the invention or exceeding the scope of the appended claims.
Claims (2)
1. A feature point matching method based on an event camera, characterized by comprising the following steps:
Step 1: taking feature point p_i as the center, with R1 the outermost circle radius and R2 the innermost circle radius, establish N concentric circles in the local region at logarithmically spaced radii, then divide this region into M equal parts along the circumferential direction to generate a grid;
Step 2: compare the timestamp t_pi of feature point p_i with the timestamp t_qi of each event point in the local region; if t_pi < t_qi, the point is set to "1"; if t_pi > t_qi, the point is set to "0";
Step 3: count the number of "1"s in each bin of the local region of feature point p_i, i.e., the statistical distribution histogram h_i(k) of these points over the bins, called the descriptor of feature point p_i;
Step 4: traverse all feature points to obtain the corresponding descriptor of each feature point;
Step 5: according to h_i(k), compute the similarity, i.e., the cost value, between every two feature points, then use the Hungarian algorithm to find the point correspondence that minimizes the overall cost value, thereby obtaining the feature point matching relationship;
Step 6: remove mismatches using the vector consistency method to obtain the best matches.
2. The feature point matching method based on an event camera according to claim 1, characterized in that the formula for computing the cost value in step 5 is

C(p_i, q_j) = (1/2) * Σ_k [h_i(k) - h_j(k)]^2 / [h_i(k) + h_j(k)]

where k is the k-th position of the descriptor, and h_i(k) and h_j(k) denote the descriptors of feature points p_i and q_j, respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910551377.9A CN110414558B (en) | 2019-06-24 | 2019-06-24 | Feature point matching method based on event camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110414558A true CN110414558A (en) | 2019-11-05 |
CN110414558B CN110414558B (en) | 2021-07-20 |