CN111696044B - Large-scene dynamic visual observation method and device - Google Patents
- Publication number: CN111696044B (application CN202010548500.4A)
- Authority: CN (China)
- Prior art keywords: event, view, feature point, stream data, image
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T3/4038 — Geometric image transformations in the plane of the image; scaling; image mosaicing, e.g. composing plane images from plane sub-images
- G06F17/16 — Digital computing specially adapted for specific functions; complex mathematical operations; matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/33 — Image analysis; determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G06V10/44 — Image or video recognition; local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis
- G06T2207/10016 — Indexing scheme for image analysis; image acquisition modality: video; image sequence
- G06T2207/20221 — Indexing scheme for image analysis; special algorithmic details: image fusion; image merging
Abstract
The invention discloses a large-scene dynamic visual observation method comprising the following steps: a multi-view event data acquisition step, in which multi-view event stream data are acquired with an event camera array; a multi-view event image conversion step, in which the event stream data are converted into event count images; an event image feature point detection step, in which feature point detection and feature vector extraction are performed on the event count images; an event image feature point matching step, in which the matching relation between feature points is computed; a spatial coordinate transformation matrix calculation step, in which the spatial coordinate transformation matrix is computed from the feature point matches; and a multi-view event stream data fusion step, in which spatial coordinate transformation is applied to the multi-view event stream data and the transformed data are stitched to obtain large-scene dynamic observation event stream data. Building on scene dynamic visual observation with a single event camera, the invention achieves large-scene dynamic observation over a wider field of view.
Description
Technical Field
The invention relates to the fields of computer vision and computational photography, and in particular to a large-scene dynamic visual observation method and device.
Background
The event camera is a bio-inspired sensor whose working principle differs greatly from that of a traditional camera. Unlike a conventional camera, which captures the absolute light intensity of a scene at a fixed frame rate, an event camera outputs data if and only if the scene light intensity changes; this output is referred to as an event stream. Compared with traditional cameras, event cameras offer a high dynamic range, high temporal resolution, and freedom from motion blur.
As a new type of vision sensor, the event camera outputs data in a form completely different from that of a conventional camera, so the various algorithms developed for conventional cameras and images cannot be applied directly. A conventional camera acquires the light intensity values of a scene at a fixed rate (the frame rate) and outputs picture data at that rate. An event camera has no notion of frame rate: every pixel works asynchronously and outputs an event whenever it detects a change in light intensity. Each event is a quadruple (x, y, t, p) consisting of the pixel coordinates (x, y), a timestamp t, and an event polarity p (where p = -1 indicates that the light intensity at the pixel decreased, and p = +1 indicates that it increased). The events output by all pixels together form an event list, which is the event stream data output by the camera. An example of video data of 20 seconds obtained by a conventional camera, together with the corresponding event stream data output by an event camera, is shown in fig. 1. Algorithms and methods from conventional camera and image processing therefore cannot be applied directly to event cameras and event data.
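The quadruple representation described above can be sketched in a few lines of Python; the coordinates, timestamps and polarities below are made-up illustrative values, not data from the patent:

```python
import numpy as np

# Each event is a quadruple (x, y, t, p): pixel coordinates, timestamp,
# and polarity (+1 = intensity increase, -1 = intensity decrease).
events = np.array([
    (12, 7, 0.001, +1),
    (12, 7, 0.004, -1),
    (30, 2, 0.005, +1),
], dtype=[("x", int), ("y", int), ("t", float), ("p", int)])

# Pixels fire asynchronously; sorting by timestamp recovers temporal order.
events = np.sort(events, order="t")

n_positive = int(np.sum(events["p"] == +1))  # light-intensity increases
n_negative = int(np.sum(events["p"] == -1))  # light-intensity decreases
print(n_positive, n_negative)  # prints: 2 1
```

A structured array like this makes the "no frame rate" point concrete: the data is just a time-ordered list of per-pixel change events, not a stack of frames.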
A single traditional camera has a small field of view, but a multi-view camera array can acquire multi-view images, and image registration and stitching techniques then achieve large-field observation. A single event camera likewise has a small field of view; however, because an event camera outputs event stream data, which differs greatly from conventional images, existing image registration and stitching methods cannot be used directly. In other words, a technical method for the registration and fusion of multi-view event stream data is currently lacking.
Disclosure of Invention
In order to solve the current lack of a technical method for the registration and fusion of multi-view event stream data, the invention provides a large-scene dynamic visual observation method and device, which realize the registration and fusion of event stream data acquired by a multi-view event camera array, i.e. the acquisition of event stream data with a larger field of view.
The invention provides a large-scene dynamic visual observation method, characterized by comprising the following steps:
step 1, multi-view event data acquisition, namely acquiring multi-view event stream data, wherein the multi-view event stream data are collected by a synchronized event camera array;
step 2, multi-view event image conversion, namely processing the multi-view event stream data and converting the event stream data of each view into a two-channel event count image, each pixel of which counts the numbers of positive and negative events occurring at that pixel;
step 3, event image feature point detection, namely performing feature point detection and feature description on the multi-view event count images with the Speeded-Up Robust Features (SURF) algorithm, and obtaining the feature point set of the event count image of each view together with the feature description of each feature point;
step 4, event image feature point matching, namely taking the event stream data acquired by the event camera of the first view as the reference view and the remaining views as non-reference views, and matching the feature point set of each non-reference view against the reference-view feature point set in turn to obtain the matching relations;
step 5, spatial coordinate transformation matrix calculation, namely calculating the spatial coordinate transformation matrix of each non-reference view relative to the reference view from the coordinates of the mutually matched feature points;
step 6, multi-view event stream data fusion, namely applying the spatial coordinate transformation matrices to the multi-view event stream data, stitching the transformed event stream data, and obtaining the large-scene dynamic observation event stream data.
The invention also provides a large-scene dynamic visual observation device, comprising: a multi-view event data acquisition unit, a multi-view event image conversion unit, an event image feature point detection unit, an event image feature point matching unit, a spatial coordinate transformation matrix calculation unit and a multi-view event stream data fusion unit, characterized in that:
the multi-view event data acquisition unit acquires multi-view event stream data, wherein the multi-view event stream data are collected by a synchronized event camera array;
the multi-view event image conversion unit processes the multi-view event stream data, converting the event stream data of each view into a two-channel event count image, each pixel of which counts the numbers of positive and negative events occurring at that pixel;
the event image feature point detection unit performs feature point detection and feature description on the multi-view event count images with the Speeded-Up Robust Features (SURF) algorithm, and obtains the feature point set of the event count image of each view together with the feature description of each feature point;
the event image feature point matching unit takes the event stream data acquired by the event camera of the first view as the reference view and the remaining views as non-reference views, and matches the feature point set of each non-reference view against the reference-view feature point set in turn to obtain the matching relations;
the spatial coordinate transformation matrix calculation unit calculates the spatial coordinate transformation matrix of each non-reference view relative to the reference view from the coordinates of the mutually matched feature points;
and the multi-view event stream data fusion unit applies the spatial coordinate transformation matrices to the multi-view event stream data, stitches the transformed event stream data, and obtains the large-scene dynamic observation event stream data.
The invention has the following beneficial effects: it solves the current lack of a technical method for the registration and fusion of multi-view event stream data. Starting from the small-field event stream data acquired by a multi-view event camera array, the large-scene dynamic visual observation method realizes the registration and fusion of multi-view event stream data, i.e. dynamic scene observation over a larger field of view.
Drawings
Fig. 1 is a schematic diagram of 20 seconds of video data obtained by a conventional camera and the corresponding event stream data output by an event camera.
FIG. 2 is a schematic flow chart of a large scene dynamic visual observation method of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail with reference to fig. 2.
As shown in fig. 2, the embodiment provides a large scene dynamic visual observation method for realizing registration and fusion of event stream data acquired by a multi-view event camera, including the following steps:
Step 1. In this step, an event camera array is built from existing small-field-of-view event cameras, and the cameras are triggered synchronously to acquire multi-view event stream data. The event camera is a new type of bio-inspired camera that outputs data if and only if the scene light intensity changes; this output is referred to as an event stream. The event stream data output by an event camera can be represented in the form of formula (1):

ε = { e_i } = { (x_i, y_i, t_i, p_i) }, i = 1, 2, …, M    (1)
where ε is the event stream set, i indexes the events in the stream, x_i is the spatial abscissa of the i-th event, y_i its spatial ordinate, t_i its timestamp, and p_i its polarity. p_i = +1 indicates that the light intensity at the pixel increased, and p_i = -1 indicates that it decreased.
Assuming that the event camera array includes N event cameras in total, numbered 1 to N, the multi-view event stream data can be recorded as:
E = { ε_1, ε_2, …, ε_N }    (2)
where ε_i denotes the event stream data output by the i-th event camera.
Step 2, multi-view event image conversion: process the multi-view event stream data and convert the event stream data of each view into a two-channel event count image, each pixel of which counts the numbers of positive and negative events occurring at that pixel.
in this step, each pixel point of the event counting image counts the number of positive and negative events occurring at the pixel point, and a specific conversion formula is as follows:
wherein, I is an event counting image, M is the total event number in the event stream data, and delta is a unit pulse function.
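The two-channel counting rule above can be sketched as follows; the function name and the toy events are illustrative assumptions, not part of the patent:

```python
import numpy as np

def events_to_count_image(events, height, width):
    """Accumulate an event stream into a two-channel count image:
    channel 0 counts positive events, channel 1 counts negative events."""
    image = np.zeros((2, height, width), dtype=np.int32)
    for x, y, _t, p in events:
        channel = 0 if p == +1 else 1
        image[channel, y, x] += 1
    return image

# Toy stream: two positive events at pixel (3, 1), one negative at (0, 0).
events = [(3, 1, 0.01, +1), (3, 1, 0.02, +1), (0, 0, 0.03, -1)]
img = events_to_count_image(events, height=4, width=5)
print(img[0, 1, 3], img[1, 0, 0])  # prints: 2 1
```

The resulting two-channel image is an ordinary dense array, which is what makes conventional feature detectors applicable to event data in the following steps.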
Step 3, event image feature point detection: perform feature point detection and feature description on the multi-view event count images with the Speeded-Up Robust Features (SURF) algorithm, obtaining the feature point set of the event count image of each view and the corresponding feature descriptions.
step 4, matching event image feature points, taking event stream data acquired by an event camera at a first view angle as a reference view angle, taking the other view angles as non-reference view angles, and performing feature matching on the feature point set of each non-reference view angle and the reference view angle feature point set in sequence to acquire a matching relation;
in this step, the feature point matching is obtained by using the following algorithm:
wherein, A and B are respectively a characteristic point set extracted from the event counting image of a reference visual angle a and any non-reference visual angle B, AiRepresents the ith feature point in the feature point set A,a feature vector representing the feature point, BjRepresents the jth feature point in the feature point set B,a feature vector representing the feature point, for the feature point A in the feature point set AiThe matched characteristic points are characteristic vectors B in the characteristic point set Bj。
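The nearest-neighbour matching rule of this step can be illustrated as below. This sketch uses tiny 4-D toy descriptors in place of real 64-D SURF descriptors and implements only the argmin-over-distance rule described above; the names and values are assumptions for illustration:

```python
import numpy as np

def match_features(desc_a, desc_b):
    """For each descriptor in set A, find its nearest neighbour in set B
    by Euclidean distance -- the matching rule described in step 4."""
    # Pairwise distance matrix between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return np.argmin(d, axis=1)  # index into B for each point of A

# Toy descriptors (4-D instead of SURF's 64-D), purely for illustration.
desc_a = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0]])
desc_b = np.array([[0.0, 0.9, 0.1, 0.0],
                   [0.95, 0.0, 0.0, 0.1]])
print(match_features(desc_a, desc_b))  # prints: [1 0]
```

In practice one would typically add a ratio test or cross-check to reject ambiguous matches before estimating the transformation, though the patent text itself states only the nearest-neighbour rule.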
Step 5, spatial coordinate transformation matrix calculation: calculate the spatial coordinate transformation matrix of each non-reference view relative to the reference view from the coordinates of the mutually matched feature points.
In this step, the spatial coordinate transformation matrix of each non-reference view relative to the reference view is calculated from the coordinates of the mutually matched feature points. The spatial coordinate transformation matrix is obtained by the following algorithm:

H_ab = argmin_H Σ_{i=1}^{n} ‖ q(A_i) − H · q(B_i) ‖²

where q(A_i) is the coordinate of the i-th feature point A_i in the set A extracted from the event count image of the reference view a, q(B_i) is the coordinate of the feature point B_i in set B (extracted from the event count image of any non-reference view b) matched with A_i, n is the total number of matched feature point pairs between sets A and B, and H_ab is the spatial coordinate transformation matrix of the non-reference view b relative to the reference view a.
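The patent does not name a particular solver for this least-squares problem. One standard choice, shown here purely as an assumed sketch, is the direct linear transform (DLT) solved via SVD:

```python
import numpy as np

def estimate_homography(pts_a, pts_b):
    """DLT estimate of the 3x3 matrix H mapping view-b points onto
    view-a points (an assumed solver for step 5; the patent states only
    the least-squares objective, not the algorithm)."""
    rows = []
    for (xa, ya), (xb, yb) in zip(pts_a, pts_b):
        rows.append([xb, yb, 1, 0, 0, 0, -xa * xb, -xa * yb, -xa])
        rows.append([0, 0, 0, xb, yb, 1, -ya * xb, -ya * yb, -ya])
    # The null space of the stacked constraints (last right singular
    # vector) gives H up to scale; normalise so H[2, 2] = 1.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

# Synthetic check: a pure translation by (+5, -2) should be recovered.
pts_b = np.array([[0.0, 0], [10, 0], [0, 10], [10, 10]])
pts_a = pts_b + np.array([5.0, -2.0])
H = estimate_homography(pts_a, pts_b)
print(np.round(H, 3))
```

In practice a robust estimator (e.g. RANSAC wrapped around such a solver) is commonly used so that mismatched feature pairs do not corrupt the estimate.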
Step 6, multi-view event stream data fusion: apply the spatial coordinate transformation matrices to the multi-view event stream data, stitch the transformed event stream data, and obtain the large-scene dynamic observation event stream data.
In this step, the spatial coordinate transformation matrices obtained in step 5 are used to transform the spatial coordinates of each event in the event stream data of the non-reference views; the coordinate-transformed multi-view event stream data are then merged, duplicate events are removed, and the large-scene dynamic observation event stream data are obtained by the following algorithm:

ε = ε_1 ∪ ε′_2 ∪ … ∪ ε′_N
ε′_j = { (x̂_i^j, ŷ_i^j, t_i^j, p_i^j) }, (x̂_i^j, ŷ_i^j, 1)ᵀ ∝ H_1j · (x_i^j, y_i^j, 1)ᵀ

where ε is the stitched large-scene dynamic observation event stream, ε_1 is the event stream data collected by the event camera of the reference view, ε′_j is the event stream data obtained by applying the spatial coordinate transformation matrix to the event stream data collected by the event camera of the j-th view, N is the number of event cameras in the multi-view event camera array, H_1j is the spatial coordinate transformation matrix of the j-th non-reference view relative to the reference view, x_i^j and y_i^j are the original spatial abscissa and ordinate of the i-th event in the data collected by the camera of the j-th view, t_i^j is the timestamp at which that event was triggered, p_i^j is its polarity, and x̂_i^j and ŷ_i^j are the abscissa and ordinate obtained after the spatial coordinate transformation matrix operation.
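The transform-merge-deduplicate rule of step 6 can be sketched as follows; the function name, the toy events, and the translation-only transformation matrix are illustrative assumptions:

```python
import numpy as np

def fuse_event_streams(ref_events, other_streams, homographies):
    """Warp each non-reference stream into the reference frame with its
    spatial coordinate transformation matrix, merge all streams, and
    drop duplicate events -- a sketch of the fusion rule in step 6."""
    fused = set(ref_events)  # reference-view events keep their coordinates
    for events, H in zip(other_streams, homographies):
        for x, y, t, p in events:
            u, v, w = H @ np.array([x, y, 1.0])
            # Homogeneous normalisation, then rounding to integer pixels;
            # the set automatically removes exact duplicates.
            fused.add((int(round(u / w)), int(round(v / w)), t, p))
    return sorted(fused, key=lambda e: e[2])  # time-ordered event list

ref_stream = [(0, 0, 0.01, 1)]
other_stream = [(2, 3, 0.02, -1)]
# Hypothetical transformation: a pure shift of (+5, -2) pixels.
H = np.array([[1.0, 0, 5], [0, 1.0, -2], [0, 0, 1.0]])
print(fuse_event_streams(ref_stream, [other_stream], [H]))
# prints: [(0, 0, 0.01, 1), (7, 1, 0.02, -1)]
```

The output is again an event stream of quadruples, now covering the union of the individual fields of view, which is exactly the "large-scene dynamic observation event stream data" the method produces.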
The embodiment also provides a large scene dynamic visual observation device, which includes: the system comprises a multi-view event data acquisition unit, a multi-view event image conversion unit, an event image characteristic point detection unit, an event image characteristic point matching unit, a spatial coordinate transformation matrix calculation unit and a multi-view event stream data fusion unit;
a multi-view event data acquisition unit that acquires multi-view event stream data, wherein the multi-view event stream data is acquired by a synchronized event camera array;
the multi-view event image conversion unit is used for processing multi-view event stream data, each multi-view event stream data is converted into an event counting image of two channels, and each pixel point of the event counting image counts the number of positive and negative events occurring at the pixel point;
the event image feature point detection unit performs feature point detection and feature description on the multi-view event count images with the Speeded-Up Robust Features (SURF) algorithm, and obtains the feature point set of the event count image of each view together with the feature description of each feature point;
the event image feature point matching unit is used for taking event stream data acquired by the event camera at a first visual angle as a reference visual angle and taking the other visual angles as non-reference visual angles, and sequentially performing feature matching on a feature point set of each non-reference visual angle and a reference visual angle feature point set to acquire a matching relation;
the spatial coordinate transformation matrix calculation unit is used for calculating a spatial coordinate transformation matrix of each non-reference visual angle relative to the reference visual angle according to the feature point coordinates matched with each other;
and the multi-view event stream data fusion unit is used for carrying out space coordinate transformation on the multi-view event stream data by utilizing the space coordinate transformation matrix, splicing the transformed event stream data and acquiring the large-scene dynamic observation event stream data.
The large-scene dynamic visual observation device described in the above embodiment is used to execute the large-scene dynamic visual observation method; each of the algorithms described above runs in the corresponding module of the device.
Although the present invention has been disclosed in detail with reference to the accompanying drawings, such description is merely illustrative and does not restrict the application of the invention. The scope of the invention is defined by the appended claims and may include various modifications, adaptations and equivalents without departing from its scope and spirit.
Claims (10)
1. A large scene dynamic visual observation method is characterized by comprising the following steps:
step 1, acquiring multi-view event data, and acquiring multi-view event stream data, wherein the multi-view event stream data is acquired by a synchronous event camera array;
step 2, converting the multi-view event images, processing multi-view event stream data, converting the event stream data of each view into an event counting image of two channels, and counting the number of positive and negative events occurring at each pixel point of the event counting image;
step 3, event image feature point detection, namely performing feature point detection and feature description on the multi-view event count images with the Speeded-Up Robust Features (SURF) algorithm, and obtaining the feature point set of the event count image of each view and the feature description of each feature point;
step 4, matching event image feature points, namely taking event stream data acquired by an event camera at a first view angle as a reference view angle and taking the other view angles as non-reference view angles, and sequentially performing feature matching on a feature point set of each non-reference view angle and a reference view angle feature point set to acquire a matching relation;
step 5, calculating a space coordinate transformation matrix, namely calculating the space coordinate transformation matrix of each non-reference visual angle relative to the reference visual angle according to the mutually matched feature point coordinates;
and 6, fusing multi-view event stream data, performing space coordinate transformation on the multi-view event stream data by using the space coordinate transformation matrix, splicing the transformed event stream data, and acquiring large-scene dynamic observation event stream data.
2. The large-scene dynamic visual observation method of claim 1, wherein in step 2 the specific conversion formula for the event count image is:

I(x, y, c) = Σ_{i=1}^{N} δ(x − x_i) δ(y − y_i) δ(c − p_i), c ∈ {+1, −1}

where I is the event count image, x_i is the spatial abscissa of the i-th event, y_i its spatial ordinate, p_i its polarity, N is the total number of events in the event stream data, and δ is the unit impulse function.
3. The large-scene dynamic visual observation method of claim 1, wherein in step 4 the feature point matching relation is obtained by the following algorithm:

B_j = argmin_{B_k ∈ B} ‖ f(A_i) − f(B_k) ‖

where A and B are the feature point sets extracted from the event count images of the reference view a and any non-reference view b, respectively; A_i denotes the i-th feature point in set A and f(A_i) its feature vector; B_j denotes the j-th feature point in set B and f(B_j) its feature vector; for feature point A_i in set A, the matched feature point in set B is B_j.
4. The large-scene dynamic visual observation method of claim 1, wherein in step 5 the spatial coordinate transformation matrix is obtained by the following algorithm:

H_ab = argmin_H Σ_{i=1}^{n} ‖ q(A_i) − H · q(B_i) ‖²

where q(A_i) is the coordinate of the i-th feature point A_i in the set A extracted from the event count image of the reference view a, q(B_i) is the coordinate of the feature point B_i in set B (extracted from the event count image of any non-reference view b) matched with A_i, n is the total number of matched feature point pairs between sets A and B, and H_ab is the spatial coordinate transformation matrix of the non-reference view b relative to the reference view a.
5. The large-scene dynamic visual observation method of claim 1, wherein in step 6 the large-scene dynamic observation event stream data are obtained by the following algorithm:

ε = ε_1 ∪ ε′_2 ∪ … ∪ ε′_N
ε′_j = { (x̂_i^j, ŷ_i^j, t_i^j, p_i^j) }, (x̂_i^j, ŷ_i^j, 1)ᵀ ∝ H_1j · (x_i^j, y_i^j, 1)ᵀ

where ε is the stitched large-scene dynamic observation event stream, ε_1 is the event stream data collected by the event camera of the reference view, ε′_j is the event stream data obtained by applying the spatial coordinate transformation matrix to the event stream data collected by the event camera of the j-th view, N is the number of event cameras in the multi-view event camera array, H_1j is the spatial coordinate transformation matrix of the j-th non-reference view relative to the reference view, x_i^j and y_i^j are the original spatial abscissa and ordinate of the i-th event in the event data collected by the camera of the j-th view, t_i^j is the timestamp at which that event was triggered, p_i^j is its polarity, and x̂_i^j and ŷ_i^j are the abscissa and ordinate obtained after the spatial coordinate transformation matrix operation.
6. A large scene dynamic visual observation apparatus comprising: the system comprises a multi-view event data acquisition unit, a multi-view event image conversion unit, an event image characteristic point detection unit, an event image characteristic point matching unit, a spatial coordinate transformation matrix calculation unit and a multi-view event stream data fusion unit; the method is characterized in that:
a multi-view event data acquisition unit that acquires multi-view event stream data, wherein the multi-view event stream data is acquired by a synchronized event camera array;
the multi-view event image conversion unit is used for processing multi-view event stream data, converting the multi-view event stream data into a two-channel event counting image, and counting the number of positive and negative events occurring at each pixel point by each pixel point of the event counting image;
the event image feature point detection unit is used for performing feature point detection and feature description on the multi-view event count images with the Speeded-Up Robust Features (SURF) algorithm and obtaining the feature point set of the event count image of each view and the feature description of each feature point;
the event image feature point matching unit is used for taking event stream data acquired by the event camera at a first visual angle as a reference visual angle and taking the other visual angles as non-reference visual angles, and sequentially performing feature matching on a feature point set of each non-reference visual angle and a reference visual angle feature point set to acquire a matching relation;
the spatial coordinate transformation matrix calculation unit is used for calculating a spatial coordinate transformation matrix of each non-reference visual angle relative to the reference visual angle according to the feature point coordinates matched with each other;
and the multi-view event stream data fusion unit is used for carrying out space coordinate transformation on the multi-view event stream data by utilizing the space coordinate transformation matrix, splicing the transformed event stream data and acquiring the large-scene dynamic observation event stream data.
7. The large-scene dynamic visual observation device of claim 6, wherein the event count image is obtained with the following algorithm:

I(x, y, c) = Σ_{i=1}^{N} δ(x − x_i) δ(y − y_i) δ(c − p_i), c ∈ {+1, −1}

where I is the event count image, x_i is the spatial abscissa of the i-th event, y_i its spatial ordinate, p_i its polarity, N is the total number of events in the event stream data, and δ is the unit impulse function.
8. The large-scene dynamic visual observation device of claim 6, wherein the feature point matching relation is obtained with the following algorithm:

B_j = argmin_{B_k ∈ B} ‖ f(A_i) − f(B_k) ‖

where A and B are the feature point sets extracted from the event count images of the reference view a and any non-reference view b, respectively; A_i denotes the i-th feature point in set A and f(A_i) its feature vector; B_j denotes the j-th feature point in set B and f(B_j) its feature vector; for feature point A_i in set A, the matched feature point in set B is B_j.
9. The large-scene dynamic visual observation apparatus of claim 6, wherein the spatial coordinate transformation matrix is obtained using the following algorithm:

H_ab = argmin_H Σ_{i=1}^{n} ‖P_i^a − H · P_i^b‖²

wherein P_i^a is the coordinate of the i-th feature point A_i in feature point set A extracted from the event count image of the reference view a, P_i^b is the coordinate of the feature point B_i in feature point set B extracted from the event count image of any non-reference view b that is matched with A_i, n is the total number of matched feature point pairs between feature point set A and feature point set B, and H_ab is the spatial coordinate transformation matrix of the non-reference view b relative to the reference view a.
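Fitting a transformation to matched coordinates as in claim 9 can be sketched with the standard Direct Linear Transform (DLT) for a homography — an assumption, since the patent does not name its solver. The SVD yields the h minimising ‖Ah‖ subject to ‖h‖ = 1:

```python
import numpy as np

def estimate_homography(pts_a, pts_b):
    """DLT: find H (3x3, up to scale) such that each matched pair
    satisfies [xa, ya, 1]^T ~ H @ [xb, yb, 1]^T. Needs n >= 4 pairs."""
    rows = []
    for (xa, ya), (xb, yb) in zip(pts_a, pts_b):
        # Each correspondence contributes two linear constraints on h.
        rows.append([xb, yb, 1, 0, 0, 0, -xa * xb, -xa * yb, -xa])
        rows.append([0, 0, 0, xb, yb, 1, -ya * xb, -ya * yb, -ya])
    # Least-squares null-space solution: right singular vector of the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # normalise so H[2, 2] = 1

# Four matched corners related by a pure translation of (+2, +3)
pts_b = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pts_a = pts_b + np.array([2.0, 3.0])
H = estimate_homography(pts_a, pts_b)
```

With noisy real matches one would typically wrap this in RANSAC to reject outlier pairs before the least-squares fit.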
10. The large-scene dynamic visual observation apparatus of claim 6, wherein the large-scene dynamic observation event stream data is obtained using the following algorithm:

ε = ε_1 ∪ ε′_2 ∪ … ∪ ε′_N

(x̂_i^j, ŷ_i^j, 1)^T = H_1j · (x_i^j, y_i^j, 1)^T

wherein ε is the spliced large-scene dynamic observation event stream, ε_1 is the event stream data collected by the event camera of the reference view, ε′_j is the event stream data obtained by applying the spatial coordinate transformation matrix operation to the event stream data collected by the event camera of the j-th view, N is the number of event cameras in the multi-view event camera array, H_1j is the spatial coordinate transformation matrix of the j-th non-reference view relative to the reference view, x_i^j is the raw spatial abscissa of the i-th event in the event data collected by the event camera of the j-th view, y_i^j is the raw spatial ordinate of that event, t_i^j is the timestamp at which that event was triggered, p_i^j is the polarity of that event, and x̂_i^j and ŷ_i^j are the abscissa and ordinate of that event after the spatial coordinate transformation matrix operation.
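The splicing of claim 10 — warp every non-reference stream by its H_1j, then take the union with the untouched reference stream — might look like the following sketch. The (x, y, t, p) event layout and the function name are assumptions:

```python
import numpy as np

def fuse_event_streams(streams, homographies):
    """streams[0] is the reference-view stream epsilon_1; streams[j]
    (j >= 1) is warped by homographies[j - 1] (its H_1j) before the
    union is taken. Events are (x, y, t, p) tuples."""
    fused = list(streams[0])                        # epsilon_1 stays untouched
    for stream, H in zip(streams[1:], homographies):
        for x, y, t, p in stream:
            xh, yh, w = H @ np.array([x, y, 1.0])   # homogeneous warp
            fused.append((xh / w, yh / w, t, p))    # timestamp and polarity kept
    fused.sort(key=lambda e: e[2])                  # keep time order after the union
    return fused

ref = [(0, 0, 0.0, 1)]
other = [(1, 1, 0.5, -1)]
H_12 = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
fused = fuse_event_streams([ref, other], [H_12])
```

Only the spatial coordinates are transformed; timestamps and polarities pass through unchanged, which is what makes a simple set union a valid splice once all views share the reference coordinate frame.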
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010548500.4A CN111696044B (en) | 2020-06-16 | 2020-06-16 | Large-scene dynamic visual observation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111696044A CN111696044A (en) | 2020-09-22 |
CN111696044B true CN111696044B (en) | 2022-06-10 |
Family
ID=72481264
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010548500.4A Active CN111696044B (en) | 2020-06-16 | 2020-06-16 | Large-scene dynamic visual observation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111696044B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112800860B (en) * | 2021-01-08 | 2023-10-17 | 中电海康集团有限公司 | High-speed object scattering detection method and system with coordination of event camera and visual camera |
CN112966556B (en) * | 2021-02-02 | 2022-06-10 | 豪威芯仑传感器(上海)有限公司 | Moving object detection method and system |
CN113220251B (en) * | 2021-05-18 | 2024-04-09 | 北京达佳互联信息技术有限公司 | Object display method, device, electronic equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107578376A (en) * | 2017-08-29 | 2018-01-12 | 北京邮电大学 | The fork division of distinguished point based cluster four and the image split-joint method of local transformation matrix |
CN109829853A (en) * | 2019-01-18 | 2019-05-31 | 电子科技大学 | A kind of unmanned plane image split-joint method |
WO2019221580A1 (en) * | 2018-05-18 | 2019-11-21 | Samsung Electronics Co., Ltd. | Cmos-assisted inside-out dynamic vision sensor tracking for low power mobile platforms |
CN110727700A (en) * | 2019-10-22 | 2020-01-24 | 中信银行股份有限公司 | Method and system for integrating multi-source streaming data into transaction type streaming data |
CN111126125A (en) * | 2019-10-15 | 2020-05-08 | 平安科技(深圳)有限公司 | Method, device and equipment for extracting target text in certificate and readable storage medium |
- 2020-06-16: CN application CN202010548500.4A filed (patent CN111696044B, status: Active)
Non-Patent Citations (2)
Title |
---|
Ding Hui et al., "Image registration algorithm fusing GMS with VCS+GC-RANSAC", Journal of Computer Applications, 2020, No. 04. * |
Liu Zhaoxia et al., "Application of graph structure to feature point matching in aerial remote sensing images", Computer Engineering and Applications, 2018, No. 01. * |
Also Published As
Publication number | Publication date |
---|---|
CN111696044A (en) | 2020-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111696044B (en) | Large-scene dynamic visual observation method and device | |
US8139823B2 (en) | Method for capturing images comprising a measurement of local motions | |
US8848035B2 (en) | Device for generating three dimensional surface models of moving objects | |
US9444978B2 (en) | Turbulence-free camera system and related method of image enhancement | |
CN112396562A (en) | Disparity map enhancement method based on RGB and DVS image fusion in high-dynamic-range scene | |
CN111696143B (en) | Event data registration method and system | |
CN103905746A (en) | Method and device for localization and superposition of sub-pixel-level image offset and video device | |
CN106012778B (en) | Digital image acquisition analysis method for express highway pavement strain measurement | |
JP2014116716A (en) | Tracking device | |
CN112470189B (en) | Occlusion cancellation for light field systems | |
EP4050553A1 (en) | Method and device for restoring image obtained from array camera | |
JP2001167276A (en) | Photographing device | |
Zuo et al. | Accurate depth estimation from a hybrid event-RGB stereo setup | |
JP4871315B2 (en) | Compound eye photographing apparatus, control method therefor, and program | |
KR20150050790A (en) | Method and Apparatus for Detecting Region of Interest Image by Using Variable Resolution | |
CN115409707A (en) | Image fusion method and system based on panoramic video stitching | |
US20090244313A1 (en) | Compound eye photographing apparatus, control method therefor, and program | |
CN113808070A (en) | Binocular digital speckle image related parallax measurement method | |
TWI668411B (en) | Position inspection method and computer program product | |
JP2009237652A (en) | Image processing apparatus and method, and program | |
CN116309218A (en) | Real-time synthesis method of polarization degree image | |
CN116866522B (en) | Remote monitoring method | |
WO2023176488A1 (en) | Moving bodies measurement method | |
CN112565630B (en) | Video frame synchronization method for video stitching | |
JP3759712B2 (en) | Camera parameter estimation method, apparatus, program, and recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |